Compare commits

...

122 Commits

Author SHA1 Message Date
terence
38955fd08c Optimize pending attestation processing by adding batching (#15801)
* Optimize pending attestation processing by adding batching

* Update beacon-chain/sync/pending_attestations_queue.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/sync/pending_attestations_queue.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Add root for debug

* Change it to map

* Dont need receiver

* Use two slices

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-08 05:18:13 +00:00
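
As a rough illustration of the batching idea in this commit (hypothetical types and a `processBatch` callback, not the actual Prysm code), grouping pending attestations by block root before processing might look like:

```
package main

import "fmt"

// attestation is a stand-in for the real pending attestation type.
type attestation struct {
	blockRoot [32]byte
	data      string
}

// processPendingAtts groups pending attestations by their target block root
// and hands each group to processBatch in one call instead of one call per
// attestation.
func processPendingAtts(pending []attestation, processBatch func(root [32]byte, atts []attestation) error) error {
	byRoot := make(map[[32]byte][]attestation)
	for _, att := range pending {
		byRoot[att.blockRoot] = append(byRoot[att.blockRoot], att)
	}
	for root, atts := range byRoot {
		if err := processBatch(root, atts); err != nil {
			return fmt.Errorf("process batch for root %#x: %w", root[:], err)
		}
	}
	return nil
}

func main() {
	root := [32]byte{1}
	pending := []attestation{{blockRoot: root, data: "a"}, {blockRoot: root, data: "b"}}
	_ = processPendingAtts(pending, func(r [32]byte, atts []attestation) error {
		fmt.Printf("processing %d attestations for root %#x\n", len(atts), r[:4])
		return nil
	})
}
```
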
kasey
71f05b597f Use NetworkSchedule config to determine max blobs at epoch (#15714)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-08 04:02:05 +00:00
kasey
0d742c6f88 make registerSubscribers idempotent (#15779)
* make registerSubscribers idempotent

* clean up debugging changes

* test fix

* rm unused var

* sobbing noises

* naming feedback and separate test for digestActionDone

* gazelle

* manu's feedback

* refactor to enable immediate sub after init sync

* preston comment re panic causing db corruption risk

* ensure we check that we're 1 epoch past the fork

* manu feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-08 01:09:22 +00:00
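
A minimal sketch of making a register function idempotent by remembering which fork digests were already handled; the `digestActionDone` idea from the bullets is approximated here with a plain map, not the real Prysm implementation:

```
package main

import (
	"fmt"
	"sync"
)

// subscriberRegistry records which fork digests have already had their
// subscriptions registered, so calling register twice is a no-op.
type subscriberRegistry struct {
	mu   sync.Mutex
	done map[[4]byte]bool
}

// registerSubscribers is idempotent: the first call for a digest performs the
// registration, subsequent calls return immediately.
func (r *subscriberRegistry) registerSubscribers(digest [4]byte, register func()) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.done[digest] {
		return // already registered for this digest
	}
	register()
	r.done[digest] = true
}

func main() {
	r := &subscriberRegistry{done: make(map[[4]byte]bool)}
	digest := [4]byte{0xde, 0xad, 0xbe, 0xef}
	count := 0
	r.registerSubscribers(digest, func() { count++ })
	r.registerSubscribers(digest, func() { count++ }) // second call is a no-op
	fmt.Println("registrations:", count)              // 1
}
```
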
Manu NALEPA
06b5409ff0 When looking for peers, skip peers with error instead of aborting the whole function (#15815)
* `findPeersWithSubnets`: If the `filter` function returns an error for a given peer, log an error and skip the peer instead of aborting the whole function.

* `computeIndicesByRootByPeer`: If the loop returns an error for a given peer, log an error and skip the peer instead of aborting the whole function.

* Add changelog.

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-07 19:13:08 +00:00
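
The skip-on-error pattern described above, sketched with placeholder types: log the per-peer failure and `continue` instead of returning from the whole function:

```
package main

import (
	"errors"
	"fmt"
	"log"
)

type peerID string

// findPeersWithSubnets keeps scanning peers even when the filter fails for
// one of them, instead of aborting the whole function on the first error.
func findPeersWithSubnets(peers []peerID, filter func(peerID) (bool, error)) []peerID {
	var selected []peerID
	for _, p := range peers {
		ok, err := filter(p)
		if err != nil {
			// Log and skip this peer; do not abort the whole search.
			log.Printf("could not filter peer %s: %v", p, err)
			continue
		}
		if ok {
			selected = append(selected, p)
		}
	}
	return selected
}

func main() {
	peers := []peerID{"a", "b", "c"}
	out := findPeersWithSubnets(peers, func(p peerID) (bool, error) {
		if p == "b" {
			return false, errors.New("missing ENR entry")
		}
		return true, nil
	})
	fmt.Println(out) // [a c]
}
```
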
Sahil Sojitra
9805e90d73 chore: refactor to use builtin max/min (#15817)
* passed the tests with inbuilt max func

* tested min changes

* fix bazel files

* added changelog
2025-10-07 19:02:47 +00:00
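
For context, Go 1.21 added built-in `max` and `min`, which is what makes helper functions like the removed ones unnecessary:

```
package main

import "fmt"

func main() {
	// Since Go 1.21, max and min are built-ins that work across ordered types,
	// replacing hand-rolled helpers such as mathutil.Max(a, b).
	fmt.Println(max(3, 7))       // 7
	fmt.Println(min(2.5, 1.5))   // 1.5
	fmt.Println(max(1, 4, 2, 9)) // variadic: 9
}
```
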
Manu NALEPA
537f3cb863 StatusV2: Send correct earliest available slot. (#15818)
* `buildStatusFromStream`: Use parent context.

* Status tests: Use `t.Context` everywhere.

* `buildStatusFromStream`: Respond statusV2 only if Fulu is enabled.

Without doing so, the earliest available slot is never defined, and `s.cfg.p2p.EarliestAvailableSlot` will block until the context is canceled.

* Send our real earliest available slot when sending a Status request post Fulu instead of `0`.

* Add changelog.
2025-10-07 18:45:47 +00:00
Manu NALEPA
b45e87abd6 Move some logs to trace (#15816)
* Prettify logs for byRange/byRoot data column sidecar requests.

* Moving byRoot/byRange data column sidecars requests from peers to TRACE level.

* Move "Peer requested blob sidecar by root not found in db" in TRACE.

* Add changelog.

* Fix Kasey's comment.

* Apply Kasey's suggestion.
2025-10-07 18:44:55 +00:00
Preston Van Loon
4c4b12cca7 Clear disconnected peers from connected_libp2p_peers (#15807)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-06 19:59:42 +00:00
james-prysm
aabded250f fixing Origin block root is not set error log (#15808)
* fixing error log

* kasey's suggestion

* kasey's feedback replacing comment
2025-10-06 15:47:27 +00:00
Manu NALEPA
4f9e56fc70 Custody Info: Waits for initialization (#15804)
* Revert "`createLocalNode`: Wait before retrying to retrieve the custody group count if not present. (#15735)"

This reverts commit 4585cdc932.

* Revert "Fix no custody info available at start (#15732)"

This reverts commit 80eba4e6dd.

* Add context to `EarliestAvailableSlot` and `CustodyGroupCount` (no functional change).

* Remove double imports.

* `EarliestAvailableSlot` and `CustodyGroupCount`: Wait for custody info to be initialized.
2025-10-06 10:55:48 +00:00
james-prysm
2a86132994 removing old unused configs and hiding prysm specific configs (#15797)
* removing old unused configs and hiding configs to align with other clients

* gaz and removing unneeded from test

* fixing test

* fixing test
2025-10-03 18:32:44 +00:00
kasey
74c47e25a9 exclude unscheduled forks from the schedule (#15799)
* exclude unscheduled forks from the schedule

* update tests that relied on fulu being far future

* don't mess with the version->epoch mapping

* LastFork cheats the tests by filtering by far_future_epoch

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-03 15:29:27 +00:00
Bastin
28eb1a4c3c add delay before broadcasting lc objects (#15776)
* add delay before broadcasting lc objects

* address comments

* address comments

* reduce test runtime
2025-10-03 13:02:46 +00:00
terence
1f89394727 Return early if there's no block for data column sidecar (#15802) 2025-10-03 02:59:26 +00:00
kasey
bf1095c782 ignore version|digest mismatch if far future (#15798)
* ignore version/digest mismatch if far future

* bonus: this log generates a lot of noise, bump it down to trace

* unit test

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-02 19:57:54 +00:00
Manu NALEPA
b24fe0d23a requestAndSaveMissingDataColumnSidecars: Fix log (#15794) 2025-10-02 16:38:00 +00:00
Bastin
cbe50269de Change LC p2p validation rules (#15783)
* compare incoming lc message with locally computed object

* fix logs

* add comment
2025-10-02 15:04:33 +00:00
Manu NALEPA
4ed2953fcf inclusionProofKey: Include the commitments in the key. (#15795)
* `inclusionProofKey`: Include the commitments in the key.

* Fix Potuz's comment.

* Update beacon-chain/verification/data_column.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Fix Potuz's comment.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-10-02 13:17:11 +00:00
Potuz
915837d059 Process pending on block (#15791)
* Process pending attestations with block insertion

* fix tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Terence's review

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 00:19:04 +00:00
Potuz
26b276660f Avoid unnecessary calls to ExitInformation() (#15764)
* Avoid unnecessary calls to ExitInformation()

ExitInformation runs a loop over the whole validator set. This is only needed
when there are slashings or exits to be processed in a block (we
could cache this, or avoid it entirely post-Electra). This PR
removes the calls to this function on the normal state transition path. h/t to
@terencechain for finding this bug.

In addition, when processing withdrawal requests and registry updates, we
kept recomputing the exit information even though the state is updated at the
same time and the function that updates the state already takes care of
tracking and updating the right exit information. So this PR removes the
calls that compute this exit information in a loop. Note that this bug
has been present since before we had the function `ExitInformation()`, so I
will document it here to help the reviewer.

Our previous behavior was to do this in a loop:

```
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
```

This is a bit problematic since `ExitInformation` loops over the whole validator set to compute the exit information (and the total active balance), and then `InitiateValidatorExit` recomputes the total active balance by looping again over the whole validator set, overwriting the pointer returned by `ExitInformation`.

On the other hand, the function `InitiateValidatorExit` does mutate the state `st` itself, so each call to `ExitInformation(st)` may actually return a different pointer.

The function `ExitInformation` computes the exit information as follows:

```
	err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
		e := val.ExitEpoch()
		if e != farFutureEpoch {
			if e > exitInfo.HighestExitEpoch {
				exitInfo.HighestExitEpoch = e
				exitInfo.Churn = 1
			} else if e == exitInfo.HighestExitEpoch {
				exitInfo.Churn++
			}
		}
		// remainder of the callback and error handling elided
		return nil
	})
```

So it simply increases the churn for each validator whose exit epoch equals the highest exit epoch.

The function `InitiateValidatorExit` mutates this pointer in the following way:

If the state is post-Electra, it disregards this pointer completely, computes the highest exit epoch, and updates the churn unconditionally, so `exitInfo.HighestExitEpoch` will always have the right value and does not even need to be computed beforehand; we could avoid the first loop entirely. If the state is pre-Electra, then the function itself correctly updates the exit info for the next iteration.

* Only care about exits pre-Electra

* Update beacon-chain/core/transition/transition_no_verify_sig.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* Radek's review

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2025-10-02 00:17:39 +00:00
james-prysm
580509f2f4 attempting to improve duties v2 (#15784)
* attempting to improve duties v2

* removing go routine

* changelog

* unnecessary variable

* fixing test

* small optimization: exiting early in the CommitteeAssignments function

* fixing small bug

* fixes performance issues with duties v2

* fixed changelog

* gofmt
2025-10-01 20:40:14 +00:00
Potuz
be144da099 fix test race conditions (#15792)
Fix race condition where svc.verifierWaiter was being set after
svc.Start() was already running, causing nil pointer dereference.
2025-10-01 19:29:20 +00:00
Manu NALEPA
cc2565a422 Update c-kzg-4844 to v2.1.5 (#15708)
* Sort sidecars by index before calling `RecoverCellsAndKZGProofs`.

Reason: Starting at `c-kzg-4844 v2.1.2`, the library needs input to be sorted.

* Update `c-kzg-4844` to `v2.1.3`

* Update `c-kzg-4844` to `v2.1.5`
2025-10-01 14:24:22 +00:00
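
A small sketch of the sorting requirement noted above, using a placeholder sidecar type: inputs must be in ascending column-index order before calling the recovery routine in c-kzg-4844 v2.1.2 and later:

```
package main

import (
	"fmt"
	"sort"
)

type sidecar struct {
	ColumnIndex uint64
	// cell and proof data omitted for brevity
}

func main() {
	sidecars := []sidecar{{ColumnIndex: 42}, {ColumnIndex: 3}, {ColumnIndex: 17}}

	// c-kzg-4844 >= v2.1.2 expects the partial cells to be supplied in
	// ascending index order, so sort before calling the recovery function.
	sort.Slice(sidecars, func(i, j int) bool {
		return sidecars[i].ColumnIndex < sidecars[j].ColumnIndex
	})

	fmt.Println(sidecars) // [{3} {17} {42}]
	// RecoverCellsAndKZGProofs(sidecars...) would be called here.
}
```
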
Manu NALEPA
d86353ea9d P2P service: Remove unused clock. (#15786) 2025-10-01 11:50:30 +00:00
Manu NALEPA
45d6002411 Aggregate logs when broadcasting data column sidecars (#15748)
* Aggregate logs when broadcasting data column sidecars

* Fix James' comment.
2025-10-01 08:21:10 +00:00
Muzry
08c855fd4b fix ProduceSyncCommitteeContribution error on invalid index (#15770)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-01 02:20:55 +00:00
Justin Traglia
7c86b5d737 Add sources for compute_fork_digest to specrefs (#15699)
* Add sources for compute_fork_digest to specrefs

* Delete non-existent exception keys

* Lint specref files

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-01 01:35:41 +00:00
james-prysm
023287f7df properly skipping omitted values for /eth/v1/config/spec (#15777)
* properly skipping omitted values

* simplifying

* Update beacon-chain/rpc/eth/config/handlers.go

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

* Update beacon-chain/rpc/eth/config/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* removing omitempty support based on radek's feedback

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-30 16:03:36 +00:00
Preston Van Loon
1432867c92 Github CI: Update runs-on to ubuntu-4 (#15778) 2025-09-30 03:39:20 +00:00
Preston Van Loon
18efd620dc Add strip=always to release builds (#15774) 2025-09-29 17:00:28 +00:00
james-prysm
6139d58fa5 fixing config string parsing regression (#15773)
* adding string parsing and test

* gaz
2025-09-29 16:15:01 +00:00
Sahil Sojitra
0ea5e2cf9d refactor to use reflect.TypeFor (#15627)
* refactor to use reflect.TypeFor

* added changelog fragment file

* update changelog

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-26 17:26:44 +00:00
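
For reference, `reflect.TypeFor` (Go 1.22) replaces the older pointer-based idiom this refactor targets:

```
package main

import (
	"fmt"
	"reflect"
)

type BeaconConfig struct{}

func main() {
	// Old idiom: construct a nil pointer just to get the element type.
	viaPointer := reflect.TypeOf((*BeaconConfig)(nil)).Elem()

	// New idiom since Go 1.22: reflect.TypeFor is generic and direct.
	direct := reflect.TypeFor[BeaconConfig]()

	fmt.Println(viaPointer == direct) // true
}
```
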
Jun Song
29fe707143 SSZ-QL: Support nested List type (#15725)
* Add nested 2d list cases

* Add elementSize member for listInfo to track each element's byte size

* Fix misleading variable in RunStructTest

* Changelog

* Regen pb file

* Update encoding/ssz/query/list.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Rename elementSize into plural

* Update changelog/syjn99_ssz-ql-nested-list.md

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2025-09-26 14:23:22 +00:00
kasey
d68196822b additional log information around invalid payloads (#15754)
* additional log information around invalid payloads

* fix test with reversed require.ErrorIs args

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-26 02:33:57 +00:00
terence
924fe4de98 Restructure golangci-lint config: explicit opt-in (#15744)
* update golangci-lint configuration to enable basic linters only

* Add back formatter

* feedback

* Add nolint
2025-09-25 17:01:22 +00:00
Preston Van Loon
fe9dd255c7 slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations (#15746)
* slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions.

* Fix CI

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-25 15:46:28 +00:00
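
A hedged sketch of capping a prune run with `context.WithTimeout`, as described above; the bolt transaction details are omitted and names are illustrative:

```
package main

import (
	"context"
	"fmt"
	"time"
)

// pruneAttestationsOnEpoch stands in for the real slasher pruning routine.
func pruneAttestationsOnEpoch(ctx context.Context, epoch uint64) error {
	select {
	case <-time.After(50 * time.Millisecond): // simulated work
		return nil
	case <-ctx.Done():
		return ctx.Err() // abort so the bolt transaction stays small
	}
}

func main() {
	// Cap each prune operation at one minute so a slow run cannot build up
	// a very large database transaction.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	if err := pruneAttestationsOnEpoch(ctx, 123); err != nil {
		fmt.Println("prune aborted:", err)
		return
	}
	fmt.Println("prune finished within the timeout")
}
```
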
Potuz
83d75bcb78 Update quick-go (#15749) 2025-09-25 11:02:10 +00:00
james-prysm
ba5f7361ad flipping if statement check to fix metric (#15743) 2025-09-24 20:43:39 +00:00
Manu NALEPA
8fa956036a Update go.mod to v1.25.1. (#15740)
* Updated go.mod to v1.25.1.

Run `golangci-lint migrate`
GolangCI lint: Add `noinlineerr`.
Run `golangci-lint run --config=.golangci.yml`.
`golangci`: Add `--new`.

* `go.yml`: Added `fetch-depth: 0` to have something working with merge queue according to

https://github.com/golangci/golangci-lint-action/issues/956

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-24 19:57:17 +00:00
james-prysm
58ce1c25f5 fixing error handling of unfound block (#15742) 2025-09-24 19:03:55 +00:00
Potuz
98532a2df3 update spectests to 1.6.0-beta.0 (#15741)
* update spectests to 1.6.0-beta.0

* start fixing ethspecify
2025-09-24 18:13:46 +00:00
Ragnar
08be6fde92 fix: replace fmt.Printf with proper test error handling in web3signer… (#15723)
* fix: replace fmt.Printf with proper test error handling in web3signer test

* Update keymanager_test.go

* Create DeVikingMark_fix-web3signer-test-error-handling.md
2025-09-23 19:22:11 +00:00
Manu NALEPA
4585cdc932 createLocalNode: Wait before retrying to retrieve the custody group count if not present. (#15735) 2025-09-23 15:46:39 +00:00
Preston Van Loon
aa47435c91 Update eth clients pinned deps (#15733)
* Update eth-clients/hoodi

* Update eth-clients/holesky

* Update eth-clients/sepolia

* Changelog fragment

* Remove deprecated and unused eth2-networks dependency.
2025-09-23 14:26:46 +00:00
Manu NALEPA
80eba4e6dd Fix no custody info available at start (#15732)
* Change wrap message to avoid the could not...: could not...: could not... effect.

Reference: https://github.com/uber-go/guide/blob/master/style.md#error-wrapping.

* Log: remove period at the end of the latest sentence.

* Dirty quick fix to ensure that the custody group count is set at P2P service start.

A real fix would involve a channel to implement a proper synchronization scheme.

* Add changelog.
2025-09-23 14:09:29 +00:00
Manu NALEPA
606294e17f Improve logging of data column sidecars (#15728)
* Implement `SortedSliceFromMap`, `PrettySlice`, and `SortedPrettySliceFromMap`.

* Use `SortedPrettySliceFromMap` and `SortedSliceFromMap` when needed.
2025-09-23 02:10:23 +00:00
terence
977e923692 Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets (#15721)
* Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets

* Update commits

* Add fulu.yaml to the presets file path loader

* Add the following placeholder fields:
- CELLS_PER_EXT_BLOB
- FIELD_ELEMENTS_PER_CELL
- FIELD_ELEMENTS_PER_EXT_BLOB
- KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-09-23 02:10:20 +00:00
sashass1315
013b6b1d60 fix: race in PriorityQueue.Pop by checking emptiness under write lock (#15726)
* fix: race in PriorityQueue.Pop by checking emptiness under write lock

* Create sashass1315_fix-priority-queue-pop-lock-race

* Update changelog/sashass1315_fix-priority-queue-pop-lock-race

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Move changelog/sashass1315_fix-priority-queue-pop-lock-race to changelog/sashass1315_fix-priority-queue-pop-lock-race.md

* Fix bullet in changelog

---------

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-09-22 20:12:56 +00:00
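
The race described above comes from checking emptiness before taking the write lock; a minimal illustration of the fixed pattern (not the actual queue code):

```
package main

import (
	"fmt"
	"sync"
)

type priorityQueue struct {
	mu    sync.RWMutex
	items []int
}

// Pop checks emptiness while already holding the write lock. Checking under a
// read lock first and re-locking for the removal would let another goroutine
// drain the queue in between, causing a panic on an empty slice.
func (q *priorityQueue) Pop() (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return 0, false
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

func main() {
	q := &priorityQueue{items: []int{7}}
	v, ok := q.Pop()
	fmt.Println(v, ok) // 7 true
	_, ok = q.Pop()
	fmt.Println(ok) // false: empty, detected safely under the same lock
}
```
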
Preston Van Loon
66ff6f70b8 Update to Go 1.25 (#15641)
* Update rules_go to v0.54.1

* Fix NotEmpty assertion for new protobuf private fields.

* Update rules_go to v0.55.0

* Update protobuf to 28.3

* Update rules_go to v0.57.0

* Update go to v1.25.0

* Changelog fragment

* Update go to v1.25.1

* Update generated protobuf and ssz files
2025-09-22 19:22:32 +00:00
Preston Van Loon
5b1a9fb077 blst: Change from blst from http_archive to go_repository (#15709) 2025-09-22 19:00:57 +00:00
JihoonSong
35bc9b1a0f Support Fulu genesis block (#15652)
* Support Fulu genesis block

* `NewGenesisBlockForState`: Factorize Electra and Fulu, which share the same `BeaconBlock`.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-22 19:00:26 +00:00
Muzry
02dca85251 fix getStateRandao not returning historic RANDAO mix values (#15653)
* fix getStateRandao not returning historic RANDAO mix values

* update: the lower bound statement

* Update changelog/muzry_fix_state_randao.md

* Update changelog/muzry_fix_state_randao.md

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2025-09-21 23:35:03 +00:00
Manu NALEPA
39b2163702 Fix orphaned blocks (#15720)
* `Broadcasted data column sidecar` log: Add `blobCount`.

* `broadcastAndReceiveDataColumns`: Broadcast and receive data columns in parallel.

* `ProposeBeaconBlock`: First broadcast/receive block, and then sidecars.

* `broadcastReceiveBlock`: Add log.

* Add changelog

* Fix deadlock-option 1.

* Fix deadlock-option 2.

* Take notifier out of the critical section

* only compute common info once, for all sidecars

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-19 15:45:44 +00:00
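
A rough sketch of the reordering described in this entry: broadcast and receive the block first, then fan the data column sidecars out in parallel; all names are placeholders:

```
package main

import (
	"fmt"
	"sync"
)

type sidecar struct{ index uint64 }

func broadcastBlock() error {
	fmt.Println("block broadcast and received")
	return nil
}

func broadcastSidecar(sc sidecar) {
	fmt.Println("sidecar broadcast:", sc.index)
}

func proposeBeaconBlock(sidecars []sidecar) error {
	// 1. Broadcast/receive the block first so peers can import it immediately
	//    and the block is not orphaned while sidecars are still in flight.
	if err := broadcastBlock(); err != nil {
		return err
	}

	// 2. Broadcast and receive all data column sidecars in parallel.
	var wg sync.WaitGroup
	for _, sc := range sidecars {
		wg.Add(1)
		go func(sc sidecar) {
			defer wg.Done()
			broadcastSidecar(sc)
		}(sc)
	}
	wg.Wait()
	return nil
}

func main() {
	_ = proposeBeaconBlock([]sidecar{{0}, {1}, {2}})
}
```
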
james-prysm
d67ee62efa Debug data columns API endpoint (#15701)
* initial

* fixing from self review

* changelog

* fixing endpoints and adding test

* removed unneeded test

* self review

* fixing mock columns

* fixing tests

* gofmt

* fixing endpoint

* gaz

* gofmt

* fixing tests

* gofmt

* gaz

* radek comments

* gaz

* fixing formatting

* deduplicating and fixing an old bug, will break into separate PR

* better way for version

* optimizing post merge and fixing tests

* Update beacon-chain/rpc/eth/debug/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/debug/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* adding some of radek's feedback

* reverting and gaz

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-19 15:20:43 +00:00
Jun Song
9f9401e615 SSZ-QL: Handle Bitlist and Bitvector (#15704)
* Add bitvector field for FixedTestContainer

* Handle Bitvector type using isBitfield flag

* Add Bitvector case for Stringify

* Add bitlist field for VariableTestContainer

* Add bitlistInfo

* Changelog

* Add bitvectorInfo

* Remove analyzeBit* functions and just inline them

* Fix misleading comments

* Add comments for bitlistInfo's Size

* Apply reviews from Radek
2025-09-19 13:58:56 +00:00
Muzry
6b89d839f6 Fix prysmctl panic when baseFee is not set in genesis.json (#15687)
* Fix prysmctl panic when baseFee is not set in genesis.json

* update add unit test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-18 21:48:28 +00:00
james-prysm
d826a3c7fe adding justified support to blocker and optimizing logic (#15715)
* adding justified support to blocker and optimizing logic

* gaz

* fixing tests

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* better error handling from radek feedback

* unneeded change

* more radek review

* addressing more feedback

* fixing small comment

* fixing errors

* removing fallback

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-18 17:01:27 +00:00
terence
900f162467 fix: use v2 endpoint for blinded block submission post-Fulu (#15716) 2025-09-18 00:27:46 +00:00
hyunchel
5266d34a22 Fix misleading log msg on shutdown (#13063)
* Fix misleading log msg on shutdown

gRPCServer.GracefulStop blocks until the server has shut down. Logging
"Initiated graceful stop" after it has already completed is misleading.
Names are added to the message to distinguish services. Also, a minimal test
is added, mainly to verify the change made with this commit.

* Add changelog fragment file

* Capitalize log messages

* Update endtoend test for fixed log messages

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-17 22:14:40 +00:00
Manu NALEPA
c941e47b7e PeerDAS: Modify the reconstruction and reseed process. (#15705)
* `reconstructSaveBroadcastDataColumnSidecars`: Use the `s.columnIndicesToSample` instead of recoding its content.

* Rename `custodyColumns` ==> `columnIndicesToSample`.

* `DataColumnStorage.Save`: Remove wrong godoc.

* Implement `receiveDataColumnSidecars` and transform `receiveDataColumnSidecar` as a subcase of the plural version.

* `dataColumnSubscriber`: Add godoc and remove only once used variable.

* `processDataColumnSidecarsFromExecution`: Use single flight directly in the function.

So the caller no longer has the responsibility of dealing with multiple simultaneous calls.

* `processDataColumnSidecarsFromReconstruction`: Guard it with a single flight.

In `dataColumnSubscriber`, trigger `processDataColumnSidecarsFromReconstruction` and `processDataColumnSidecarsFromExecution` in parallel.

Stop when the first of them is successful.

* `processDataColumnSidecarsFromExecution`: Use `receiveDataColumnSidecars` instead of `receiveDataColumnSidecar`.

* Implement and use `broadcastAndReceiveUnseenDataColumnSidecars`.

* Add changelog.

* Fix James' comment.

* Fix James' comment.

* `processDataColumnSidecarsFromReconstruction`: Log reconstruction duration.
2025-09-17 20:03:23 +00:00
Muzry
54991bbc52 fix: use assigned committee index in attestation data after Electra (#15696)
* fix: use assigned committee index in attestation data after Electra

* update change log

* update add unit test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-17 17:08:56 +00:00
james-prysm
7e32bbc199 Get blob fulu (#15610)
* wip

* wip

* adding ssz marshalling

* updating function for readability

* adding unit tests and fixing ssz

* gaz

* linting

* fixing test

* wip

* fixing mock blocker

* fixing test

* fixing tests and handler

* updating unit test for more coverage

* adding some comments

* self review

* gofmt

* updating and consolidating tests

* moving functional options so it can be used properly

* gofmt

* more missed gofmt

* Update beacon-chain/rpc/endpoints.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek feedback

* fixing tests

* fixing test

* moving endpoint in test

* removed unneeded comment

* fixing linting from latest develop merge

* fixing linting from latest develop merge

* Update beacon-chain/rpc/eth/blob/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/blob/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* reverting change

* reverting change

* adding in better error for which hashes are missing

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing unit test

* gofmt

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-17 15:42:25 +00:00
kasey
a1be3d68cd fix e2e noise from a goroutine exits after test cleanup (#15703)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-17 12:14:14 +00:00
Potuz
88af3f90a5 Change insertchain (#15688)
* Change InsertChain

InsertChain uses `ROBlock` since #14571, this allows it to insert the
last block of the chain as well. We change the semantics of InsertChain
to include all blocks and take them in increasing order.

* Fix tests

* Use slices.Reverse
2025-09-16 23:53:19 +00:00
satushh
600169a53b Retry logic for getBlobsV2 in peerDAS (#15520)
* PeerDAS: Implement sync

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Implement `TestFetchDataColumnSidecarsFromPeers`.

* Implement `TestSelectPeers`.

* Fix James' comment.

* Fix flakiness in `TestSelectPeers`.

* Revert "Fix Potuz's comment."

This reverts commit c45230b455.

* Revert "Fix James' comment."

This reverts commit a3f919205a.

* `selectPeers`: Avoid map with key but empty value.

* Fix Potuz's comment.

* Add DataColumnStorage and SubscribeAllDataSubnets flag.

* getBlobsV2: retry if reconstruction isn't successful

* test: engine client and sync package, metrics

* lint: fmt and log capitalisation

* lint: return error when it is not nil

* config: make retry interval configurable

* sidecar: recover function and different context for retrying

* lint: remove unused field

* beacon: default retry interval

* reconstruct: load once, correctly deliver the result to all waiting goroutines

* reconstruct: simplify multi goroutine case and avoid race condition

* engine: remove isDataAlreadyAvailable function

* sync: no goroutine, getblobsv2 in absence of block as well, wrap error

* exec: hardcode retry interval

* da: non blocking checks

* sync: remove unwanted checks

* execution: fix test

* execution: retry atomicity test

* da: updated IsDataAvailable

* sync: remove unwanted tests

* bazel: bazel run //:gazelle -- fix

* blockchain: fix CustodyGroupCount return

* lint: formatting

* lint: lint and use unused metrics

* execution: retry logic inside ReconstructDataColumnSidecars itself

* lint: format

* execution: ensure the retry actually happens when it needs to

* execution: ensure single responsibility, execution should not do DA check

* sync: don't call ReconstructDataColumnSidecars if not required

* blockchain: move IsDataAvailable interface to blockchain package

* execution: make reconstructSingleflight part of the service struct

* blockchain: cleaner DA check

* lint: formatting and remove confusing comment

* sync: fix lint, test and add extra test for when data is actually not available

* sync: new appropriate mock service

* execution: edge case - delete activeRetries on success

* execution: use service context instead of function's for retry

* blockchain: get variable samplesPerSlot only when required

* remove redundant function and fix name

* fix test

* fix more tests

* put samplesPerSlot at appropriate place

* tidy up IsDataAvailable

* correct bad merge

* fix bad merge

* remove redundant flag option

* refactor to deduplicate sidecar construction code

* - Add godocs
- Rename some functions to be closer to the spec
- Add err in return of commitments

* Replace the mutating public (but only internally used) method `Populate` with the private, non-mutating method `extract`.

* Implement a single `processDataColumnSidecarsFromExecution` instead of 2 separate functions for block and for sidecar.

* `ReceiveBlock`: Wrap errors.

* Remove useless tests.

* `ConstructionPopulator`: Add tests.

* Fix tests

* Move functions to be consistent with blobs.

* `fetchCellsAndProofsFromExecution`: Avoid useless flattening.

* `processDataColumnSidecarsFromExecution`: Stop using DB cache.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-16 20:35:35 +00:00
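
A simplified sketch of the retry behavior this PR adds, assuming a hypothetical fetch callback and a fixed retry interval; the real code also deduplicates concurrent attempts with a singleflight:

```
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errNotYetAvailable = errors.New("blobs not yet available in the execution layer")

// reconstructWithRetry keeps calling the execution client until it returns the
// cells and proofs, or the context (typically bounded by the slot) expires.
func reconstructWithRetry(ctx context.Context, interval time.Duration, fetch func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		err := fetch()
		if err == nil {
			return nil
		}
		if !errors.Is(err, errNotYetAvailable) {
			return err // a real failure, do not retry
		}
		select {
		case <-ticker.C: // wait before the next getBlobsV2 attempt
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	attempts := 0
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	err := reconstructWithRetry(ctx, 10*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return errNotYetAvailable
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```
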
terence
a5e4fccb47 update ssz generated files (#15700) 2025-09-16 19:52:25 +00:00
Bastin
e589588f47 move lc package out of core (#15698) 2025-09-16 15:23:00 +00:00
Bastin
41884d8d9d add light client fulu spec tests (#15697) 2025-09-16 14:21:54 +00:00
terence
db074cbf12 Refactor more beacon core types to beacon_core_types.proto (#15695) 2025-09-16 13:28:34 +00:00
Bastin
360e89767f Canonical LC (#15585)
* create lc cache to track branches

* save lc stuff

* remove finalized data from LC cache on finalization

* read lc stuff

* edit tests

* changelog

* linter

* address comments

* address comments 2

* address comments 3

* address comments 4

* lint

* address comments 5 x_x

* set beacon lcStore to mimic registrable services

* clean up the error propagation

* pass the state to saveLCBootstrap since it's not saved in db yet
2025-09-16 12:20:07 +00:00
kasey
238d5c07df remove "experimental" from backfill flag name (#15690)
* remove "experimental" from backfill flag name

* backwards-compatible alias

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-15 20:11:53 +00:00
Manu NALEPA
2292d955a3 PeerDAS: Implement syncing in a disjoint network (Also known as "perfect PeerDAS" network). (#15644)
* `computeIndicesByRootByPeer`: Add 1 slack epoch regarding peer head slot.

* `FetchDataColumnSidecars`: Switch mode.

Before this commit, this function returned an error if at least ONE requested sidecar was not retrieved.

Now, this function retrieves what it can (best effort mode) and returns an additional value which is the map of missing sidecars after running this function.

It is now the role of the caller to check this extra returned value and decide what to do in case some requested sidecars are still missing.

* `fetchOriginDataColumnSidecars`: Optimize

Before this commit, when running `fetchOriginDataColumnSidecars`, all the missing sidecars had to be retrieved in a single shot for the sidecars to be considered available. The issue was that if, for example, `sync.FetchDataColumnSidecars` returned all but one sidecar, the returned sidecars were NOT saved, and on the next iteration all the previously fetched sidecars had to be requested again (from peers).

After this commit, we greedily save all fetched sidecars, solving this issue.

* Initial sync: Do not fetch data column sidecars before the retention period.

* Implement perfect peerdas syncing.

* Add changelog.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Update beacon-chain/sync/data_column_sidecars.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Update beacon-chain/sync/data_column_sidecars.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Update beacon-chain/sync/data_column_sidecars.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Update after Potuz's comment.

* Fix Potuz's commit.

* Fix James' comment.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-09-15 19:21:49 +00:00
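
A sketch of the best-effort mode described above: the fetch returns what it retrieved plus the set of sidecars still missing, and the caller decides how to proceed (placeholder types, not the real signature):

```
package main

import "fmt"

type columnKey struct {
	root  [32]byte
	index uint64
}

// fetchDataColumnSidecars retrieves what it can from peers and reports what is
// still missing, instead of failing the whole call when one sidecar is absent.
func fetchDataColumnSidecars(wanted []columnKey, fetchOne func(columnKey) bool) (retrieved []columnKey, missing map[columnKey]bool) {
	missing = make(map[columnKey]bool)
	for _, key := range wanted {
		if fetchOne(key) {
			retrieved = append(retrieved, key)
		} else {
			missing[key] = true
		}
	}
	return retrieved, missing
}

func main() {
	root := [32]byte{1}
	wanted := []columnKey{{root, 0}, {root, 7}, {root, 42}}
	got, missing := fetchDataColumnSidecars(wanted, func(k columnKey) bool { return k.index != 7 })
	// The caller saves the retrieved sidecars greedily and retries only the rest.
	fmt.Println(len(got), len(missing)) // 2 1
}
```
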
james-prysm
76bc30e8ba Fulu general spectests (#15682)
* batch test

* removing log

* fixing test eval

* adding recover cells tests

* remaining tests

* exclusion
2025-09-15 15:27:45 +00:00
terence
a5c7c6da06 Refactor proto definitions: extract common beacon block types and components (#15689) 2025-09-15 15:13:09 +00:00
kasey
4b09dd4aa5 prevent ConnManager from pruning peers excessively (#15681)
* prevent ConnManager from pruning peers excessively

* manu feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-14 00:03:48 +00:00
Preston Van Loon
1dab5a9f8a Feature: --p2p-colocation-whitelist flag to allow certain IPs to bypass colocation restrictions (#15685)
* Add flag for colocation whitelisting. --p2p-ip-colocation-whitelist

This change updates the peer IP colocation checking to respect the
configured CIDR whitelist (--p2p-ip-colocation-whitelist flag).

Changes:
- Added IPColocationWhitelist field to peers.StatusConfig
- Added ipColocationWhitelist field to Status struct to store parsed IPNets
- Parse CIDR strings into net.IPNet in NewStatus constructor
- Updated isfromBadIP method to skip colocation limits for whitelisted IPs
- Pass IPColocationWhitelist from Service config when creating Status

The IP colocation whitelist allows operators to exempt specific IP ranges
from the colocation limit, useful for deployments with known trusted
address ranges or legitimate node clustering.

Only check if an IP is in the whitelist when the colocation limit
is actually exceeded, rather than checking for every IP. This is
more efficient and matches the intended behavior.

* Changelog fragment

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* @kasey feedback: Move IP colocation parsing to the node construction

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-12 16:03:54 +00:00
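
A minimal sketch of the whitelist check described above: CIDRs are parsed once at construction, and the whitelist is consulted only when the colocation limit is actually exceeded; field and function names are illustrative:

```
package main

import (
	"fmt"
	"net"
)

type colocationScorer struct {
	limit     int
	whitelist []*net.IPNet // parsed once from --p2p-ip-colocation-whitelist
}

func newColocationScorer(limit int, cidrs []string) (*colocationScorer, error) {
	s := &colocationScorer{limit: limit}
	for _, c := range cidrs {
		_, ipNet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, fmt.Errorf("invalid whitelist CIDR %q: %w", c, err)
		}
		s.whitelist = append(s.whitelist, ipNet)
	}
	return s, nil
}

// isFromBadIP flags an IP only when the colocation limit is exceeded AND the
// IP is not covered by any whitelisted range.
func (s *colocationScorer) isFromBadIP(ip net.IP, peersFromSameIP int) bool {
	if peersFromSameIP <= s.limit {
		return false
	}
	for _, ipNet := range s.whitelist {
		if ipNet.Contains(ip) {
			return false // trusted range, exempt from the colocation limit
		}
	}
	return true
}

func main() {
	s, _ := newColocationScorer(5, []string{"10.0.0.0/8"})
	fmt.Println(s.isFromBadIP(net.ParseIP("10.1.2.3"), 20))    // false: whitelisted
	fmt.Println(s.isFromBadIP(net.ParseIP("203.0.113.7"), 20)) // true: over the limit
}
```
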
Potuz
d681232fe6 Fix setup forkchoice (#15684)
* Insert the head block to forkchoice

* Add test

* Rename file

* add changelog

* Remove test
2025-09-12 13:58:00 +00:00
Preston Van Loon
1d24f89c96 Downscore peers when they give invalid responses during initial sync (#15686)
* Downscore peers when they give invalid responses during initial sync.

* @nalepae feedback on adding a log message after downscoring
2025-09-12 13:29:08 +00:00
kasey
967193e6a2 update go-libp2p-pubsub=v0.14.2, go-libp2p=v0.39.1 (#15677)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-11 21:04:56 +00:00
terence
9e40551852 Implement KZG proof batch verification (option 2) - uses worker pool (#15617)
* Implement KZG proof batch verification for data column gossip validation

* Manu's feedback

* Add tests

* Update beacon-chain/sync/batch_verifier.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/sync/batch_verifier.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/sync/kzg_batch_verifier_test.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/sync/kzg_batch_verifier_test.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/sync/kzg_batch_verifier_test.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/sync/kzg_batch_verifier_test.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/sync/kzg_batch_verifier_test.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Fix tests

* Kasey's feedback

* `validateWithKzgBatchVerifier`: Give up after a full slot.

Before this commit:
After 100 ms, an un-batched verification is launched concurrently with the batched one.
As a result, a stressed node could become even more stressed by the multiple verifications.
Also, it is always hard to choose a correct timeout value.
100 ms may be OK for a given node with a given BPO version, and not OK for the same node with a BPO version with 10x more blobs.

However, we know this gossip validation won't be useful after a full slot duration.

After this commit:
After a full slot duration, we just ignore the incoming gossip message.
It's important to ignore it and not to reject it, since rejecting it would downscore the peer sending this message.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-11 15:52:25 +00:00
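
A sketch of the give-up-after-a-full-slot behavior explained above, using a hypothetical batch-verification request channel rather than the real verifier:

```
package main

import (
	"context"
	"fmt"
	"time"
)

type verifyRequest struct {
	result chan error // the batch worker replies on this channel
}

// validateWithKzgBatchVerifier queues a sidecar for batched KZG verification
// and waits for the result. If a full slot elapses first, the message is
// IGNORED (not rejected), since rejecting would downscore the sending peer.
func validateWithKzgBatchVerifier(ctx context.Context, queue chan<- verifyRequest, slotDuration time.Duration) (bool, error) {
	req := verifyRequest{result: make(chan error, 1)}

	slotCtx, cancel := context.WithTimeout(ctx, slotDuration)
	defer cancel()

	select {
	case queue <- req:
	case <-slotCtx.Done():
		return false, nil // ignore: validation is useless after a full slot
	}

	select {
	case verr := <-req.result:
		return verr == nil, verr
	case <-slotCtx.Done():
		return false, nil // ignore on timeout as well
	}
}

func main() {
	queue := make(chan verifyRequest, 1)
	go func() {
		req := <-queue
		req.result <- nil // pretend the batch verified successfully
	}()
	ok, err := validateWithKzgBatchVerifier(context.Background(), queue, 12*time.Second)
	fmt.Println(ok, err) // true <nil>
}
```
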
fernantho
a8cab58f7e SSZ-QL: populate variable-length metadata in AnalyzeObject (#15676)
* increased AnalyzeObject functionality to return a fully populated SSZInfo object including runtime lengths for lists

* reverted formatting change

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-11 11:39:02 +00:00
james-prysm
df86f57507 deprecate /prysm/v1/beacon/blobs (#15643)
* adding in check for post fulu support

* gaz

* fixing unit tests and typo

* gaz

* manu feedback

* addressing manu's suggestion

* fixing tests
2025-09-10 21:44:50 +00:00
james-prysm
9b0a3e9632 default validator client rest mode to ssz for post block (#15645)
* wip

* fixing tests

* updating unit tests for coverage

* gofmt

* changelog

* consolidated propose beacon block tests and improved code coverage

* gaz

* partial review feedback

* continuation fixing feedback on functions

* adding log

* only fall back on 406

* fixing broken tests

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek review feedback

* more feedback

* radek suggestion

* fixing unit tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-10 21:08:11 +00:00
kasey
5e079aa62c justin's suggestion to unwedge CI (#15679)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-10 20:31:29 +00:00
terence
5c68ec5c39 spectests: Add Fulu proposer lookahead epoch processing tests (#15667) 2025-09-10 18:51:19 +00:00
terence
5410232bef spectests: Add fulu fork transition tests for mainnet and minimal (#15666) 2025-09-10 18:51:04 +00:00
Radosław Kapka
3f5c4df7e0 Optimize processing of slashings (#14990)
* Calculate max epoch and churn for slashing once

* calculate once for proposer and attester slashings

* changelog <3

* introduce struct

* check if err is nil in ProcessVoluntaryExits

* rename exitData to exitInfo and return from functions

* cleanup + tests

* cleanup after rebase

* Potuz's review

* pre-calculate total active balance

* remove `slashValidatorFunc` closure

* Avoid a second validator loop

    🤖 Generated with [Claude Code](https://claude.ai/code)

    Co-Authored-By: Claude <noreply@anthropic.com>

* remove balance parameter from slashing functions

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: potuz <potuz@prysmaticlabs.com>
2025-09-10 18:14:11 +00:00
Preston Van Loon
5c348dff59 Add erigon/caplin to list of known p2p agents. (#15678) 2025-09-10 17:57:24 +00:00
james-prysm
8136ff7c3a adding remote singer changes for fulu (#15498)
* adding web 3 signer changes for fulu

* missed adding fulu values

* add accounting for fulu version

* updated web3signer version and fixing linting

* Update validator/keymanager/remote-web3signer/types/requests_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/types/mock/mocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek suggestions

* removing redundant check and removing old function, changed changelog to reflect

* gaz

* radek's suggestion

* fixing test as v1 proof is no longer used

* fixing more tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-10 16:58:53 +00:00
Preston Van Loon
f690af81fa Add diagnostic logging for 'invalid data returned from peer' errors (#15674)
When peers return invalid data during initial sync, log the specific
validation failure reason. This helps identify:
- Whether peer exceeded requested block count
- Whether peer exceeded MAX_REQUEST_BLOCKS protocol limit
- Whether blocks are outside the requested slot range
- Whether blocks are out of order (not increasing or wrong step)

Each log includes the specific condition that failed, making it easier
to debug whether the issue is with peer implementations or request
validation logic.
2025-09-09 22:08:37 +00:00
kasey
029b896c79 refactor subscription code to enable peer discovery asap (#15660)
* refactor subscription code to enable peer discovery asap

* fix subscription tests

* hunting down the other test initialization bugs

* refactor subscription parameter tracking type

* manu naming feedback

* manu naming feedback

* missing godoc

* protect tracker from nil subscriptions

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-09 21:00:34 +00:00
Jun Song
e1117a7de2 SSZ-QL: Handle Vector type & Add SSZ tag parser for multi-dimensional parsing (#15668)
* Add vectorInfo

* Add 2D bytes field for test

* Add tag_parser for parsing SSZ tags

* Integrate tag parser with analyzer

* Add ByteList test case

* Changelog

* Better printing feature with Stringer implementation

* Return error for non-determined case without printing other values

* Update tag_parser.go: handle Vector and List mutually exclusive (inspired by OffchainLabs/fastssz)

* Make linter happy

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-09 20:35:06 +00:00
Potuz
39b2a02f66 Remove BLS change broadcasting at the fork (#15659)
* Remove BLS change broadcasting at the fork

* Changelog
2025-09-09 16:47:16 +00:00
Galoretka
4e8a710b64 chore: Remove duplicated test case from TestCollector (#15672)
* chore: Remove duplicated test case from TestCollector

* Create Galoretka_fix-leakybucket-test-duplicate.md
2025-09-09 12:49:54 +00:00
Preston Van Loon
7191a5bcdf PeerDAS: find data column peers when tracking validators (#15654)
* Subscription: Get fanout peers in all data column subnets when at least one validator is connected

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Updated test structure to @nalepae suggestion

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-09 00:31:15 +00:00
Muzry
d335a52c49 fixed the empty dirs not being removed (#15573)
* fixed the empty dirs not being removed

* update list empty dirs first

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
2025-09-08 20:55:50 +00:00
NikolaiKryshnev
c7401f5e75 Fix link (#15631)
* Update CHANGELOG.md

* Update DEPENDENCIES.md

* Create 2025-08-25-docs-links.md
2025-09-08 20:40:18 +00:00
Preston Van Loon
0057cc57b5 spectests: Set mainnet spectests to size=large. This helps with some CI timeouts. CI is showing an average time of 4m39s, which is too close to the 5m timeout for a moderate test (#15664) 2025-09-08 20:01:34 +00:00
Jun Song
b1dc5e485d SSZ-QL: Handle List type & Populate the actual value dynamically (#15637)
* Add VariableTestContainer in ssz_query.proto

* Add listInfo

* Use errors.New for making an error with a static string literal

* Add listInfo field when analyzing the List type

* Persist the field order in the container

* Add actualOffset and goFieldName at fieldInfo

* Add PopulateFromValue function & update test runner

* Handle slice of ssz object for marshalling

* Add CalculateOffsetAndLength test

* Add comments for better doc

* Changelog :)

* Apply reviews from Radek

* Remove actualOffset and update offset field instead

* Add Nested container of variable-sized for testing nested path

* Fix offset adding logics: for variable-sized field, always add 4 instead of its fixed size

* Fix multiple import issue

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-08 16:50:24 +00:00
Radosław Kapka
f035da6fc5 Aggregate and pack sync committee messages (#15608)
* Aggregate and pack sync committee messages

* test

* simplify error check

* changelog <3

* fix assert import

* remove parallelization

* use sync committee cache

* ignore bits already set in pool messages

* fuzz fix

* cleanup

* test panic fix

* clear cache in tests
2025-09-08 15:17:19 +00:00
Muzry
854f4bc9a3 fix getBlockAttestationsV2 to return [] instead of null when data is empty (#15651) 2025-09-08 14:45:08 +00:00
james-prysm
1933adedbf update to v1.6.0-alpha.6 (#15658)
* wip workspace update

* update ethspecify yml

* fixing ethspecify

* fixing ethspecify

* fixing loader test

* changelog and reverting something in configs

* adding bad hash
2025-09-05 17:20:13 +00:00
Potuz
278b796e43 Fix next epoch proposer duties (#15642)
* Fix next epoch proposer duties

* Do not update state's slot when computing the proposer

Also do not call Fulu's proposer lookahead if the requested epoch is not
current or next.

* retract Terence's test

* Fix tests

* removing epoch check to pass spec test

* reverting rollback and fixing test setup

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-08-29 21:45:23 +00:00
Manu NALEPA
8e52d0c3c6 Retry fetch origin data column sidecars. (#15634)
* `convertToAddrInfo`: Add peer details on error.

* `fetchOriginColumns`: Remove unused argument.

* `fetchOriginColumns`: Fix typo

* `isSidecarIndexRequested`: Log requested indices.

* `fetchOriginColumns`: Retry.

* Add changelog.

* Fix Preston comment.

* `custodyGroupCountFromPeerENR`: Add agent on error messages.

* Update beacon-chain/sync/initial-sync/service.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix `TestCustodyGroupCountFromPeer`.

* `s.fetchOriginColumns`: Use `maxAttempts` and `delay` as parameters to ease unit testing.

* Implement `TestFetchOriginColumns`.

* `SendDataColumnSidecarsByRangeRequest` and `SendDataColumnSidecarsByRootRequest`: Add option to downscore the peer on RPC error.

* `fetchOriginColumns`: Remove max attempts, and downscore peers on RPC fault.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-08-28 21:16:34 +00:00
james-prysm
d339e09509 fulu block proposals with datacolumn broadcast (#15628)
* propose block changes from peerdas branch

* breaking out broadcast code into its own helper, changing fulu broadcast for rest api to properly send datasidecars

* renamed validate blobsidecars to validate blobs, and added check for max blobs

* gofmt

* adding in batch verification for blobs"

* changelog

* adding kzg tests, moving new kzg functions to validation.go

* linting and other small fixes

* fixing linting issues and adding some proposer tests

* missing dependencies

* fixing test

* fixing more tests

* gaz

* removed return on broadcast data columns

* more cleanup and unit test adjustments

* missed removal of unneeded field

* adding data column receiver initialization

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* partial review feedback from manu

* gaz

* reverting some code to peerdas as I don't believe the broadcast code needs to be reused

* missed removal of build dependency

* fixing tests and adding another test based on manu's suggestion

* fixing linting

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/blockchain/kzg/validation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review changes

* adding missed test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-28 16:27:30 +00:00
Pop Chunhapanya
8ec460223c Swap the wrong arguments in a call (#15639)
* Swap the wrong arguments in a call

I saw that the names of the passed arguments and the ones of the
function parameters don't match, so I suspect that it's a bug.

* Add changelog

* Add validation for the fillInForkChoiceMissingBlocks checkpoints.

* Add test for checkpoint epoch validation in fillInForkChoiceMissingBlocks.

* Use a sentinel error rather than error string

---------

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-27 21:29:44 +00:00
Potuz
349d9d2fd0 Start from Justified checkpoint by default (#15636)
By default, when starting a node, we load the finalized checkpoint from
the db and set it as head. When the chain has not been finalizing for a
while and the user does not start from the latest head, it may still be
beneficial to start from the latest justified checkpoint, which has to be
a descendant of the finalized one.
2025-08-27 15:50:10 +00:00
Sahil Sojitra
e0aecb9c32 Refactor to use atomic types (#15625)
* refactor to use atomic types

* added changlog fragment file

* update change log
2025-08-26 19:57:14 +00:00
Potuz
4a1ab70929 Change error message (#15635) 2025-08-26 17:24:32 +00:00
Manu NALEPA
240cd1d058 FetchDataColumnSidecars: If possible, try to reconstruct after retrieving sidecars from peers if some are still missing. (#15593)
* `FetchDataColumnSidecars`: If possible, try to reconstruct after retrieving sidecars from peers if some are still missing.

* `randomPeer`: Deterministic randomness.
Before this commit, `randomPeer` was non-deterministic, even with a deterministic random source. The reason is that we iterated over a map (whose iteration order is random) and then stopped the iteration at a chosen random index (which can be deterministic if the random source is deterministic).
After this commit, `randomPeer` and all its callers are fully deterministic when using a deterministic random source.

* Fix Potuz's comment.

* Fix James' comment.

* `tryReconstructFromStorageAndPeers`: Improve godoc.
2025-08-25 14:31:02 +00:00
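
The determinism fix can be illustrated like this: iterate over sorted map keys instead of the map itself, so a seeded random source always picks the same peer (placeholder types):

```
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// randomPeer picks a peer deterministically given a deterministic random
// source: map iteration order is random in Go, so we sort the keys first and
// only then index with the random value.
func randomPeer(peers map[string][]uint64, rng *rand.Rand) string {
	keys := make([]string, 0, len(peers))
	for p := range peers {
		keys = append(keys, p)
	}
	sort.Strings(keys)
	return keys[rng.Intn(len(keys))]
}

func main() {
	peers := map[string][]uint64{"peerA": {1}, "peerB": {2}, "peerC": {3}}
	// Same seed => same choice on every run, which makes tests reproducible.
	fmt.Println(randomPeer(peers, rand.New(rand.NewSource(42))))
	fmt.Println(randomPeer(peers, rand.New(rand.NewSource(42))))
}
```
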
Jun Song
26d8b6b786 Initialize SSZ-QL package with support for fixed-size types (#15588)
* Add basic PathElement

* Add ssz_type.go

* Add basic sszInfo

* Add containerInfo

* Add basic analyzer without analyzing list/vector

* Add analyzer for homogeneous collection types

* Add offset/length calculator

* Add testutil package in encoding/ssz/query

* Add first round trip test for IndexedAttestationElectra

* Go mod tidy

* Add Print function for debugging purpose

* Add changelog

* Add testonly flag for testutil package & Nit for nogo

* Apply reviews from Radek

* Replace fastssz with prysmaticlabs one

* Add proto/ssz_query package for testing purpose

* Update encoding/ssz/query tests to decouple with beacon types

* Use require.* instead of assert.*

* Fix import name for proto ssz_query package

* Remove uint8/uint16 and some byte arrays in FixedTestContainer

* Add newline for files

* Fix comment about byte array in ssz_query.proto

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-25 14:29:26 +00:00
Manu NALEPA
92c359456e PeerDAS: Add various missing items (#15629)
* `startBaseServices`: Warm data column storage cache.

* `TestFindPeers_NodeDeduplication`: Use `t.context`.

* `BUILD.bazel`: Move `# gazelle.ignore` to the top of the file.

Rationale: This directive applies to the whole file, regardless of its position in the file.

* Improve `TestConstructGenericBeaconBlock`: Courtesy of Terence

* Add `TestDataColumnStoragePath_FlagSpecified`.

* `appFlags`: Move `flags.SubscribeAllDataSubnets` (cosmetic).

* `appFlags`: Add `storage.DataColumnStoragePathFlag`.

* Add changelog.
2025-08-25 13:06:57 +00:00
terence
d48ed44c4c Update consensus spec to v1.6.0-alpha.5 and adjust minimal config (#15621) 2025-08-23 01:19:07 +00:00
Radosław Kapka
dbab624f3d Update Github bug template (#15623) 2025-08-22 13:53:05 +00:00
Potuz
83e9cc3dbb Update gohashtree (#15619) 2025-08-22 13:19:28 +00:00
Justin Traglia
ee03c7cce2 Add spec references, a mapping of spec to implementation (#15592)
* Add spec references, a mapping of spec to implementation

* Add changelog fragment

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-21 14:54:04 +00:00
kasey
c5135f6995 enforce schedule alignment when next_fork_epoch matches (#15604)
* enforce schedule alignment when next_fork_epoch matches

* lint & typo

* James feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 23:57:30 +00:00
Pop Chunhapanya
29aedac113 Fix subnet peer discovery (#15603)
* Fix subnet peer discovery

Currently, computeAllNeededSubnets is called only once, when the subnets
are first subscribed. It should be called regularly.

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 16:52:11 +00:00
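
A minimal sketch of the fix described above: recompute the needed subnets on a ticker instead of only once at subscription time (function names are placeholders):

```
package main

import (
	"context"
	"fmt"
	"time"
)

// watchNeededSubnets recomputes the set of subnets that need peers at a fixed
// interval, rather than only once when subscriptions are first set up.
func watchNeededSubnets(ctx context.Context, interval time.Duration, compute func() []uint64, findPeers func([]uint64)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		findPeers(compute()) // refresh discovery targets with the current needs
		select {
		case <-ticker.C:
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 35*time.Millisecond)
	defer cancel()
	watchNeededSubnets(ctx, 10*time.Millisecond,
		func() []uint64 { return []uint64{1, 5} },
		func(subnets []uint64) { fmt.Println("searching peers for subnets", subnets) },
	)
}
```
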
kasey
08fb3812b7 provide data column storage to rpc handlers (#15606)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 15:51:11 +00:00
kasey
07738dd9a4 improve pubsub topic subscription failure logging (#15600)
* improve pubsub topic subscription failure logging

* Errorf doesn't support %w, so use %v

* log capitalization

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-19 12:24:07 +00:00
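
On the `%w` note above: only `fmt.Errorf` understands the wrapping verb, so log formatters should use `%v`; for example:

```
package main

import (
	"errors"
	"fmt"
	"log"
)

func main() {
	errSubscribe := errors.New("topic not joined")

	// fmt.Errorf supports %w and produces a wrapped error that errors.Is can unwrap.
	wrapped := fmt.Errorf("could not subscribe to topic: %w", errSubscribe)
	fmt.Println(errors.Is(wrapped, errSubscribe)) // true

	// Log formatters (log.Printf, logrus.Errorf, ...) do not understand %w,
	// so use %v when only printing the error.
	log.Printf("could not subscribe to topic: %v", errSubscribe)
}
```
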
james-prysm
53df29b07f Fix find peers regression (#15578)
* adding what I think could be a fix for find peer

* removing uneeded comment

* unit tests

* linting

* gofmt

* changelog

* Update beacon-chain/p2p/discovery_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update changelog/james-prysm_fix-find-peers.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fixing test import

* applying suggestions

* fixing typo

* manu feedback

* accidently checked in files

* addressing manu's edgecase, old bug

* moving tests from service-test.go to subnets_test.go and adding coverage for receiving bad existing node with higher seq

* cleanup

* updating for clarity

* missingPeerCount should increment if we are removing the peer from map

* manu's recommendations on defective subnet rollback edge case

* rollback introduced too much complication as well as a new bug so we are removing it

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-18 19:41:32 +00:00
Manu NALEPA
00cf1f2507 Implement PeerDAS sync (#15564)
* PeerDAS: Implement sync

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Satyajit's comment.

* Partially fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Add tests for `sendDataColumnSidecarsRequest`.

* Fix Satyajit's comment.

* Implement `TestSendDataColumnSidecarsRequest`.

* Implement `TestFetchDataColumnSidecarsFromPeers`.

* Implement `TestUpdateResults`.

* Implement `TestSelectPeers`.

* Implement `TestCategorizeIndices`.

* Fix James' comment.

* Fix James's comment.

* Fix James' commit.

* Fix James' comment.

* Fix James' comment.

* Fix flakiness in `TestSelectPeers`.

* Update cmd/beacon-chain/flags/config.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix Preston's comment.

* Fix James's comment.

* Implement `TestFetchDataColumnSidecars`.

* Revert "Fix Potuz's comment."

This reverts commit c45230b455.

* Fix Potuz's comment.

* Revert "Fix James' comment."

This reverts commit a3f919205a.

* Fix James' comment.

* Fix Preston's comment.

* Fix James' comment.

* `selectPeers`: Avoid map with key but empty value.

* Fix typo.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix James' comment.

* Add DataColumnStorage and SubscribeAllDataSubnets flag.

* Add extra flags

* Fix Potuz's and Preston's comment.

* Add rate limiter check.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-18 14:36:07 +00:00
623 changed files with 48039 additions and 23755 deletions

@@ -34,6 +34,7 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
build:release --compilation_mode=opt
build:release --stamp
build:release --define pgo_enabled=1
build:release --strip=always
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --copt=-g

@@ -1,6 +1,6 @@
name: 🐞 Bug report
description: Report a bug or problem with running Prysm
labels: ["Bug"]
type: "Bug"
body:
- type: markdown
attributes:

@@ -1,4 +1,4 @@
FROM golang:1.24-alpine
FROM golang:1.25.1-alpine
COPY entrypoint.sh /entrypoint.sh

@@ -9,7 +9,7 @@ on:
jobs:
run-changelog-check:
runs-on: ubuntu-latest
runs-on: ubuntu-4
steps:
- name: Checkout source code
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0

.github/workflows/check-specrefs.yml (new file, 43 lines)
View File

@@ -0,0 +1,43 @@
name: Check Spec References
on: [push, pull_request]
jobs:
check-specrefs:
runs-on: ubuntu-4
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Check version consistency
run: |
WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/')
ETHSPECIFY_VERSION=$(grep '^version:' specrefs/.ethspecify.yml | sed 's/version: //')
if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then
echo "Version mismatch between WORKSPACE and ethspecify"
echo " WORKSPACE: $WORKSPACE_VERSION"
echo " specrefs/.ethspecify.yml: $ETHSPECIFY_VERSION"
exit 1
else
echo "Versions match: $WORKSPACE_VERSION"
fi
- name: Install ethspecify
run: python3 -mpip install ethspecify
- name: Update spec references
run: ethspecify process --path=specrefs
- name: Check for differences
run: |
if ! git diff --exit-code specrefs >/dev/null; then
echo "Spec references are out-of-date!"
echo ""
git --no-pager diff specrefs
exit 1
else
echo "Spec references are up-to-date!"
fi
- name: Check spec references
run: ethspecify check --path=specrefs

View File

@@ -10,7 +10,7 @@ on:
jobs:
clang-format-checking:
runs-on: ubuntu-latest
runs-on: ubuntu-4
steps:
- uses: actions/checkout@v2
# Is this step failing for you?

View File

@@ -10,13 +10,13 @@ permissions:
jobs:
list:
runs-on: ubuntu-latest
runs-on: ubuntu-4
timeout-minutes: 180
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.23.5'
go-version: '1.25.1'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
@@ -25,7 +25,7 @@ jobs:
fuzz-tests: ${{steps.list.outputs.fuzz-tests}}
fuzz:
runs-on: ubuntu-latest
runs-on: ubuntu-4
timeout-minutes: 360
needs: list
strategy:
@@ -36,7 +36,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.23.5'
go-version: '1.25.1'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}

View File

@@ -11,7 +11,7 @@ on:
jobs:
formatting:
name: Formatting
runs-on: ubuntu-latest
runs-on: ubuntu-4
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -22,7 +22,7 @@ jobs:
gosec:
name: Gosec scan
runs-on: ubuntu-latest
runs-on: ubuntu-4
env:
GO111MODULE: on
steps:
@@ -31,7 +31,7 @@ jobs:
- name: Set up Go 1.24
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.25.1'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
@@ -40,31 +40,31 @@ jobs:
lint:
name: Lint
runs-on: ubuntu-latest
runs-on: ubuntu-4
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Go 1.24
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
id: go
fetch-depth: 0
- name: Set up Go 1.25.1
uses: actions/setup-go@v5
with:
go-version: '1.25.1'
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
uses: golangci/golangci-lint-action@v8
with:
version: v1.64.5
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
version: v2.4
build:
name: Build
runs-on: ubuntu-latest
runs-on: ubuntu-4
steps:
- name: Set up Go 1.x
- name: Set up Go 1.25.1
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.25.1'
id: go
- name: Check out code into the Go module directory

View File

@@ -8,7 +8,7 @@ on:
jobs:
Horusec_Scan:
name: horusec-Scan
runs-on: ubuntu-latest
runs-on: ubuntu-4
if: github.ref == 'refs/heads/develop'
steps:
- name: Check out code
@@ -19,4 +19,4 @@ jobs:
- name: Running Security Scan
run: |
curl -fsSL https://raw.githubusercontent.com/ZupIT/horusec/main/deployments/scripts/install.sh | bash -s latest
horusec start -t="10000" -p="./" -e="true" -i="**/crypto/bls/herumi/**, **/**/*_test.go, **/third_party/afl/**, **/crypto/keystore/key.go"
horusec start -t="10000" -p="./" -e="true" -i="**/crypto/bls/herumi/**, **/**/*_test.go, **/third_party/afl/**, **/crypto/keystore/key.go"

View File

@@ -1,90 +1,41 @@
version: "2"
run:
timeout: 10m
go: '1.23.5'
issues:
exclude-files:
- validator/web/site_data.go
- .*_test.go
exclude-dirs:
- proto
- tools/analyzers
go: 1.23.5
linters:
enable-all: true
disable:
# Deprecated linters:
enable:
- errcheck
- ineffassign
- govet
# Disabled for now:
- asasalint
- bodyclose
- containedctx
- contextcheck
- cyclop
- depguard
- dogsled
- dupl
- durationcheck
- errname
- err113
- exhaustive
- exhaustruct
- forbidigo
- forcetypeassert
- funlen
- gci
- gochecknoglobals
- gochecknoinits
- goconst
- gocritic
- gocyclo
- godot
- godox
- gofumpt
- gomoddirectives
- gosec
- inamedparam
- interfacebloat
- intrange
- ireturn
- lll
- maintidx
- makezero
- mnd
- musttag
- nakedret
- nestif
- nilnil
- nlreturn
- noctx
- nolintlint
- nonamedreturns
- nosprintfhostport
- perfsprint
- prealloc
- predeclared
- promlinter
- protogetter
- recvcheck
- revive
- spancheck
disable:
- staticcheck
- stylecheck
- tagalign
- tagliatelle
- thelper
- unparam
- usetesting
- varnamelen
- wrapcheck
- wsl
- unused
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- validator/web/site_data.go
- .*_test.go
- proto
- tools/analyzers
- third_party$
- builtin$
- examples$
linters-settings:
gocognit:
# TODO: We should target for < 50
min-complexity: 65
output:
print-issued-lines: true
sort-results: true
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- validator/web/site_data.go
- .*_test.go
- proto
- tools/analyzers
- third_party$
- builtin$
- examples$

View File

@@ -2993,7 +2993,7 @@ There are two known issues with this release:
### Added
- Web3Signer support. See the [documentation](https://docs.prylabs.network/docs/next/wallet/web3signer) for more
- Web3Signer support. See the [documentation](https://prysm.offchainlabs.com/docs/manage-wallet/web3signer/) for more
details.
- Bellatrix support. See [kiln testnet instructions](https://hackmd.io/OqIoTiQvS9KOIataIFksBQ?view)
- Weak subjectivity sync / checkpoint sync. This is an experimental feature and may have unintended side effects for

View File

@@ -2,7 +2,7 @@
Prysm is go project with many complicated dependencies, including some c++ based libraries. There
are two parts to Prysm's dependency management. Go modules and bazel managed dependencies. Be sure
to read [Why Bazel?](https://github.com/OffchainLabs/documentation/issues/138) to fully
to read [Why Bazel?](https://prysm.offchainlabs.com/docs/install-prysm/install-with-bazel/#why-bazel) to fully
understand the reasoning behind an additional layer of build tooling via Bazel rather than a pure
"go build" project.

View File

@@ -158,15 +158,15 @@ oci_register_toolchains(
http_archive(
name = "io_bazel_rules_go",
integrity = "sha256-JD8o94crTb2DFiJJR8nMAGdBAW95zIENB4cbI+JnrI4=",
patch_args = ["-p1"],
patches = [
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
strip_prefix = "rules_go-cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9",
sha256 = "a729c8ed2447c90fe140077689079ca0acfb7580ec41637f312d650ce9d93d96",
urls = [
"https://github.com/bazel-contrib/rules_go/archive/cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9.tar.gz",
"https://mirror.bazel.build/github.com/bazel-contrib/rules_go/releases/download/v0.57.0/rules_go-v0.57.0.zip",
"https://github.com/bazel-contrib/rules_go/releases/download/v0.57.0/rules_go-v0.57.0.zip",
],
)
@@ -208,7 +208,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.24.6",
go_version = "1.25.1",
nogo = "@//:nogo",
)
@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.4"
consensus_spec_version = "v1.6.0-beta.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-MaN4zu3o0vWZypUHS5r4D8WzJF4wANoadM8qm6iyDs4=",
"minimal": "sha256-aZGNPp/bBvJgq3Wf6vyR0H6G3DOkbSuggEmOL4jEmtg=",
"mainnet": "sha256-C7jjosvpzUgw3GPajlsWBV02ZbkZ5Uv4ikmOqfDGajI=",
"general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
"minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
"mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-qreawRS77l8CebiNww8z727qUItw7KlHY1Xqj7IrPdk=",
integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -300,22 +300,6 @@ filegroup(
url = "https://github.com/ethereum/bls12-381-tests/releases/download/%s/bls_tests_yaml.tar.gz" % bls_test_version,
)
http_archive(
name = "eth2_networks",
build_file_content = """
filegroup(
name = "configs",
srcs = glob([
"shared/**/config.yaml",
]),
visibility = ["//visibility:public"],
)
""",
sha256 = "77e7e3ed65e33b7bb19d30131f4c2bb39e4dfeb188ab9ae84651c3cc7600131d",
strip_prefix = "eth2-networks-934c948e69205dcf2deb87e4ae6cc140c335f94d",
url = "https://github.com/eth-clients/eth2-networks/archive/934c948e69205dcf2deb87e4ae6cc140c335f94d.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """
@@ -327,9 +311,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-YVFFrCmjoGZ3fXMWpsCpSsYbANy1grnqYwOLKIg2SsA=",
strip_prefix = "holesky-32a72e21c6e53c262f27d50dd540cb654517d03a",
url = "https://github.com/eth-clients/holesky/archive/32a72e21c6e53c262f27d50dd540cb654517d03a.tar.gz", # 2025-03-17
integrity = "sha256-htyxg8Ln2o8eCiifFN7/hcHGZg8Ir9CPzCEx+FUnnCs=",
strip_prefix = "holesky-8aec65f11f0c986d6b76b2eb902420635eb9b815",
url = "https://github.com/eth-clients/holesky/archive/8aec65f11f0c986d6b76b2eb902420635eb9b815.tar.gz",
)
http_archive(
@@ -359,9 +343,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-b5F7Wg9LLMqGRIpP2uqb/YsSFVn2ynzlV7g/Nb1EFLk=",
strip_prefix = "sepolia-562d9938f08675e9ba490a1dfba21fb05843f39f",
url = "https://github.com/eth-clients/sepolia/archive/562d9938f08675e9ba490a1dfba21fb05843f39f.tar.gz", # 2025-03-17
integrity = "sha256-+UZgfvBcea0K0sbvAJZOz5ZNmxdWZYbohP38heUuc6w=",
strip_prefix = "sepolia-f9158732adb1a2a6440613ad2232eb50e7384c4f",
url = "https://github.com/eth-clients/sepolia/archive/f9158732adb1a2a6440613ad2232eb50e7384c4f.tar.gz",
)
http_archive(
@@ -375,17 +359,17 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-dPiEWUd8QvbYGwGtIm0QtCekitVLOLsW5rpQIGzz8PU=",
strip_prefix = "hoodi-828c2c940e1141092bd4bb979cef547ea926d272",
url = "https://github.com/eth-clients/hoodi/archive/828c2c940e1141092bd4bb979cef547ea926d272.tar.gz",
integrity = "sha256-G+4c9c/vci1OyPrQJnQCI+ZCv/E0cWN4hrHDY3i7ns0=",
strip_prefix = "hoodi-b6ee51b2045a5e7fe3efac52534f75b080b049c6",
url = "https://github.com/eth-clients/hoodi/archive/b6ee51b2045a5e7fe3efac52534f75b080b049c6.tar.gz",
)
http_archive(
name = "com_google_protobuf",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
sha256 = "7c3ebd7aaedd86fa5dc479a0fda803f602caaf78d8aff7ce83b89e1b8ae7442a",
strip_prefix = "protobuf-28.3",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v28.3.tar.gz",
],
)

View File

@@ -59,6 +59,7 @@ go_test(
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -30,10 +30,11 @@ import (
)
const (
getExecHeaderPath = "/eth/v1/builder/header/{{.Slot}}/{{.ParentHash}}/{{.Pubkey}}"
getStatus = "/eth/v1/builder/status"
postBlindedBeaconBlockPath = "/eth/v1/builder/blinded_blocks"
postRegisterValidatorPath = "/eth/v1/builder/validators"
getExecHeaderPath = "/eth/v1/builder/header/{{.Slot}}/{{.ParentHash}}/{{.Pubkey}}"
getStatus = "/eth/v1/builder/status"
postBlindedBeaconBlockPath = "/eth/v1/builder/blinded_blocks"
postBlindedBeaconBlockV2Path = "/eth/v2/builder/blinded_blocks"
postRegisterValidatorPath = "/eth/v1/builder/validators"
)
var (
@@ -512,7 +513,7 @@ func (c *Client) SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.R
}
// Post the blinded block - the response should only contain a status code (no payload)
_, _, err = c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), http.StatusAccepted, postOpts)
_, _, err = c.do(ctx, http.MethodPost, postBlindedBeaconBlockV2Path, bytes.NewBuffer(body), http.StatusAccepted, postOpts)
if err != nil {
return errors.Wrap(err, "error posting the blinded block to the builder api post-Fulu")
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -170,8 +171,11 @@ func TestClient_RegisterValidator(t *testing.T) {
func TestClient_GetHeader(t *testing.T) {
ctx := t.Context()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
var slot primitives.Slot = 23
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
expectedPath := "/eth/v1/builder/header/%d/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
expectedPath = fmt.Sprintf(expectedPath, ds)
var slot primitives.Slot = ds
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
pubkey := ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
t.Run("server error", func(t *testing.T) {
@@ -533,7 +537,7 @@ func TestClient_GetHeader(t *testing.T) {
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseElectra{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
pro, err := epr.ToProto(100)
pro, err := epr.ToProto(es)
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
@@ -1561,7 +1565,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
t.Run("success", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
@@ -1586,7 +1590,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
t.Run("success_ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
@@ -1612,7 +1616,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
t.Run("error_response", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
message := ErrorMessage{
Code: 400,

View File

@@ -1,9 +1,8 @@
package httprest
import (
"time"
"net/http"
"time"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
)

View File

@@ -290,3 +290,9 @@ type GetProposerLookaheadResponse struct {
Finalized bool `json:"finalized"`
Data []string `json:"data"` // validator indexes
}
type GetBlobsResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []string `json:"data"` //blobs
}

View File

@@ -56,3 +56,19 @@ type ForkChoiceNodeExtraData struct {
TimeStamp string `json:"timestamp"`
Target string `json:"target"`
}
type GetDebugDataColumnSidecarsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*DataColumnSidecar `json:"data"`
}
type DataColumnSidecar struct {
Index string `json:"index"`
Column []string `json:"column"`
KzgCommitments []string `json:"kzg_commitments"`
KzgProofs []string `json:"kzg_proofs"`
SignedBeaconBlockHeader *SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitmentsInclusionProof []string `json:"kzg_commitments_inclusion_proof"`
}
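The new GetDebugDataColumnSidecarsResponse type mirrors the JSON returned by the debug data-column-sidecars endpoint. A minimal decoding sketch — the struct copies below are trimmed to a couple of fields, and the sample response body is made up for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed local copies of the structs added above, so the sketch is self-contained.
type DataColumnSidecar struct {
	Index          string   `json:"index"`
	KzgCommitments []string `json:"kzg_commitments"`
}

type GetDebugDataColumnSidecarsResponse struct {
	Version             string               `json:"version"`
	ExecutionOptimistic bool                 `json:"execution_optimistic"`
	Finalized           bool                 `json:"finalized"`
	Data                []*DataColumnSidecar `json:"data"`
}

func main() {
	// Made-up response body for illustration only.
	body := []byte(`{"version":"fulu","execution_optimistic":false,"finalized":true,"data":[{"index":"3","kzg_commitments":["0xb0..."]}]}`)

	var out GetDebugDataColumnSidecarsResponse
	if err := json.Unmarshal(body, &out); err != nil {
		panic(err)
	}
	fmt.Printf("version=%s sidecars=%d finalized=%v\n", out.Version, len(out.Data), out.Finalized)
}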

View File

@@ -27,7 +27,7 @@ go_library(
"receive_block.go",
"receive_data_column.go",
"service.go",
"setup_forchoice.go",
"setup_forkchoice.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
],
@@ -50,7 +50,6 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
@@ -63,6 +62,7 @@ go_library(
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/light-client:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
@@ -148,7 +148,6 @@ go_test(
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -161,6 +160,7 @@ go_test(
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/light-client:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/attestations/kv:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",

View File

@@ -16,7 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
blocktypes "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
payloadattribute "github.com/OffchainLabs/prysm/v6/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -218,24 +218,18 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
preStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
// stVersion should represent the version of the pre-state; header should also be from the pre-state.
func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header interfaces.ExecutionData, blk blocktypes.ROBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
// Execution payload is only supported in Bellatrix and beyond. Pre
// merge blocks are never optimistic
if blk == nil {
return false, errors.New("signed beacon block can't be nil")
}
if preStateVersion < version.Bellatrix {
if stVersion < version.Bellatrix {
return true, nil
}
if err := consensusblocks.BeaconBlockIsNil(blk); err != nil {
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(preStateHeader, body)
enabled, err := blocks.IsExecutionEnabledUsingHeader(header, body)
if err != nil {
return false, errors.Wrap(invalidBlock{error: err}, "could not determine if execution is enabled")
}
@@ -268,28 +262,32 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
return false, errors.New("nil execution requests")
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
switch {
case err == nil:
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
if err == nil {
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
}
logFields := logrus.Fields{
"slot": blk.Block().Slot(),
"parentRoot": fmt.Sprintf("%#x", parentRoot),
"root": fmt.Sprintf("%#x", blk.Root()),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
log.WithFields(logFields).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
log.WithFields(logFields).WithError(err).Error("Invalid payload status")
return false, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: lvh,
lastValidHash: bytesutil.ToBytes32(lastValidHash),
}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
log.WithFields(logFields).WithError(err).Error("Unexpected execution engine error")
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
// reportInvalidBlock deals with the event that an invalid block was detected by the execution layer

View File

@@ -481,33 +481,12 @@ func Test_NotifyNewPayload(t *testing.T) {
phase0State, _ := util.DeterministicGenesisState(t, 1)
altairState, _ := util.DeterministicGenesisStateAltair(t, 1)
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
a := &ethpb.SignedBeaconBlockAltair{
Block: &ethpb.BeaconBlockAltair{
Body: &ethpb.BeaconBlockBodyAltair{},
},
}
a := util.NewBeaconBlockAltair()
altairBlk, err := consensusblocks.NewSignedBeaconBlock(a)
require.NoError(t, err)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Slot: 1,
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
BlockNumber: 1,
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 1
blk.Block.Body.ExecutionPayload.BlockNumber = 1
bellatrixBlk, err := consensusblocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockBellatrix(blk))
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
@@ -544,12 +523,6 @@ func Test_NotifyNewPayload(t *testing.T) {
blk: altairBlk,
isValidPayload: true,
},
{
name: "nil beacon block",
postState: bellatrixState,
errString: "signed beacon block can't be nil",
isValidPayload: false,
},
{
name: "new payload with optimistic block",
postState: bellatrixState,
@@ -576,15 +549,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "altair pre state, happy case",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -595,24 +561,7 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "not at merge transition",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -623,15 +572,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "happy case",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -642,15 +584,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "undefined error from ee",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -662,15 +597,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "invalid block hash error from ee",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -701,7 +629,9 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, tt.blk)
rob, err := consensusblocks.NewROBlock(tt.blk)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, rob)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
if tt.invalidBlock {
@@ -725,17 +655,12 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
ctx := tr.ctx
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
bellatrixBlk, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
rob, err := consensusblocks.NewROBlock(bellatrixBlk)
require.NoError(t, err)
e := &mockExecution.EngineClient{BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
Header: gethtypes.Header{
@@ -752,7 +677,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
service.cfg.ExecutionEngineCaller = e
postVersion, postHeader, err := getStateVersionAndPayload(bellatrixState)
require.NoError(t, err)
validated, err := service.notifyNewPayload(ctx, postVersion, postHeader, bellatrixBlk)
validated, err := service.notifyNewPayload(ctx, postVersion, postHeader, rob)
require.NoError(t, err)
require.Equal(t, true, validated)
}

View File

@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpbv1 "github.com/OffchainLabs/prysm/v6/proto/eth/v1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -108,7 +107,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
commonRoot = params.BeaconConfig().ZeroHash
}
dis := headSlot + newHeadSlot - 2*forkSlot
dep := math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
dep := max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
oldWeight, err := s.cfg.ForkChoiceStore.Weight(oldHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("Could not determine node weight")
@@ -135,7 +134,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
Type: statefeed.Reorg,
Data: &ethpbv1.EventChainReorg{
Slot: newHeadSlot,
Depth: math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot)),
Depth: max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot)),
OldHeadBlock: oldHeadRoot[:],
NewHeadBlock: newHeadRoot[:],
OldHeadState: oldStateRoot[:],
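The two hunks above swap the project-local math.Max helper for Go's builtin max, which is available since Go 1.21 and generic over ordered types. A small sketch with made-up slot values:

package main

import "fmt"

func main() {
	// Made-up slot values for illustration.
	headSlot, newHeadSlot, forkSlot := uint64(105), uint64(103), uint64(100)

	// Builtin max (Go 1.21+) replaces math.Max(uint64, uint64); builtin min works the same way.
	depth := max(headSlot-forkSlot, newHeadSlot-forkSlot)
	fmt.Println(depth) // prints 5
}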

View File

@@ -14,7 +14,10 @@ const BytesPerBlob = ckzg4844.BytesPerBlob
type Blob [BytesPerBlob]byte
// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.BytesPerCell
const (
BytesPerCell = ckzg4844.BytesPerCell
BytesPerProof = ckzg4844.BytesPerProof
)
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
@@ -23,7 +26,7 @@ type Cell [BytesPerCell]byte
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte
type Proof [BytesPerProof]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48
@@ -102,11 +105,11 @@ func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, c
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}
// RecoverCellsAndKZGProofs recovers the complete cells and KZG proofs from a given set of cell indices and partial cells.
// Note: `len(cellIndices)` must be equal to `len(partialCells)` and `cellIndices` must be sorted in ascending order.
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))

View File

@@ -1,32 +1,14 @@
package kzg
import (
"fmt"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ckzg4844 "github.com/ethereum/c-kzg-4844/v2/bindings/go"
"github.com/pkg/errors"
)
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
if len(sidecars) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(sidecars[0].Blob),
bytesToCommitment(sidecars[0].KzgCommitment),
bytesToKZGProof(sidecars[0].KzgProof))
}
blobs := make([]GoKZG.Blob, len(sidecars))
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs[i] = *bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
func bytesToBlob(blob []byte) *GoKZG.Blob {
var ret GoKZG.Blob
copy(ret[:], blob)
@@ -42,3 +24,144 @@ func bytesToKZGProof(proof []byte) (ret GoKZG.KZGProof) {
copy(ret[:], proof)
return
}
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(blobSidecars ...blocks.ROBlob) error {
if len(blobSidecars) == 0 {
return nil
}
if len(blobSidecars) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(blobSidecars[0].Blob),
bytesToCommitment(blobSidecars[0].KzgCommitment),
bytesToKZGProof(blobSidecars[0].KzgProof))
}
blobs := make([]GoKZG.Blob, len(blobSidecars))
cmts := make([]GoKZG.KZGCommitment, len(blobSidecars))
proofs := make([]GoKZG.KZGProof, len(blobSidecars))
for i, sidecar := range blobSidecars {
blobs[i] = *bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
// VerifyBlobKZGProofBatch verifies KZG proofs for multiple blobs using batch verification.
// This is more efficient than verifying each blob individually when len(blobs) > 1.
// For single blob verification, it uses the optimized single verification path.
func VerifyBlobKZGProofBatch(blobs [][]byte, commitments [][]byte, proofs [][]byte) error {
if len(blobs) != len(commitments) || len(blobs) != len(proofs) {
return errors.Errorf("number of blobs (%d), commitments (%d), and proofs (%d) must match", len(blobs), len(commitments), len(proofs))
}
if len(blobs) == 0 {
return nil
}
// Optimize for single blob case - use single verification to avoid batch overhead
if len(blobs) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(blobs[0]),
bytesToCommitment(commitments[0]),
bytesToKZGProof(proofs[0]))
}
// Use batch verification for multiple blobs
ckzgBlobs := make([]ckzg4844.Blob, len(blobs))
ckzgCommitments := make([]ckzg4844.Bytes48, len(commitments))
ckzgProofs := make([]ckzg4844.Bytes48, len(proofs))
for i := range blobs {
if len(blobs[i]) != len(ckzg4844.Blob{}) {
return fmt.Errorf("blobs len (%d) differs from expected (%d)", len(blobs[i]), len(ckzg4844.Blob{}))
}
if len(commitments[i]) != len(ckzg4844.Bytes48{}) {
return fmt.Errorf("commitments len (%d) differs from expected (%d)", len(commitments[i]), len(ckzg4844.Blob{}))
}
if len(proofs[i]) != len(ckzg4844.Bytes48{}) {
return fmt.Errorf("proofs len (%d) differs from expected (%d)", len(proofs[i]), len(ckzg4844.Blob{}))
}
ckzgBlobs[i] = ckzg4844.Blob(blobs[i])
ckzgCommitments[i] = ckzg4844.Bytes48(commitments[i])
ckzgProofs[i] = ckzg4844.Bytes48(proofs[i])
}
valid, err := ckzg4844.VerifyBlobKZGProofBatch(ckzgBlobs, ckzgCommitments, ckzgProofs)
if err != nil {
return errors.Wrap(err, "batch verification")
}
if !valid {
return errors.New("batch KZG proof verification failed")
}
return nil
}
// VerifyCellKZGProofBatchFromBlobData verifies cell KZG proofs in batch format directly from blob data.
// This is more efficient than reconstructing data column sidecars when you have the raw blob data and cell proofs.
// For PeerDAS/Fulu, the execution client provides cell proofs in flattened format via BlobsBundleV2.
// For single blob verification, it optimizes by computing cells once and verifying efficiently.
func VerifyCellKZGProofBatchFromBlobData(blobs [][]byte, commitments [][]byte, cellProofs [][]byte, numberOfColumns uint64) error {
blobCount := uint64(len(blobs))
expectedCellProofs := blobCount * numberOfColumns
if uint64(len(cellProofs)) != expectedCellProofs {
return errors.Errorf("expected %d cell proofs, got %d", expectedCellProofs, len(cellProofs))
}
if len(commitments) != len(blobs) {
return errors.Errorf("number of commitments (%d) must match number of blobs (%d)", len(commitments), len(blobs))
}
if blobCount == 0 {
return nil
}
// Handle multiple blobs - compute cells for all blobs
allCells := make([]Cell, 0, expectedCellProofs)
allCommitments := make([]Bytes48, 0, expectedCellProofs)
allIndices := make([]uint64, 0, expectedCellProofs)
allProofs := make([]Bytes48, 0, expectedCellProofs)
for blobIndex := range blobs {
if len(blobs[blobIndex]) != len(Blob{}) {
return fmt.Errorf("blobs len (%d) differs from expected (%d)", len(blobs[blobIndex]), len(Blob{}))
}
// Convert blob to kzg.Blob type
blob := Blob(blobs[blobIndex])
// Compute cells for this blob
cells, err := ComputeCells(&blob)
if err != nil {
return errors.Wrapf(err, "failed to compute cells for blob %d", blobIndex)
}
// Add cells and corresponding data for each column
for columnIndex := range numberOfColumns {
cellProofIndex := uint64(blobIndex)*numberOfColumns + columnIndex
if len(commitments[blobIndex]) != len(Bytes48{}) {
return fmt.Errorf("commitments len (%d) differs from expected (%d)", len(commitments[blobIndex]), len(Bytes48{}))
}
if len(cellProofs[cellProofIndex]) != len(Bytes48{}) {
return fmt.Errorf("proofs len (%d) differs from expected (%d)", len(cellProofs[cellProofIndex]), len(Bytes48{}))
}
allCells = append(allCells, cells[columnIndex])
allCommitments = append(allCommitments, Bytes48(commitments[blobIndex]))
allIndices = append(allIndices, columnIndex)
allProofs = append(allProofs, Bytes48(cellProofs[cellProofIndex]))
}
}
// Batch verify all cells
valid, err := VerifyCellKZGProofBatch(allCommitments, allIndices, allCells, allProofs)
if err != nil {
return errors.Wrap(err, "cell batch verification")
}
if !valid {
return errors.New("cell KZG proof batch verification failed")
}
return nil
}
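The new VerifyBlobKZGProofBatch takes parallel slices of raw blob, commitment, and proof bytes. A minimal usage sketch — the import path is an assumption based on the package name, and kzg.Start() is assumed to load the trusted setup before any verification:

package main

import (
	"log"

	// Assumed import path for the kzg package shown in this diff.
	"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
)

// verifyBundle verifies parallel slices of raw blobs, commitments, and proofs,
// e.g. as delivered in an execution client's blobs bundle.
func verifyBundle(blobs, commitments, proofs [][]byte) error {
	// Batch verification; a single blob falls back to the optimized
	// single-proof path, and empty input is a no-op.
	return kzg.VerifyBlobKZGProofBatch(blobs, commitments, proofs)
}

func main() {
	// Load the trusted setup once at startup.
	if err := kzg.Start(); err != nil {
		log.Fatal(err)
	}
	// Empty input returns nil per the new helper.
	if err := verifyBundle(nil, nil, nil); err != nil {
		log.Fatal(err)
	}
}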

View File

@@ -22,8 +22,8 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
}
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
blobSidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(blobSidecars...))
}
func TestBytesToAny(t *testing.T) {
@@ -37,6 +37,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
require.NoError(t, Start())
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
@@ -45,3 +46,432 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}
func TestVerifyBlobKZGProofBatch(t *testing.T) {
// Initialize KZG for testing
require.NoError(t, Start())
t.Run("valid single blob batch", func(t *testing.T) {
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
proofs := [][]byte{proof[:]}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.NoError(t, err)
})
t.Run("valid multiple blob batch", func(t *testing.T) {
blobCount := 3
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
proofs := make([][]byte, blobCount)
for i := 0; i < blobCount; i++ {
blob := random.GetRandBlob(int64(i))
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
proofs[i] = proof[:]
}
err := VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.NoError(t, err)
})
t.Run("empty inputs should pass", func(t *testing.T) {
err := VerifyBlobKZGProofBatch([][]byte{}, [][]byte{}, [][]byte{})
require.NoError(t, err)
})
t.Run("mismatched input lengths", func(t *testing.T) {
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
// Test different mismatch scenarios
err = VerifyBlobKZGProofBatch(
[][]byte{blob[:]},
[][]byte{},
[][]byte{proof[:]},
)
require.ErrorContains(t, "number of blobs (1), commitments (0), and proofs (1) must match", err)
err = VerifyBlobKZGProofBatch(
[][]byte{blob[:], blob[:]},
[][]byte{commitment[:]},
[][]byte{proof[:], proof[:]},
)
require.ErrorContains(t, "number of blobs (2), commitments (1), and proofs (2) must match", err)
})
t.Run("invalid commitment should fail", func(t *testing.T) {
blob := random.GetRandBlob(123)
_, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
// Use a different blob's commitment (mismatch)
differentBlob := random.GetRandBlob(456)
wrongCommitment, _, err := GenerateCommitmentAndProof(differentBlob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{wrongCommitment[:]}
proofs := [][]byte{proof[:]}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
// Single blob optimization uses different error message
require.ErrorContains(t, "can't verify opening proof", err)
})
t.Run("invalid proof should fail", func(t *testing.T) {
blob := random.GetRandBlob(123)
commitment, _, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
// Use wrong proof
invalidProof := make([]byte, 48) // All zeros
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
proofs := [][]byte{invalidProof}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.ErrorContains(t, "short buffer", err)
})
t.Run("mixed valid and invalid proofs should fail", func(t *testing.T) {
// First blob - valid
blob1 := random.GetRandBlob(123)
commitment1, proof1, err := GenerateCommitmentAndProof(blob1)
require.NoError(t, err)
// Second blob - invalid proof
blob2 := random.GetRandBlob(456)
commitment2, _, err := GenerateCommitmentAndProof(blob2)
require.NoError(t, err)
invalidProof := make([]byte, 48) // All zeros
blobs := [][]byte{blob1[:], blob2[:]}
commitments := [][]byte{commitment1[:], commitment2[:]}
proofs := [][]byte{proof1[:], invalidProof}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.ErrorContains(t, "batch verification", err)
})
t.Run("batch KZG proof verification failed", func(t *testing.T) {
// Create multiple blobs with mismatched commitments and proofs to trigger batch verification failure
blob1 := random.GetRandBlob(123)
blob2 := random.GetRandBlob(456)
// Generate valid proof for blob1
commitment1, proof1, err := GenerateCommitmentAndProof(blob1)
require.NoError(t, err)
// Generate valid proof for blob2 but use wrong commitment (from blob1)
_, proof2, err := GenerateCommitmentAndProof(blob2)
require.NoError(t, err)
// Use blob2 data with blob1's commitment and blob2's proof - this should cause batch verification to fail
blobs := [][]byte{blob1[:], blob2[:]}
commitments := [][]byte{commitment1[:], commitment1[:]} // Wrong commitment for blob2
proofs := [][]byte{proof1[:], proof2[:]}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.ErrorContains(t, "batch KZG proof verification failed", err)
})
}
func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
// Initialize KZG for testing
require.NoError(t, Start())
t.Run("valid single blob cell verification", func(t *testing.T) {
numberOfColumns := uint64(128)
// Generate blob and commitment
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
// Compute cells and proofs
cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
require.NoError(t, err)
// Create flattened cell proofs (like execution client format)
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = cellsAndProofs.Proofs[i][:]
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.NoError(t, err)
})
t.Run("valid multiple blob cell verification", func(t *testing.T) {
numberOfColumns := uint64(128)
blobCount := 2
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
for i := range blobCount {
// Generate blob and commitment
randBlob := random.GetRandBlob(int64(i))
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
// Compute cells and proofs
cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
// Add cell proofs for this blob
for j := range numberOfColumns {
allCellProofs = append(allCellProofs, cellsAndProofs.Proofs[j][:])
}
}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, allCellProofs, numberOfColumns)
require.NoError(t, err)
})
t.Run("empty inputs should pass", func(t *testing.T) {
err := VerifyCellKZGProofBatchFromBlobData([][]byte{}, [][]byte{}, [][]byte{}, 128)
require.NoError(t, err)
})
t.Run("mismatched blob and commitment count", func(t *testing.T) {
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
err := VerifyCellKZGProofBatchFromBlobData(
[][]byte{blob[:]},
[][]byte{}, // Empty commitments
[][]byte{},
128,
)
require.ErrorContains(t, "expected 128 cell proofs", err)
})
t.Run("wrong cell proof count", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
// Wrong number of cell proofs - should be 128 for 1 blob, but provide 10
wrongCellProofs := make([][]byte, 10)
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, wrongCellProofs, numberOfColumns)
require.ErrorContains(t, "expected 128 cell proofs, got 10", err)
})
t.Run("invalid cell proofs should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
// Create invalid cell proofs (all zeros)
invalidCellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
invalidCellProofs[i] = make([]byte, 48) // All zeros
}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, invalidCellProofs, numberOfColumns)
require.ErrorContains(t, "cell batch verification", err)
})
t.Run("mismatched commitment should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
// Generate blob and correct cell proofs
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
require.NoError(t, err)
// Generate wrong commitment from different blob
randBlob2 := random.GetRandBlob(456)
var differentBlob Blob
copy(differentBlob[:], randBlob2[:])
wrongCommitment, err := BlobToKZGCommitment(&differentBlob)
require.NoError(t, err)
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = cellsAndProofs.Proofs[i][:]
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{wrongCommitment[:]}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.ErrorContains(t, "cell KZG proof batch verification failed", err)
})
t.Run("invalid blob data that should cause ComputeCells to fail", func(t *testing.T) {
numberOfColumns := uint64(128)
// Create invalid blob (not properly formatted)
invalidBlobData := make([]byte, 10) // Too short
commitment := make([]byte, 48) // Dummy commitment
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = make([]byte, 48)
}
blobs := [][]byte{invalidBlobData}
commitments := [][]byte{commitment}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.NotNil(t, err)
require.ErrorContains(t, "blobs len (10) differs from expected (131072)", err)
})
t.Run("invalid commitment size should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
// Create invalid commitment (wrong size)
invalidCommitment := make([]byte, 32) // Should be 48 bytes
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = make([]byte, 48)
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{invalidCommitment}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.ErrorContains(t, "commitments len (32) differs from expected (48)", err)
})
t.Run("invalid cell proof size should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
// Create invalid cell proofs (wrong size)
invalidCellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
if i == 0 {
invalidCellProofs[i] = make([]byte, 32) // Wrong size - should be 48
} else {
invalidCellProofs[i] = make([]byte, 48)
}
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, invalidCellProofs, numberOfColumns)
require.ErrorContains(t, "proofs len (32) differs from expected (48)", err)
})
t.Run("multiple blobs with mixed invalid commitments", func(t *testing.T) {
numberOfColumns := uint64(128)
blobCount := 2
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
// First blob - valid
randBlob1 := random.GetRandBlob(123)
var blob1 Blob
copy(blob1[:], randBlob1[:])
commitment1, err := BlobToKZGCommitment(&blob1)
require.NoError(t, err)
blobs[0] = blob1[:]
commitments[0] = commitment1[:]
// Second blob - use invalid commitment size
randBlob2 := random.GetRandBlob(456)
var blob2 Blob
copy(blob2[:], randBlob2[:])
blobs[1] = blob2[:]
commitments[1] = make([]byte, 32) // Wrong size
// Add cell proofs for both blobs
for i := 0; i < blobCount; i++ {
for j := uint64(0); j < numberOfColumns; j++ {
allCellProofs = append(allCellProofs, make([]byte, 48))
}
}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, allCellProofs, numberOfColumns)
require.ErrorContains(t, "commitments len (32) differs from expected (48)", err)
})
t.Run("multiple blobs with mixed invalid cell proof sizes", func(t *testing.T) {
numberOfColumns := uint64(128)
blobCount := 2
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
for i := 0; i < blobCount; i++ {
randBlob := random.GetRandBlob(int64(i))
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
// Add cell proofs - make some invalid in the second blob
for j := uint64(0); j < numberOfColumns; j++ {
if i == 1 && j == 64 {
// Invalid proof size in middle of second blob's proofs
allCellProofs = append(allCellProofs, make([]byte, 20))
} else {
allCellProofs = append(allCellProofs, make([]byte, 48))
}
}
}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, allCellProofs, numberOfColumns)
require.ErrorContains(t, "proofs len (20) differs from expected (48)", err)
})
}

View File

@@ -6,11 +6,11 @@ import (
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/slashings"
@@ -35,7 +35,7 @@ func WithMaxGoroutines(x int) Option {
// WithLCStore for light client store access.
func WithLCStore() Option {
return func(s *Service) error {
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
s.lcStore = lightclient.NewLightClientStore(s.cfg.P2P, s.cfg.StateNotifier.StateFeed(), s.cfg.BeaconDB)
return nil
}
}

View File

@@ -3,7 +3,6 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -159,7 +158,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
}
// Fill in missing blocks
if err := s.fillInForkChoiceMissingBlocks(ctx, blks[0], preState.CurrentJustifiedCheckpoint(), preState.FinalizedCheckpoint()); err != nil {
if err := s.fillInForkChoiceMissingBlocks(ctx, blks[0], preState.FinalizedCheckpoint(), preState.CurrentJustifiedCheckpoint()); err != nil {
return errors.Wrap(err, "could not fill in missing blocks to forkchoice")
}
@@ -240,13 +239,14 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
}
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
if err := s.areSidecarsAvailable(ctx, avs, b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability for block %#x at slot %d", b.Root(), b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
pendingNodes[len(blks)-i-1] = args
pendingNodes[i] = args
if err := s.saveInitSyncBlock(ctx, root, b); err != nil {
tracing.AnnotateError(span, err)
return err
@@ -283,14 +283,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return err
}
// Insert all nodes but the last one to forkchoice
// Insert all nodes to forkchoice
if err := s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes); err != nil {
return errors.Wrap(err, "could not insert batch to forkchoice")
}
// Insert the last block to forkchoice
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, preState, lastB); err != nil {
return errors.Wrap(err, "could not insert last block in batch to forkchoice")
}
// Set their optimistic status
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, lastBR); err != nil {
@@ -308,6 +304,30 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
if blockVersion >= version.Fulu {
if err := s.areDataColumnsAvailable(ctx, roBlock.Root(), block); err != nil {
return errors.Wrapf(err, "are data columns available for block %#x with slot %d", roBlock.Root(), slot)
}
return nil
}
if blockVersion >= version.Deneb {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), roBlock); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", slot)
}
return nil
}
return nil
}
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -639,14 +659,14 @@ func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldpa
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
signedBlock interfaces.ReadOnlySignedBeaconBlock,
roBlock consensusblocks.ROBlock,
) error {
block := signedBlock.Block()
block := roBlock.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
root := roBlock.Root()
blockVersion := block.Version()
if blockVersion >= version.Fulu {
return s.areDataColumnsAvailable(ctx, root, block)
@@ -666,8 +686,6 @@ func (s *Service) areDataColumnsAvailable(
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
@@ -694,13 +712,14 @@ func (s *Service) areDataColumnsAvailable(
nodeID := s.cfg.P2P.NodeID()
// Get the custody group sampling size for the node.
custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount()
custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount(ctx)
if err != nil {
return errors.Wrap(err, "custody group count")
}
// Compute the sampling size.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#custody-sampling
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
samplingSize := max(samplesPerSlot, custodyGroupCount)
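// As a rough example, assuming the Fulu spec values SAMPLES_PER_SLOT = 8 and NUMBER_OF_CUSTODY_GROUPS = 128:
// a node custodying 4 groups samples max(8, 4) = 8 columns per slot, while a supernode custodying
// all 128 groups samples max(8, 128) = 128.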
// Get the peer info for the node.
@@ -726,14 +745,14 @@ func (s *Service) areDataColumnsAvailable(
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
missing, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
if err != nil {
return errors.Wrap(err, "missing data columns")
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missingMap) == 0 {
if len(missing) == 0 {
return nil
}
@@ -750,33 +769,17 @@ func (s *Service) areDataColumnsAvailable(
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
timer := time.AfterFunc(time.Until(nextSlot), func() {
missingMapCount := uint64(len(missingMap))
missingCount := uint64(len(missing))
if missingMapCount == 0 {
if missingCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(peerInfo.CustodyColumns))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": missing,
"columnsExpected": helpers.SortedPrettySliceFromMap(peerInfo.CustodyColumns),
"columnsWaiting": helpers.SortedPrettySliceFromMap(missing),
}).Warning("Data columns still missing at slot end")
})
defer timer.Stop()
@@ -792,7 +795,7 @@ func (s *Service) areDataColumnsAvailable(
for _, index := range idents.Indices {
// This is a data column we are expecting.
if _, ok := missingMap[index]; ok {
if _, ok := missing[index]; ok {
storedDataColumnsCount++
}
@@ -803,10 +806,10 @@ func (s *Service) areDataColumnsAvailable(
}
// Remove the index from the missing map.
delete(missingMap, index)
delete(missing, index)
// Return if there is no more missing data columns.
if len(missingMap) == 0 {
if len(missing) == 0 {
return nil
}
}
@@ -814,10 +817,10 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missingMap))
missingIndicesCount := uint64(len(missing))
if missingIndicesCount < numberOfColumns {
missingIndices = uint64MapToSortedSlice(missingMap)
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing: %v", block.Slot(), root, missingIndices)
@@ -901,16 +904,6 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
}
}
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot


@@ -3,14 +3,12 @@ package blockchain
import (
"context"
"fmt"
"strings"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
@@ -30,6 +28,10 @@ import (
"github.com/sirupsen/logrus"
)
// ErrInvalidCheckpointArgs may be returned when the finalized checkpoint has an epoch greater than the justified checkpoint epoch.
// If you are seeing this error, make sure you haven't mixed up the order of the arguments in the method you are calling.
var ErrInvalidCheckpointArgs = errors.New("finalized checkpoint cannot be greater than justified checkpoint")
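// A minimal sketch of the expected argument order (finalized first, then justified), mirroring the
// updated onBlockBatch call; fillInForkChoiceMissingBlocks returns ErrInvalidCheckpointArgs otherwise:
//
//	if err := s.fillInForkChoiceMissingBlocks(ctx, blk, preState.FinalizedCheckpoint(), preState.CurrentJustifiedCheckpoint()); err != nil {
//		return errors.Wrap(err, "could not fill in missing blocks to forkchoice")
//	}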
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() primitives.Slot {
return slots.CurrentSlot(s.genesisTime)
@@ -129,35 +131,26 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
// processLightClientUpdates saves the light client data in lcStore when the feature flag is enabled.
func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientUpdate(cfg); err != nil {
log.WithError(err).Error("Failed to process light client update")
}
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
if err := s.processLightClientFinalityUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client finality update")
}
}
// processLightClientUpdate saves the light client update for this block
// if it's better than the already saved one, when feature flag is enabled.
func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
attestedRoot := cfg.roblock.Block().ParentRoot()
attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
log.WithError(err).Error("processLightClientUpdates: Could not get attested block")
return
}
if attestedBlock == nil || attestedBlock.IsNil() {
return errors.New("attested block is nil")
log.Error("processLightClientUpdates: Could not get attested block")
return
}
attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
log.WithError(err).Error("processLightClientUpdates: Could not get attested state")
return
}
if attestedState == nil || attestedState.IsNil() {
return errors.New("attested state is nil")
log.Error("processLightClientUpdates: Could not get attested state")
return
}
finalizedRoot := attestedState.FinalizedCheckpoint().Root
@@ -165,98 +158,17 @@ func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
return nil
return
}
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
log.WithError(err).Error("processLightClientUpdates: Could not get finalized block")
return
}
update, err := lightclient.NewLightClientUpdateFromBeaconState(cfg.ctx, cfg.postState, cfg.roblock, attestedState, attestedBlock, finalizedBlock)
err = s.lcStore.SaveLCData(cfg.ctx, cfg.postState, cfg.roblock, attestedState, attestedBlock, finalizedBlock, s.headRoot())
if err != nil {
return errors.Wrapf(err, "could not create light client update")
log.WithError(err).Error("processLightClientUpdates: Could not save light client data")
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
return s.lcStore.SaveLightClientUpdate(cfg.ctx, period, update)
}
func (s *Service) processLightClientFinalityUpdate(
ctx context.Context,
signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState,
) error {
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
if finalizedCheckpoint == nil {
return nil
}
finalizedRoot := bytesutil.ToBytes32(finalizedCheckpoint.Root)
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping processing light client finality update: Finalized block is nil for root %#x", finalizedRoot)
return nil
}
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
newUpdate, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, postState, signed, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return errors.Wrap(err, "could not create light client finality update")
}
if !lightclient.IsBetterFinalityUpdate(newUpdate, s.lcStore.LastFinalityUpdate()) {
log.Debug("Skip saving light client finality update: current update is better")
return nil
}
s.lcStore.SetLastFinalityUpdate(newUpdate, true)
return nil
}
func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) error {
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
newUpdate, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, postState, signed, attestedState, attestedBlock)
if err != nil {
if strings.Contains(err.Error(), lightclient.ErrNotEnoughSyncCommitteeBits) {
log.WithError(err).Debug("Skipping processing light client optimistic update")
return nil
}
return errors.Wrap(err, "could not create light client optimistic update")
}
if !lightclient.IsBetterOptimisticUpdate(newUpdate, s.lcStore.LastOptimisticUpdate()) {
log.Debug("Skip saving light client optimistic update: current update is better")
return nil
}
s.lcStore.SetLastOptimisticUpdate(newUpdate, true)
return nil
log.Debug("Processed light client updates")
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
@@ -454,6 +366,9 @@ func (s *Service) ancestorByDB(ctx context.Context, r [32]byte, slot primitives.
// This is useful for block tree visualizer and additional vote accounting.
func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
if fCheckpoint.Epoch > jCheckpoint.Epoch {
return ErrInvalidCheckpointArgs
}
pendingNodes := make([]*forkchoicetypes.BlockAndCheckpoints, 0)
// Fork choice only matters from last finalized slot.
@@ -462,15 +377,8 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
if err != nil {
return err
}
// The first block can have a bogus root since the block is not inserted in forkchoice
roblock, err := consensus_blocks.NewROBlockWithRoot(signed, [32]byte{})
if err != nil {
return err
}
pendingNodes = append(pendingNodes, &forkchoicetypes.BlockAndCheckpoints{Block: roblock,
JustifiedCheckpoint: jCheckpoint, FinalizedCheckpoint: fCheckpoint})
root := signed.Block().ParentRoot()
// As long as parent node is not in fork choice store, and parent node is in DB.
root := roblock.Block().ParentRoot()
for !s.cfg.ForkChoiceStore.HasNode(root) && s.cfg.BeaconDB.HasBlock(ctx, root) {
b, err := s.getBlock(ctx, root)
if err != nil {
@@ -489,12 +397,13 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
FinalizedCheckpoint: fCheckpoint}
pendingNodes = append(pendingNodes, args)
}
if len(pendingNodes) == 1 {
if len(pendingNodes) == 0 {
return nil
}
if root != s.ensureRootNotZeros(finalized.Root) && !s.cfg.ForkChoiceStore.HasNode(root) {
return ErrNotDescendantOfFinalized
}
slices.Reverse(pendingNodes)
return s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes)
}


@@ -12,7 +12,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
@@ -24,6 +23,7 @@ import (
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations/kv"
mockp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -375,6 +375,81 @@ func TestFillForkChoiceMissingBlocks_FinalizedSibling(t *testing.T) {
require.Equal(t, ErrNotDescendantOfFinalized.Error(), err.Error())
}
func TestFillForkChoiceMissingBlocks_ErrorCases(t *testing.T) {
tests := []struct {
name string
finalizedEpoch primitives.Epoch
justifiedEpoch primitives.Epoch
expectedError error
}{
{
name: "finalized epoch greater than justified epoch",
finalizedEpoch: 5,
justifiedEpoch: 3,
expectedError: ErrInvalidCheckpointArgs,
},
{
name: "valid case - finalized equal to justified",
finalizedEpoch: 3,
justifiedEpoch: 3,
expectedError: nil,
},
{
name: "valid case - finalized less than justified",
finalizedEpoch: 2,
justifiedEpoch: 3,
expectedError: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
service, tr := minimalTestService(t)
ctx, beaconDB := tr.ctx, tr.db
st, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, service.saveGenesisData(ctx, st))
// Create a simple block for testing
blk := util.NewBeaconBlock()
blk.Block.Slot = 10
blk.Block.ParentRoot = service.originBlockRoot[:]
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, blk)
// Create checkpoints with test case epochs
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: tt.finalizedEpoch,
Root: service.originBlockRoot[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: tt.justifiedEpoch,
Root: service.originBlockRoot[:],
}
// Set up forkchoice store to avoid other errors
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, service.originBlockRoot, service.originBlockRoot, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
err = service.fillInForkChoiceMissingBlocks(
t.Context(), wsb, finalizedCheckpoint, justifiedCheckpoint)
if tt.expectedError != nil {
require.ErrorIs(t, err, tt.expectedError)
} else {
// For valid cases, we might get other errors (like block not being descendant of finalized)
// but we shouldn't get the checkpoint validation error
if err != nil && errors.Is(err, ErrInvalidCheckpointArgs) {
t.Errorf("Unexpected checkpoint validation error: %v", err)
}
}
})
}
}
// blockTree1 constructs the following tree:
//
// /- B1
@@ -2132,13 +2207,13 @@ func TestNoViableHead_Reboot(t *testing.T) {
// Forkchoice has the genesisRoot loaded at startup
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.cfg.ForkChoiceStore.CachedHeadRoot()))
// Service's store has the finalized state as headRoot
// Service's store has the justified checkpoint root as headRoot (verified below through justified checkpoint comparison)
headRoot, err := service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, genesisRoot, bytesutil.ToBytes32(headRoot))
require.NotEqual(t, bytesutil.ToBytes32(params.BeaconConfig().ZeroHash[:]), bytesutil.ToBytes32(headRoot)) // Ensure head is not zero
optimistic, err := service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, optimistic)
require.Equal(t, true, optimistic) // Head is now optimistic when starting from justified checkpoint
// Check that the node's justified checkpoint does not agree with the
// last valid state's justified checkpoint
@@ -2338,6 +2413,8 @@ func driftGenesisTime(s *Service, slot primitives.Slot, delay time.Duration) {
}
func TestMissingBlobIndices(t *testing.T) {
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
maxBlobs := params.BeaconConfig().MaxBlobsPerBlock(ds)
cases := []struct {
name string
expected [][]byte
@@ -2351,23 +2428,23 @@ func TestMissingBlobIndices(t *testing.T) {
},
{
name: "expected exceeds max",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0) + 1),
expected: fakeCommitments(maxBlobs + 1),
err: errMaxBlobsExceeded,
},
{
name: "first missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(maxBlobs),
present: []uint64{1, 2, 3, 4, 5},
result: fakeResult([]uint64{0}),
},
{
name: "all missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(maxBlobs),
result: fakeResult([]uint64{0, 1, 2, 3, 4, 5}),
},
{
name: "none missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(maxBlobs),
present: []uint64{0, 1, 2, 3, 4, 5},
result: fakeResult([]uint64{}),
},
@@ -2400,8 +2477,8 @@ func TestMissingBlobIndices(t *testing.T) {
for _, c := range cases {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
missing, err := missingBlobIndices(bs, c.root, c.expected, 0)
require.NoError(t, bm.CreateFakeIndices(c.root, ds, c.present...))
missing, err := missingBlobIndices(bs, c.root, c.expected, ds)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
@@ -2720,6 +2797,11 @@ func TestProcessLightClientUpdate(t *testing.T) {
s, tr := minimalTestService(t, WithLCStore())
ctx := tr.ctx
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveState(ctx, headState, [32]byte{1, 2}))
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
@@ -2742,6 +2824,8 @@ func TestProcessLightClientUpdate(t *testing.T) {
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
@@ -2756,10 +2840,9 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
t.Run("no old update", func(t *testing.T) {
require.NoError(t, s.processLightClientUpdate(cfg))
s.processLightClientUpdates(cfg)
// Check that the light client update is saved
u, err := s.lcStore.LightClientUpdate(ctx, period)
u, err := s.lcStore.LightClientUpdate(ctx, period, l.Block)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
@@ -2773,12 +2856,12 @@ func TestProcessLightClientUpdate(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.processLightClientUpdates(cfg)
u, err := s.lcStore.LightClientUpdate(ctx, period)
u, err := s.lcStore.LightClientUpdate(ctx, period, l.Block)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
@@ -2802,12 +2885,12 @@ func TestProcessLightClientUpdate(t *testing.T) {
SyncCommitteeSignature: make([]byte, 96),
})
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.processLightClientUpdates(cfg)
u, err := s.lcStore.LightClientUpdate(ctx, period)
u, err := s.lcStore.LightClientUpdate(ctx, period, l.Block)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
@@ -2823,22 +2906,21 @@ type testIsAvailableParams struct {
columnsToSave []uint64
}
func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
func testIsAvailableSetup(t *testing.T, p testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
ctx, cancel := context.WithCancel(t.Context())
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
options := append(params.options, WithDataColumnStorage(dataColumnStorage))
options := append(p.options, WithDataColumnStorage(dataColumnStorage))
service, _ := minimalTestService(t, options...)
fs := util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)
genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32 /*validator count*/)
err := service.saveGenesisData(ctx, genesisState)
require.NoError(t, err)
genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32, util.WithElectraStateSlot(fs))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
conf := util.DefaultBlockGenConfig()
conf.NumBlobKzgCommitments = params.blobKzgCommitmentsCount
conf.NumBlobKzgCommitments = p.blobKzgCommitmentsCount
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, fs+1)
require.NoError(t, err)
block := signedBeaconBlock.Block
@@ -2848,8 +2930,8 @@ func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.C
root, err := block.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParam, 0, len(params.columnsToSave))
for _, i := range params.columnsToSave {
dataColumnsParams := make([]util.DataColumnParam, 0, len(p.columnsToSave))
for _, i := range p.columnsToSave {
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: block.Slot,
@@ -2873,21 +2955,28 @@ func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.C
}
func TestIsDataAvailable(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch, cfg.BellatrixForkEpoch, cfg.CapellaForkEpoch, cfg.DenebForkEpoch, cfg.ElectraForkEpoch, cfg.FuluForkEpoch = 0, 0, 0, 0, 0, 0
params.OverrideBeaconConfig(cfg)
t.Run("Fulu - out of retention window", func(t *testing.T) {
params := testIsAvailableParams{options: []Option{WithGenesisTime(time.Unix(0, 0))}}
params := testIsAvailableParams{}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
err := service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
t.Run("Fulu - no commitment in blocks", func(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, testIsAvailableParams{})
err := service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
@@ -2902,7 +2991,9 @@ func TestIsDataAvailable(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
err := service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
@@ -2914,7 +3005,9 @@ func TestIsDataAvailable(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
err := service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
@@ -2962,7 +3055,9 @@ func TestIsDataAvailable(t *testing.T) {
ctx, cancel := context.WithTimeout(ctx, time.Second*2)
defer cancel()
err = service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
@@ -3024,7 +3119,9 @@ func TestIsDataAvailable(t *testing.T) {
ctx, cancel := context.WithTimeout(ctx, time.Second*2)
defer cancel()
err = service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
@@ -3043,7 +3140,9 @@ func TestIsDataAvailable(t *testing.T) {
cancel()
}()
err := service.isDataAvailable(ctx, root, signed)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
require.NoError(t, err)
err = service.isDataAvailable(ctx, roBlock)
require.NotNil(t, err)
})
}
@@ -3115,6 +3214,11 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
s.cfg.P2P = &mockp2p.FakeP2P{}
ctx := tr.ctx
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveState(ctx, headState, [32]byte{1, 2}))
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
testCases := []struct {
name string
oldOptions []util.LightClientOption
@@ -3130,7 +3234,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
{
name: "Same age",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithSupermajority()}, // supermajority does not matter here and is only added to result in two different updates
newOptions: []util.LightClientOption{util.WithSupermajority(0)}, // supermajority does not matter here and is only added to result in two different updates
expectReplace: false,
},
{
@@ -3174,14 +3278,14 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
s.lcStore = lightClient.NewLightClientStore(s.cfg.P2P, s.cfg.StateNotifier.StateFeed(), s.cfg.BeaconDB)
var oldActualUpdate interfaces.LightClientOptimisticUpdate
var err error
if tc.oldOptions != nil {
// config for old update
lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
require.NoError(t, s.processLightClientOptimisticUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))
s.processLightClientUpdates(cfgOld)
oldActualUpdate, err = lightClient.NewLightClientOptimisticUpdateFromBeaconState(lOld.Ctx, lOld.State, lOld.Block, lOld.AttestedState, lOld.AttestedBlock)
require.NoError(t, err)
@@ -3195,7 +3299,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
// config for new update
lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
require.NoError(t, s.processLightClientOptimisticUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))
s.processLightClientUpdates(cfgNew)
newActualUpdate, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lNew.Ctx, lNew.State, lNew.Block, lNew.AttestedState, lNew.AttestedBlock)
require.NoError(t, err)
@@ -3236,6 +3340,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
s, tr := minimalTestService(t)
s.cfg.P2P = &mockp2p.FakeP2P{}
ctx := tr.ctx
s.head = &head{}
testCases := []struct {
name string
@@ -3314,15 +3419,18 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
s.lcStore = lightClient.NewLightClientStore(s.cfg.P2P, s.cfg.StateNotifier.StateFeed(), s.cfg.BeaconDB)
var actualOldUpdate, actualNewUpdate interfaces.LightClientFinalityUpdate
var err error
if tc.oldOptions != nil {
// config for old update
lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
require.NoError(t, s.processLightClientFinalityUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))
blkRoot, err := lOld.Block.Block().HashTreeRoot()
require.NoError(t, err)
s.head.block = lOld.Block
s.head.root = blkRoot
s.processLightClientUpdates(cfgOld)
// check that the old update is saved
actualOldUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, cfgOld.postState, cfgOld.roblock, lOld.AttestedState, lOld.AttestedBlock, lOld.FinalizedBlock)
@@ -3333,7 +3441,11 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
// config for new update
lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
require.NoError(t, s.processLightClientFinalityUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))
blkRoot, err := lNew.Block.Block().HashTreeRoot()
require.NoError(t, err)
s.head.block = lNew.Block
s.head.root = blkRoot
s.processLightClientUpdates(cfgNew)
// check that the actual old update and the actual new update are different
actualNewUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, cfgNew.postState, cfgNew.roblock, lNew.AttestedState, lNew.AttestedBlock, lNew.FinalizedBlock)


@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/slasher/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -93,7 +92,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
blockCopy, err := block.Copy()
if err != nil {
return err
return errors.Wrap(err, "block copy")
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
@@ -104,17 +103,17 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
currentCheckpoints := s.saveCurrentCheckpoints(preState)
roblock, err := blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return err
return errors.Wrap(err, "new ro block with root")
}
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
return errors.Wrap(err, "validator execution and consensus")
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
daWaitedTime, err := s.handleDA(ctx, avs, roblock)
if err != nil {
return err
return errors.Wrap(err, "handle da")
}
// Defragment the state before continuing block processing.
@@ -135,10 +134,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := s.postBlockProcess(args); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return err
return errors.Wrap(err, "post block process")
}
if err := s.updateCheckpoints(ctx, currentCheckpoints, preState, postState, blockRoot); err != nil {
return err
return errors.Wrap(err, "update checkpoints")
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
if s.slasherEnabled {
@@ -152,12 +151,12 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// Have we been finalizing? Should we start saving hot states to db?
if err := s.checkSaveHotStateDB(ctx); err != nil {
return err
return errors.Wrap(err, "check save hot state db")
}
// We apply the same heuristic to some of our more important caches.
if err := s.handleCaches(); err != nil {
return err
return errors.Wrap(err, "handle caches")
}
s.reportPostBlockProcessing(blockCopy, blockRoot, receivedTime, daWaitedTime)
return nil
@@ -240,37 +239,19 @@ func (s *Service) validateExecutionAndConsensus(
return postState, isValidPayload, nil
}
func (s *Service) handleDA(
ctx context.Context,
block interfaces.SignedBeaconBlock,
blockRoot [fieldparams.RootLength]byte,
avs das.AvailabilityStore,
) (elapsed time.Duration, err error) {
defer func(start time.Time) {
elapsed = time.Since(start)
if err == nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
}
}(time.Now())
if avs == nil {
if err = s.isDataAvailable(ctx, blockRoot, block); err != nil {
return
}
return
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block blocks.ROBlock) (time.Duration, error) {
var err error
start := time.Now()
if avs != nil {
err = avs.IsDataAvailable(ctx, s.CurrentSlot(), block)
} else {
err = s.isDataAvailable(ctx, block)
}
var rob blocks.ROBlock
rob, err = blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return
elapsed := time.Since(start)
if err == nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
}
err = avs.IsDataAvailable(ctx, s.CurrentSlot(), rob)
return
return elapsed, err
}
func (s *Service) reportPostBlockProcessing(
@@ -320,13 +301,28 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
if features.Get().EnableLightClient {
// Save a light client bootstrap for the finalized checkpoint
go func() {
err := s.lcStore.SaveLightClientBootstrap(s.ctx, finalized.Root)
st, err := s.cfg.StateGen.StateByRoot(ctx, finalized.Root)
if err != nil {
log.WithError(err).Error("Could not retrieve state for finalized root to save light client bootstrap")
return
}
err = s.lcStore.SaveLightClientBootstrap(s.ctx, finalized.Root, st)
if err != nil {
log.WithError(err).Error("Could not save light client bootstrap by block root")
} else {
log.Debugf("Saved light client bootstrap for finalized root %#x", finalized.Root)
}
}()
// Clean up the light client store caches
go func() {
err := s.lcStore.MigrateToCold(s.ctx, finalized.Root)
if err != nil {
log.WithError(err).Error("Could not migrate light client store to cold storage")
} else {
log.Debugf("Migrated light client store to cold storage for finalized root %#x", finalized.Root)
}
}()
}
}


@@ -9,9 +9,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/voluntaryexits"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -192,7 +192,9 @@ func TestHandleDA(t *testing.T) {
require.NoError(t, err)
s, _ := minimalTestService(t)
elapsed, err := s.handleDA(t.Context(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
block, err := blocks.NewROBlockWithRoot(signedBeaconBlock, [32]byte{})
require.NoError(t, err)
elapsed, err := s.handleDA(t.Context(), nil, block)
require.NoError(t, err)
require.Equal(t, true, elapsed > 0, "Elapsed time should be greater than 0")
}
@@ -594,11 +596,7 @@ func TestProcessLightClientBootstrap(t *testing.T) {
require.NoError(t, s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: cp.Epoch, Root: [32]byte(cp.Root)}))
sss, err := s.cfg.BeaconDB.State(ctx, finalizedBlockRoot)
require.NoError(t, err)
require.NotNil(t, sss)
s.executePostFinalizationTasks(s.ctx, l.FinalizedState)
s.executePostFinalizationTasks(s.ctx, l.AttestedState)
// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)


@@ -14,13 +14,13 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
f "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/slashings"


@@ -562,8 +562,9 @@ func TestNotifyIndex(t *testing.T) {
var root [32]byte
copy(root[:], "exampleRoot")
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
// Test notifying a new index
bn.notifyIndex(root, 1, 1)
bn.notifyIndex(root, 1, ds)
if !bn.seenIndex[root][1] {
t.Errorf("Index was not marked as seen")
}
@@ -580,7 +581,7 @@ func TestNotifyIndex(t *testing.T) {
}
// Test notifying a new index again
bn.notifyIndex(root, 2, 1)
bn.notifyIndex(root, 2, ds)
if !bn.seenIndex[root][2] {
t.Errorf("Index was not marked as seen")
}


@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"slices"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -20,7 +21,7 @@ func (s *Service) setupForkchoice(st state.BeaconState) error {
return errors.Wrap(err, "could not set up forkchoice checkpoints")
}
if err := s.setupForkchoiceTree(st); err != nil {
return errors.Wrap(err, "could not set up forkchoice root")
return errors.Wrap(err, "could not set up forkchoice tree")
}
if err := s.initializeHead(s.ctx, st); err != nil {
return errors.Wrap(err, "could not initialize head from db")
@@ -30,24 +31,24 @@ func (s *Service) setupForkchoice(st state.BeaconState) error {
func (s *Service) startupHeadRoot() [32]byte {
headStr := features.Get().ForceHead
cp := s.FinalizedCheckpt()
fRoot := s.ensureRootNotZeros([32]byte(cp.Root))
jp := s.CurrentJustifiedCheckpt()
jRoot := s.ensureRootNotZeros([32]byte(jp.Root))
if headStr == "" {
return fRoot
return jRoot
}
if headStr == "head" {
root, err := s.cfg.BeaconDB.HeadBlockRoot()
if err != nil {
log.WithError(err).Error("Could not get head block root, starting with finalized block as head")
return fRoot
log.WithError(err).Error("Could not get head block root, starting with justified block as head")
return jRoot
}
log.Infof("Using Head root of %#x", root)
return root
}
root, err := bytesutil.DecodeHexWithLength(headStr, 32)
if err != nil {
log.WithError(err).Error("Could not parse head root, starting with finalized block as head")
return fRoot
log.WithError(err).Error("Could not parse head root, starting with justified block as head")
return jRoot
}
return [32]byte(root)
}
@@ -112,6 +113,7 @@ func (s *Service) buildForkchoiceChain(ctx context.Context, head interfaces.Read
return nil, errors.New("head block is not a descendant of the finalized checkpoint")
}
}
slices.Reverse(chain)
return chain, nil
}


@@ -32,7 +32,7 @@ func Test_startupHeadRoot(t *testing.T) {
})
defer resetCfg()
require.Equal(t, service.startupHeadRoot(), gr)
require.LogsContain(t, hook, "Could not get head block root, starting with finalized block as head")
require.LogsContain(t, hook, "Could not get head block root, starting with justified block as head")
})
st, _ := util.DeterministicGenesisState(t, 64)
@@ -124,5 +124,5 @@ func Test_setupForkchoiceTree_Head(t *testing.T) {
require.NotEqual(t, fRoot, root)
require.Equal(t, root, service.startupHeadRoot())
require.NoError(t, service.setupForkchoiceTree(st))
require.Equal(t, 2, service.cfg.ForkChoiceStore.NodeCount())
require.Equal(t, 3, service.cfg.ForkChoiceStore.NodeCount())
}


@@ -11,21 +11,21 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -89,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
func (mb *mockBroadcaster) BroadcastDataColumnSidecars(_ context.Context, _ []blocks.VerifiedRODataColumn) error {
mb.broadcastCalled = true
return nil
}
@@ -106,14 +106,14 @@ type mockCustodyManager struct {
custodyGroupCount uint64
}
func (dch *mockCustodyManager) EarliestAvailableSlot() (primitives.Slot, error) {
func (dch *mockCustodyManager) EarliestAvailableSlot(context.Context) (primitives.Slot, error) {
dch.mut.RLock()
defer dch.mut.RUnlock()
return dch.earliestAvailableSlot, nil
}
func (dch *mockCustodyManager) CustodyGroupCount() (uint64, error) {
func (dch *mockCustodyManager) CustodyGroupCount(context.Context) (uint64, error) {
dch.mut.RLock()
defer dch.mut.RUnlock()


@@ -723,7 +723,8 @@ func (c *ChainService) ReceiveDataColumn(dc blocks.VerifiedRODataColumn) error {
}
// ReceiveDataColumns implements the same method in chain service
func (*ChainService) ReceiveDataColumns(_ []blocks.VerifiedRODataColumn) error {
func (c *ChainService) ReceiveDataColumns(dcs []blocks.VerifiedRODataColumn) error {
c.DataColumns = append(c.DataColumns, dcs...)
return nil
}


@@ -5,7 +5,6 @@ package cache
import (
"context"
"errors"
"math"
"sync"
"time"
@@ -272,7 +271,7 @@ func (c *CommitteeCache) checkInProgress(ctx context.Context, seed [32]byte) err
// for the in progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
delay = min(delay, maxDelay)
}
return nil
}


@@ -52,7 +52,7 @@ func create(leaves [][32]byte, depth uint64) MerkleTreeNode {
if depth == 0 {
return &LeafNode{hash: leaves[0]}
}
split := math.Min(math.PowerOf2(depth-1), length)
split := min(math.PowerOf2(depth-1), length)
left := create(leaves[0:split], depth-1)
right := create(leaves[split:], depth-1)
return &InnerNode{left: left, right: right}


@@ -2,7 +2,6 @@ package cache
import (
"context"
"math"
"sync"
"time"
@@ -90,7 +89,7 @@ func (c *SkipSlotCache) Get(ctx context.Context, r [32]byte) (state.BeaconState,
// for the in progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
delay = min(delay, maxDelay)
}
span.SetAttributes(trace.BoolAttribute("inProgress", inProgress))


@@ -67,6 +67,30 @@ func (s *SyncCommitteeCache) Clear() {
s.cache = cache.NewFIFO(keyFn)
}
// CurrentPeriodPositions returns the current period positions of the given validator indices with respect
// to the sync committee. If any input validator index has no assignment, an empty list will be returned
// for that validator. If the input root does not exist in cache, `ErrNonExistingSyncCommitteeKey` is returned.
// Manually checking the state for the index position is recommended when `ErrNonExistingSyncCommitteeKey` is returned.
func (s *SyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
s.lock.RLock()
defer s.lock.RUnlock()
pos, err := s.positionsInCommittee(root, indices)
if err != nil {
return nil, err
}
result := make([][]primitives.CommitteeIndex, len(pos))
for i, p := range pos {
if p == nil {
result[i] = []primitives.CommitteeIndex{}
} else {
result[i] = p.currentPeriod
}
}
return result, nil
}
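// A minimal usage sketch (hypothetical caller) showing how several validator indices can be
// resolved in a single cache lookup, falling back to the state when the root is not cached:
//
//	positions, err := syncCommitteeCache.CurrentPeriodPositions(root, []primitives.ValidatorIndex{0, 1, 2})
//	if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
//		// Recompute the positions from the beacon state, as recommended in the doc comment above.
//	}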
// CurrentPeriodIndexPosition returns the current period index position of a validator index with respect to the
// sync committee. If the input validator index has no assignment, an empty list will be returned.
// If the input root does not exist in cache, `ErrNonExistingSyncCommitteeKey` is returned.
@@ -104,11 +128,7 @@ func (s *SyncCommitteeCache) NextPeriodIndexPosition(root [32]byte, valIdx primi
return pos.nextPeriod, nil
}
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of validator index to its index(s) position in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
func (s *SyncCommitteeCache) positionsInCommittee(root [32]byte, indices []primitives.ValidatorIndex) ([]*positionInCommittee, error) {
obj, exists, err := s.cache.GetByKey(key(root))
if err != nil {
return nil, err
@@ -121,13 +141,33 @@ func (s *SyncCommitteeCache) idxPositionInCommittee(
if !ok {
return nil, errNotSyncCommitteeIndexPosition
}
idxInCommittee, ok := item.vIndexToPositionMap[valIdx]
if !ok {
SyncCommitteeCacheMiss.Inc()
result := make([]*positionInCommittee, len(indices))
for i, idx := range indices {
idxInCommittee, ok := item.vIndexToPositionMap[idx]
if ok {
SyncCommitteeCacheHit.Inc()
result[i] = idxInCommittee
} else {
SyncCommitteeCacheMiss.Inc()
result[i] = nil
}
}
return result, nil
}
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of validator index to its index(s) position in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
positions, err := s.positionsInCommittee(root, []primitives.ValidatorIndex{valIdx})
if err != nil {
return nil, err
}
if len(positions) == 0 {
return nil, nil
}
SyncCommitteeCacheHit.Inc()
return idxInCommittee, nil
return positions[0], nil
}
// UpdatePositionsInCommittee updates caching of validators position in sync committee in respect to


@@ -16,6 +16,11 @@ func NewSyncCommittee() *FakeSyncCommitteeCache {
return &FakeSyncCommitteeCache{}
}
// CurrentPeriodPositions -- fake
func (s *FakeSyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
return nil, nil
}
// CurrentEpochIndexPosition -- fake.
func (s *FakeSyncCommitteeCache) CurrentPeriodIndexPosition(root [32]byte, valIdx primitives.ValidatorIndex) ([]primitives.CommitteeIndex, error) {
return nil, nil


@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -209,7 +208,7 @@ func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
}
cfg := params.BeaconConfig()
modulo := math.Max(1, cfg.SyncCommitteeSize/cfg.SyncCommitteeSubnetCount/cfg.TargetAggregatorsPerSyncSubcommittee)
modulo := max(1, cfg.SyncCommitteeSize/cfg.SyncCommitteeSubnetCount/cfg.TargetAggregatorsPerSyncSubcommittee)
hashedSig := hash.Hash(sig)
return bytesutil.FromBytes8(hashedSig[:8])%modulo == 0, nil
}
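// As a rough example, with the mainnet values SYNC_COMMITTEE_SIZE = 512, SYNC_COMMITTEE_SUBNET_COUNT = 4
// and TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE = 16, modulo = max(1, 512/4/16) = 8, so roughly one in
// eight sync committee members is selected as an aggregator for its subcommittee.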


@@ -39,7 +39,6 @@ go_library(
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",


@@ -5,6 +5,7 @@ import (
"sort"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
@@ -39,11 +40,14 @@ func ProcessAttesterSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process attester slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, slashFunc)
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
@@ -56,8 +60,11 @@ func ProcessAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil {
return nil, errors.New("exit info is required to process attester slashing")
}
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
}
@@ -75,10 +82,9 @@ func ProcessAttesterSlashing(
return nil, err
}
if helpers.IsSlashableValidator(val.ActivationEpoch(), val.WithdrawableEpoch(), val.Slashed(), currentEpoch) {
beaconState, err = slashFunc(ctx, beaconState, primitives.ValidatorIndex(validatorIndex))
beaconState, err = validators.SlashValidator(ctx, beaconState, primitives.ValidatorIndex(validatorIndex), exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash validator index %d",
validatorIndex)
return nil, errors.Wrapf(err, "could not slash validator index %d", validatorIndex)
}
slashedAny = true
}


@@ -4,6 +4,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -44,11 +45,10 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
Target: &ethpb.Checkpoint{Epoch: 1}},
})}}
var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Validators: []*ethpb.Validator{{}},
Slot: currentSlot,
})
require.NoError(t, err)
@@ -62,16 +62,15 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
assert.ErrorContains(t, "attestations are not slashable", err)
}
func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T) {
var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Validators: []*ethpb.Validator{{}},
Slot: currentSlot,
})
require.NoError(t, err)
@@ -101,7 +100,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -243,7 +242,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -265,3 +264,83 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
})
}
}
func TestProcessAttesterSlashing_ExitEpochGetsUpdated(t *testing.T) {
st, keys := util.DeterministicGenesisStateElectra(t, 8)
bal, err := helpers.TotalActiveBalance(st)
require.NoError(t, err)
perEpochChurn := helpers.ActivationExitChurnLimit(primitives.Gwei(bal))
vals := st.Validators()
// We set the total effective balance of slashed validators
// higher than the churn limit for a single epoch.
vals[0].EffectiveBalance = uint64(perEpochChurn / 3)
vals[1].EffectiveBalance = uint64(perEpochChurn / 3)
vals[2].EffectiveBalance = uint64(perEpochChurn / 3)
vals[3].EffectiveBalance = uint64(perEpochChurn / 3)
require.NoError(t, st.SetValidators(vals))
sl1att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
sl1att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
slashing1 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl1att1,
Attestation_2: sl1att2,
}
sl2att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{2, 3},
})
sl2att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{2, 3},
})
slashing2 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl2att1,
Attestation_2: sl2att2,
}
domain, err := signing.Domain(st.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(sl1att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := keys[0].Sign(signingRoot[:])
sig1 := keys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl1att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[0].Sign(signingRoot[:])
sig1 = keys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att2.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att2.Signature = aggregateSig.Marshal()
exitInfo := v.ExitInformation(st)
assert.Equal(t, primitives.Epoch(0), exitInfo.HighestExitEpoch)
_, err = blocks.ProcessAttesterSlashings(t.Context(), st, []ethpb.AttSlashing{slashing1, slashing2}, exitInfo)
require.NoError(t, err)
assert.Equal(t, primitives.Epoch(6), exitInfo.HighestExitEpoch)
}
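
The test above drives the new ExitInfo flow: v.ExitInformation(st) is computed once, passed into ProcessAttesterSlashings, and its HighestExitEpoch advances once the slashed balance exceeds the per-epoch churn. Below is a minimal, self-contained sketch of that churn-spillover idea; the types and numbers are hypothetical and only stand in for the real ExitInfo bookkeeping in beacon-chain/core/validators.

```go
// Minimal sketch (hypothetical types, not the Prysm API): a shared exit
// tracker in the spirit of ExitInfo. Each exit consumes per-epoch churn, and
// once an epoch's churn is exhausted the next exit lands one epoch later.
package main

import "fmt"

type exitTracker struct {
	highestExitEpoch uint64 // epoch the next exit will be assigned to
	consumedGwei     uint64 // balance already scheduled to exit in that epoch
	perEpochChurn    uint64 // balance allowed to exit per epoch
}

// scheduleExit assigns an exit epoch for the given effective balance,
// spilling into later epochs when the churn limit is exceeded.
func (t *exitTracker) scheduleExit(effectiveBalance uint64) uint64 {
	t.consumedGwei += effectiveBalance
	for t.consumedGwei > t.perEpochChurn {
		t.consumedGwei -= t.perEpochChurn
		t.highestExitEpoch++
	}
	return t.highestExitEpoch
}

func main() {
	// With a churn limit of 3 units per epoch, four exits of 1 unit each put
	// three exits in the first available epoch and spill the fourth into the next.
	tr := &exitTracker{highestExitEpoch: 5, perEpochChurn: 3}
	for i := 0; i < 4; i++ {
		fmt.Println("exit epoch:", tr.scheduleExit(1)) // 5, 5, 5, 6
	}
}
```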

View File

@@ -191,7 +191,7 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer.Fuzz(p)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.SlashValidator)
r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
}
@@ -224,7 +224,7 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer.Fuzz(a)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.SlashValidator)
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
@@ -334,7 +334,7 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
}
@@ -351,7 +351,7 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
}

View File

@@ -94,7 +94,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/contracts/deposit"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
@@ -34,7 +33,7 @@ func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposi
if err != nil {
return nil, err
}
validator.EffectiveBalance = math.Min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
validator.EffectiveBalance = min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
if validator.EffectiveBalance ==
params.BeaconConfig().MaxEffectiveBalance {
validator.ActivationEligibilityEpoch = 0
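
This hunk swaps the project's math.Min helper for Go's built-in generic min (available since Go 1.21); the same change appears for max elsewhere in the diff. A tiny sketch of the clamp as it reads after the change, with illustrative mainnet-style balance values:

```go
// Go 1.21 added the built-in generic min and max, which is what this diff
// substitutes for the repo's math.Min/math.Max helpers on uint64 values.
// The balances below are made up for illustration.
package main

import "fmt"

func main() {
	balance := uint64(33_500_000_000)      // 33.5 ETH in Gwei
	increment := uint64(1_000_000_000)     // effective balance increment
	maxEffective := uint64(32_000_000_000) // max effective balance

	// Same shape as the effective-balance clamp in the hunk above.
	effective := min(balance-balance%increment, maxEffective)
	fmt.Println(effective) // 32000000000
}
```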

View File

@@ -9,7 +9,6 @@ import (
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -50,13 +49,15 @@ func ProcessVoluntaryExits(
ctx context.Context,
beaconState state.BeaconState,
exits []*ethpb.SignedVoluntaryExit,
exitInfo *v.ExitInfo,
) (state.BeaconState, error) {
// Avoid calculating the epoch churn if no exits exist.
if len(exits) == 0 {
return beaconState, nil
}
maxExitEpoch, churn := v.MaxExitEpochAndChurn(beaconState)
var exitEpoch primitives.Epoch
if exitInfo == nil {
return nil, errors.New("exit info required to process voluntary exits")
}
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
@@ -68,15 +69,8 @@ func ProcessVoluntaryExits(
if err := VerifyExitAndSignature(val, beaconState, exit); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, exitEpoch, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, maxExitEpoch, churn)
if err == nil {
if exitEpoch > maxExitEpoch {
maxExitEpoch = exitEpoch
churn = 1
} else if exitEpoch == maxExitEpoch {
churn++
}
} else if !errors.Is(err, v.ErrValidatorAlreadyExited) {
beaconState, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, exitInfo)
if err != nil && !errors.Is(err, v.ErrValidatorAlreadyExited) {
return nil, err
}
}
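
The loop no longer receives an exit epoch back and no longer updates maxExitEpoch and churn by hand; InitiateValidatorExit mutates the single *ExitInfo it is handed, and the caller only has to tolerate ErrValidatorAlreadyExited. A rough sketch of that pointer-threading shape, using hypothetical types rather than the Prysm API:

```go
// Sketch of the refactor's shape: the callee updates shared bookkeeping
// through a pointer, so the caller's loop no longer tracks maxExitEpoch and
// churn itself. Hypothetical types; the real ExitInfo lives in
// beacon-chain/core/validators.
package main

import (
	"errors"
	"fmt"
)

var errAlreadyExited = errors.New("validator already exited")

type exitInfo struct {
	highestExitEpoch uint64
	churn            uint64
}

// initiateExit mutates info in place; the caller never recomputes churn.
func initiateExit(validator int, alreadyExited map[int]bool, info *exitInfo) error {
	if alreadyExited[validator] {
		return errAlreadyExited
	}
	info.churn++
	if info.churn > 2 { // pretend the per-epoch churn limit is 2 exits
		info.highestExitEpoch++
		info.churn = 1
	}
	alreadyExited[validator] = true
	return nil
}

func main() {
	info := &exitInfo{highestExitEpoch: 5}
	exited := map[int]bool{}
	for _, v := range []int{0, 1, 2, 1} { // validator 1 appears twice
		if err := initiateExit(v, exited, info); err != nil && !errors.Is(err, errAlreadyExited) {
			panic(err)
		}
	}
	fmt.Println(info.highestExitEpoch, info.churn) // 6 1
}
```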

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -46,7 +47,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
}
want := "validator has not been active long enough to exit"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
assert.ErrorContains(t, want, err)
}
@@ -76,7 +77,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
}
want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
assert.ErrorContains(t, want, err)
}
@@ -124,7 +125,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
},
}
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
require.NoError(t, err, "Could not process exits")
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {

View File

@@ -184,45 +184,54 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
})
case *ethpb.BeaconStateElectra:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockElectra{
Block: &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
},
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateFulu:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockFulu{
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}
}
func electraGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockElectra {
return &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
}
}

View File

@@ -7,9 +7,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -19,11 +19,6 @@ import (
// ErrCouldNotVerifyBlockHeader is returned when a block header's signature cannot be verified.
var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")
type slashValidatorFunc func(
ctx context.Context,
st state.BeaconState,
vid primitives.ValidatorIndex) (state.BeaconState, error)
// ProcessProposerSlashings is one of the operations performed
// on each processed beacon block to slash proposers based on
// slashing conditions if any slashable events occurred.
@@ -54,11 +49,14 @@ func ProcessProposerSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []*ethpb.ProposerSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process proposer slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, slashFunc)
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
@@ -71,7 +69,7 @@ func ProcessProposerSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
if slashing == nil {
@@ -80,7 +78,10 @@ func ProcessProposerSlashing(
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
beaconState, err = slashFunc(ctx, beaconState, slashing.Header_1.Header.ProposerIndex)
if exitInfo == nil {
return nil, errors.New("exit info is required to process proposer slashing")
}
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
}

View File

@@ -50,7 +50,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
}
want := "mismatched header slots"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -83,7 +83,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
want := "expected slashing headers to differ"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -133,7 +133,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
"validator with key %#x is not slashable",
bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -172,7 +172,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -220,7 +220,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -268,7 +268,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -316,7 +316,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()

View File

@@ -84,8 +84,8 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
// Handle validator ejections.
for _, idx := range eligibleForEjection {
var err error
// exitQueueEpoch and churn arguments are not used in electra.
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, 0 /*exitQueueEpoch*/, 0 /*churn*/)
// exit info is not used in electra
st, err = validators.InitiateValidatorExit(ctx, st, idx, &validators.ExitInfo{})
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return fmt.Errorf("failed to initiate validator exit at index %d: %w", idx, err)
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -46,18 +47,27 @@ var (
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)
func ProcessOperations(
ctx context.Context,
st state.BeaconState,
block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
func ProcessOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
// 6110 validations are in VerifyOperationLengths
bb := block.Body()
// Electra extends the altair operations.
st, err := ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), v.SlashValidator)
var exitInfo *v.ExitInfo
hasSlashings := len(bb.ProposerSlashings()) > 0 || len(bb.AttesterSlashings()) > 0
hasExits := len(bb.VoluntaryExits()) > 0
if hasSlashings || hasExits {
// ExitInformation is expensive to compute, so only do it when we need it.
exitInfo = v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), v.SlashValidator)
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -68,7 +78,7 @@ func ProcessOperations(
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
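
ProcessOperations now builds ExitInfo lazily (only when the block actually carries slashings or voluntary exits) and reuses the total active balance it just computed to warm the balance cache via UpdateTotalActiveBalanceCache. A small stand-alone sketch of that compute-only-when-needed pattern, with hypothetical types:

```go
// Stand-alone sketch of the pattern above: only pay for the expensive
// validator scan when the block carries operations that need it, and reuse
// its by-product (the total active balance) instead of recomputing it.
// All types and values here are hypothetical.
package main

import "fmt"

type block struct {
	proposerSlashings, attesterSlashings, voluntaryExits int
}

type exitInfo struct{ totalActiveBalance uint64 }

func computeExitInfo() *exitInfo {
	fmt.Println("expensive scan over all validators")
	return &exitInfo{totalActiveBalance: 95_000_000_000}
}

func processOperations(b block) {
	var info *exitInfo
	if b.proposerSlashings > 0 || b.attesterSlashings > 0 || b.voluntaryExits > 0 {
		info = computeExitInfo()
		// Analogous to warming the cache with the balance we already have.
		fmt.Println("warm balance cache with", info.totalActiveBalance)
	}
	// Downstream stages tolerate a nil info when there is nothing to process.
	fmt.Println("process slashings and exits with info =", info)
}

func main() {
	processOperations(block{})                  // no scan
	processOperations(block{voluntaryExits: 1}) // scan happens exactly once
}
```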

View File

@@ -13,6 +13,7 @@ import (
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
@@ -91,6 +92,18 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
ctx, span := trace.StartSpan(ctx, "electra.ProcessWithdrawalRequests")
defer span.End()
currentEpoch := slots.ToEpoch(st.Slot())
if len(wrs) == 0 {
return st, nil
}
// It is correct to compute exitInfo once for all withdrawals in the block, as the ExitInfo pointer is
// updated within InitiateValidatorExit which is the only function that uses it.
var exitInfo *validators.ExitInfo
if st.Version() < version.Electra {
exitInfo = validators.ExitInformation(st)
} else {
// After Electra, the function InitiateValidatorExit ignores the exitInfo passed to it and recomputes it anyway.
exitInfo = &validators.ExitInfo{}
}
for _, wr := range wrs {
if wr == nil {
return nil, errors.New("nil execution layer withdrawal request")
@@ -147,9 +160,9 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
if isFullExitRequest {
// Only exit validator if it has no pending withdrawals in the queue
if pendingBalanceToWithdraw == 0 {
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
var err error
st, _, err = validators.InitiateValidatorExit(ctx, st, vIdx, maxExitEpoch, churn)
// exitInfo is updated within InitiateValidatorExit
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, exitInfo)
if err != nil {
return nil, err
}

View File

@@ -96,13 +96,17 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
}
// Process validators eligible for ejection.
for _, idx := range eligibleForEjection {
// Here is fine to do a quadratic loop since this should
// barely happen
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
if len(eligibleForEjection) > 0 {
// It is safe to compute exitInfo once for all ejections in the epoch, as the ExitInfo pointer is
// updated within InitiateValidatorExit which is the only function that uses it.
exitInfo := validators.ExitInformation(st)
for _, idx := range eligibleForEjection {
// A quadratic loop is fine here since ejections should rarely happen.
st, err = validators.InitiateValidatorExit(ctx, st, idx, exitInfo)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}
}
}
@@ -229,7 +233,7 @@ func ProcessSlashings(st state.BeaconState) error {
// a callback is used here to apply the following actions to all validators
// below equally.
increment := params.BeaconConfig().EffectiveBalanceIncrement
minSlashing := math.Min(totalSlashing*slashingMultiplier, totalBalance)
minSlashing := min(totalSlashing*slashingMultiplier, totalBalance)
// Modified in Electra:EIP7251
var penaltyPerEffectiveBalanceIncrement uint64

View File

@@ -5,7 +5,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/math"
)
// ProcessSlashingsPrecompute processes the slashed validators during epoch processing.
@@ -21,7 +20,7 @@ func ProcessSlashingsPrecompute(s state.BeaconState, pBal *Balance) error {
totalSlashing += slashing
}
minSlashing := math.Min(totalSlashing*params.BeaconConfig().ProportionalSlashingMultiplier, pBal.ActiveCurrentEpoch)
minSlashing := min(totalSlashing*params.BeaconConfig().ProportionalSlashingMultiplier, pBal.ActiveCurrentEpoch)
epochToWithdraw := currentEpoch + exitLength/2
var hasSlashing bool

View File

@@ -16,10 +16,10 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err := electra.ProcessEpoch(ctx, state); err != nil {
return errors.Wrap(err, "could not process epoch in fulu transition")
}
return processProposerLookahead(ctx, state)
return ProcessProposerLookahead(ctx, state)
}
func processProposerLookahead(ctx context.Context, state state.BeaconState) error {
func ProcessProposerLookahead(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "fulu.processProposerLookahead")
defer span.End()

View File

@@ -10,6 +10,7 @@ go_library(
"legacy.go",
"metrics.go",
"randao.go",
"ranges.go",
"rewards_penalties.go",
"shuffle.go",
"sync_committee.go",
@@ -56,6 +57,7 @@ go_test(
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
"ranges_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",
"sync_committee_test.go",

View File

@@ -317,23 +317,15 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments := make(map[primitives.ValidatorIndex][]primitives.Slot)
originalStateSlot := state.Slot()
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
// Skip proposer assignment for genesis slot.
if slot == 0 {
continue
}
// Set the state's current slot.
if err := state.SetSlot(slot); err != nil {
return nil, err
}
// Determine the proposer index for the current slot.
i, err := BeaconProposerIndex(ctx, state)
i, err := BeaconProposerIndexAtSlot(ctx, state, slot)
if err != nil {
return nil, errors.Wrapf(err, "could not check proposer at slot %d", state.Slot())
return nil, errors.Wrapf(err, "could not check proposer at slot %d", slot)
}
// Append the slot to the proposer's assignments.
@@ -342,12 +334,6 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments[i] = append(proposerAssignments[i], slot)
}
// Reset state back to its original slot.
if err := state.SetSlot(originalStateSlot); err != nil {
return nil, err
}
return proposerAssignments, nil
}
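
ProposerAssignments previously set the state's slot on every iteration and restored it at the end; it now asks BeaconProposerIndexAtSlot for each slot directly. A sketch of the resulting read-only loop, where proposerAt is a hypothetical stand-in for that lookup:

```go
// Sketch of the simplified loop: look the proposer up per slot instead of
// mutating the state's slot and restoring it afterwards. proposerAt is a
// hypothetical stand-in for BeaconProposerIndexAtSlot.
package main

import "fmt"

func proposerAt(slot uint64) uint64 { return slot % 8 } // toy lookup

func proposerAssignments(startSlot, slotsPerEpoch uint64) map[uint64][]uint64 {
	assignments := make(map[uint64][]uint64)
	for slot := startSlot; slot < startSlot+slotsPerEpoch; slot++ {
		idx := proposerAt(slot) // read-only lookup; no SetSlot / reset round-trip
		assignments[idx] = append(assignments[idx], slot)
	}
	return assignments
}

func main() {
	fmt.Println(proposerAssignments(32, 32))
}
```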
@@ -413,7 +399,6 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
ctx, span := trace.StartSpan(ctx, "helpers.CommitteeAssignments")
defer span.End()
// Verify if the epoch is valid for assignment based on the provided state.
if err := VerifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
@@ -421,12 +406,15 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
if err != nil {
return nil, err
}
vals := make(map[primitives.ValidatorIndex]struct{})
// Deduplicate and make set for O(1) membership checks.
vals := make(map[primitives.ValidatorIndex]struct{}, len(validators))
for _, v := range validators {
vals[v] = struct{}{}
}
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment)
// Compute committee assignments for each slot in the epoch.
remaining := len(vals)
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment, len(vals))
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
@@ -434,7 +422,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
}
for j, committee := range committees {
for _, vIndex := range committee {
if _, ok := vals[vIndex]; !ok { // Skip if the validator is not in the provided validators slice.
if _, ok := vals[vIndex]; !ok {
continue
}
if _, ok := assignments[vIndex]; !ok {
@@ -443,6 +431,11 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
assignments[vIndex].Committee = committee
assignments[vIndex].AttesterSlot = slot
assignments[vIndex].CommitteeIndex = primitives.CommitteeIndex(j)
delete(vals, vIndex)
remaining--
if remaining == 0 {
return assignments, nil // early exit
}
}
}
}
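
CommitteeAssignments deduplicates the requested validators into a set, removes each one as its assignment is found, and returns as soon as nothing is left to resolve. A minimal sketch of that early-exit pattern, with plain ints standing in for validator indices and slots:

```go
// Minimal sketch of the early-exit pattern: track how many requested
// validators are still unresolved and stop scanning committees once the
// count reaches zero.
package main

import "fmt"

func firstAssignments(wanted []int, committees [][]int) map[int]int {
	pending := make(map[int]struct{}, len(wanted)) // dedupe, O(1) membership
	for _, v := range wanted {
		pending[v] = struct{}{}
	}
	remaining := len(pending)

	assignments := make(map[int]int, len(pending))
	for slot, committee := range committees {
		for _, v := range committee {
			if _, ok := pending[v]; !ok {
				continue
			}
			assignments[v] = slot
			delete(pending, v)
			remaining--
			if remaining == 0 {
				return assignments // everything requested was found; stop early
			}
		}
	}
	return assignments
}

func main() {
	committees := [][]int{{4, 9}, {1, 7}, {3, 6}}
	fmt.Println(firstAssignments([]int{7, 9}, committees)) // map[7:1 9:0]
}
```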

View File

@@ -0,0 +1,62 @@
package helpers
import (
"fmt"
"slices"
)
// SortedSliceFromMap takes a map with uint64 keys and returns a sorted slice of the keys.
func SortedSliceFromMap(toSort map[uint64]bool) []uint64 {
slice := make([]uint64, 0, len(toSort))
for key := range toSort {
slice = append(slice, key)
}
slices.Sort(slice)
return slice
}
// PrettySlice returns a pretty string representation of a sorted slice of uint64.
// `sortedSlice` must be sorted in ascending order.
// Example: [1,2,3,5,6,7,8,10] -> "1-3,5-8,10"
func PrettySlice(sortedSlice []uint64) string {
if len(sortedSlice) == 0 {
return ""
}
var result string
start := sortedSlice[0]
end := sortedSlice[0]
for i := 1; i < len(sortedSlice); i++ {
if sortedSlice[i] == end+1 {
end = sortedSlice[i]
continue
}
if start == end {
result += fmt.Sprintf("%d,", start)
start = sortedSlice[i]
end = sortedSlice[i]
continue
}
result += fmt.Sprintf("%d-%d,", start, end)
start = sortedSlice[i]
end = sortedSlice[i]
}
if start == end {
result += fmt.Sprintf("%d", start)
return result
}
result += fmt.Sprintf("%d-%d", start, end)
return result
}
// SortedPrettySliceFromMap combines SortedSliceFromMap and PrettySlice to return a pretty string representation of the keys in a map.
func SortedPrettySliceFromMap(toSort map[uint64]bool) string {
sorted := SortedSliceFromMap(toSort)
return PrettySlice(sorted)
}

View File

@@ -0,0 +1,64 @@
package helpers_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestSortedSliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 3: true, 8: true, 1: true}
expected := []uint64{1, 3, 5, 8}
actual := helpers.SortedSliceFromMap(input)
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.Equal(t, expected[i], actual[i])
}
}
func TestPrettySlice(t *testing.T) {
tests := []struct {
name string
input []uint64
expected string
}{
{
name: "empty slice",
input: []uint64{},
expected: "",
},
{
name: "only distinct elements",
input: []uint64{1, 3, 5, 7, 9},
expected: "1,3,5,7,9",
},
{
name: "single range",
input: []uint64{1, 2, 3, 4, 5},
expected: "1-5",
},
{
name: "multiple ranges and distinct elements",
input: []uint64{1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 14},
expected: "1-3,5-8,10,12-14",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
actual := helpers.PrettySlice(tt.input)
require.Equal(t, tt.expected, actual)
})
}
}
func TestSortedPrettySliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 7: true, 8: true, 10: true}
expected := "5,7-8,10"
actual := helpers.SortedPrettySliceFromMap(input)
require.Equal(t, expected, actual)
}

View File

@@ -79,7 +79,7 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
}
// Spec defines `EffectiveBalanceIncrement` as min to avoid divisions by zero.
total = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, total)
total = max(params.BeaconConfig().EffectiveBalanceIncrement, total)
if err := balanceCache.AddTotalEffectiveBalance(s, total); err != nil {
return 0, err
}
@@ -87,6 +87,11 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
return total, nil
}
// UpdateTotalActiveBalanceCache updates the cache with the given total active balance.
func UpdateTotalActiveBalanceCache(s state.BeaconState, total uint64) error {
return balanceCache.AddTotalEffectiveBalance(s, total)
}
// IncreaseBalance increases validator with the given 'index' balance by 'delta' in Gwei.
//
// Spec pseudocode definition:

View File

@@ -297,3 +297,30 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
require.ErrorContains(t, "addition overflows", helpers.IncreaseBalance(state, test.i, test.nb))
}
}
func TestUpdateTotalActiveBalanceCache(t *testing.T) {
helpers.ClearCache()
// Create a test state with some validators
validators := []*ethpb.Validator{
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 31 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
Slot: 0,
})
require.NoError(t, err)
// Test updating cache with a specific total
testTotal := uint64(95 * 1e9) // 32 + 32 + 31 = 95
err = helpers.UpdateTotalActiveBalanceCache(state, testTotal)
require.NoError(t, err)
// Verify the cache was updated by retrieving the total active balance
// which should now return the cached value
cachedTotal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
assert.Equal(t, testTotal, cachedTotal, "Cache should return the updated total")
}

View File

@@ -21,6 +21,39 @@ var (
syncCommitteeCache = cache.NewSyncCommittee()
)
// CurrentPeriodPositions returns committee indices of the current period sync committee for input validators.
func CurrentPeriodPositions(st state.BeaconState, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return nil, err
}
pos, err := syncCommitteeCache.CurrentPeriodPositions(root, indices)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
committee, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
// Fill in the cache on miss.
go func() {
if err := syncCommitteeCache.UpdatePositionsInCommittee(root, st); err != nil {
log.WithError(err).Error("Could not fill sync committee cache on miss")
}
}()
pos = make([][]primitives.CommitteeIndex, len(indices))
for i, idx := range indices {
pubkey := st.PubkeyAtIndex(idx)
pos[i] = findSubCommitteeIndices(pubkey[:], committee.Pubkeys)
}
return pos, nil
}
if err != nil {
return nil, err
}
return pos, nil
}
// IsCurrentPeriodSyncCommittee returns true if the input validator index belongs in the current period sync committee
// along with the sync committee root.
// 1. Checks if the public key exists in the sync committee cache

View File

@@ -17,6 +17,38 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestCurrentPeriodPositions(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
}
syncCommittee.Pubkeys[i] = bytesutil.PadTo(k, 48)
}
state, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Validators: validators,
})
require.NoError(t, err)
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee([32]byte{}, state))
positions, err := helpers.CurrentPeriodPositions(state, []primitives.ValidatorIndex{0, 1})
require.NoError(t, err)
require.Equal(t, 2, len(positions))
require.Equal(t, 1, len(positions[0]))
assert.Equal(t, primitives.CommitteeIndex(0), positions[0][0])
require.Equal(t, 1, len(positions[1]))
assert.Equal(t, primitives.CommitteeIndex(1), positions[1][0])
}
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
helpers.ClearCache()
@@ -78,6 +110,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +297,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
}
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
params.SetupTestConfigCleanup(t)
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)

View File

@@ -309,23 +309,29 @@ func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primiti
if err != nil {
return 0, errors.Wrap(err, "could not get proposer lookahead")
}
spe := params.BeaconConfig().SlotsPerEpoch
if e == stateEpoch {
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch], nil
return lookAhead[slot%spe], nil
}
// The caller is requesting the proposer for the next epoch
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch+params.BeaconConfig().SlotsPerEpoch], nil
return lookAhead[spe+slot%spe], nil
}
// BeaconProposerIndexAtSlot returns proposer index at the given slot from the
// point of view of the given state as head state
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
if state.Version() >= version.Fulu {
return beaconProposerIndexAtSlotFulu(state, slot)
}
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(state.Slot())
// Even if the state is post Fulu, we may request a past proposer index.
if state.Version() >= version.Fulu && e >= params.BeaconConfig().FuluForkEpoch {
// We can use the cached lookahead only for the current and the next epoch.
if e == stateEpoch || e == stateEpoch+1 {
return beaconProposerIndexAtSlotFulu(state, slot)
}
}
// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
// For simplicity, the node will skip caching of genesis epoch.
if e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
// For simplicity, the node will skip caching of genesis epoch. If the passed state has not yet reached this slot then we do not check the cache.
if e <= stateEpoch && e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
s, err := slots.EpochEnd(e - 1)
if err != nil {
return 0, err
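
Post-Fulu the state exposes a flat lookahead of 2*SLOTS_PER_EPOCH proposer indices: the current epoch occupies the first half and the next epoch the second half, and BeaconProposerIndexAtSlot only takes this path when the requested epoch is the state's current or next epoch. A tiny indexing sketch with made-up values:

```go
// Tiny sketch of the lookahead indexing: a flat slice of 2*SLOTS_PER_EPOCH
// proposer indices, current epoch in the first half, next epoch in the second.
package main

import "fmt"

const slotsPerEpoch = 32

func proposerFromLookahead(lookahead []uint64, slot, stateSlot uint64) (uint64, bool) {
	epoch, stateEpoch := slot/slotsPerEpoch, stateSlot/slotsPerEpoch
	switch epoch {
	case stateEpoch: // current epoch: first half of the lookahead
		return lookahead[slot%slotsPerEpoch], true
	case stateEpoch + 1: // next epoch: second half
		return lookahead[slotsPerEpoch+slot%slotsPerEpoch], true
	default: // any other epoch falls back to the non-lookahead computation
		return 0, false
	}
}

func main() {
	lookahead := make([]uint64, 2*slotsPerEpoch)
	lookahead[0] = 15             // proposer for the first slot of the current epoch
	lookahead[slotsPerEpoch] = 42 // proposer for the first slot of the next epoch

	fmt.Println(proposerFromLookahead(lookahead, 128, 130)) // 15 true
	fmt.Println(proposerFromLookahead(lookahead, 160, 130)) // 42 true
	fmt.Println(proposerFromLookahead(lookahead, 95, 130))  // 0 false (past epoch)
}
```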

View File

@@ -1161,6 +1161,10 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
}
func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
lookahead := make([]uint64, 64)
lookahead[0] = 15
lookahead[1] = 16
@@ -1180,8 +1184,4 @@ func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
idx, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 130)
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(42), idx)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 95)
require.ErrorContains(t, "slot 95 is not in the current epoch 3 or the next epoch", err)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 160)
require.ErrorContains(t, "slot 160 is not in the current epoch 3 or the next epoch", err)
}

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
v1alpha1 "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
@@ -95,7 +94,7 @@ func ComputeWeakSubjectivityPeriod(ctx context.Context, st state.ReadOnlyBeaconS
if T*(200+3*D) < t*(200+12*D) {
epochsForValidatorSetChurn := N * (t*(200+12*D) - T*(200+3*D)) / (600 * delta * (2*t + T))
epochsForBalanceTopUps := N * (200 + 3*D) / (600 * Delta)
wsp += math.Max(epochsForValidatorSetChurn, epochsForBalanceTopUps)
wsp += max(epochsForValidatorSetChurn, epochsForBalanceTopUps)
} else {
wsp += 3 * N * D * t / (200 * Delta * (T - t))
}

View File

@@ -1,202 +0,0 @@
package light_client
import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
lastFinalityUpdate interfaces.LightClientFinalityUpdate // tracks the best finality update seen so far
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate // tracks the best optimistic update seen so far
p2p p2p.Accessor
stateFeed event.SubscriberSender
}
func NewLightClientStore(db iface.HeadAccessDatabase, p p2p.Accessor, e event.SubscriberSender) *Store {
return &Store{
beaconDB: db,
p2p: p,
stateFeed: e,
}
}
func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot [32]byte) (interfaces.LightClientBootstrap, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client bootstrap from the database
bootstrap, err := s.beaconDB.LightClientBootstrap(ctx, blockRoot[:])
if err != nil {
return nil, err
}
if bootstrap == nil { // not found
return nil, ErrLightClientBootstrapNotFound
}
return bootstrap, nil
}
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte) error {
s.mu.Lock()
defer s.mu.Unlock()
blk, err := s.beaconDB.Block(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch block for root %x", blockRoot)
}
if blk == nil {
return errors.Errorf("failed to fetch block for root %x", blockRoot)
}
state, err := s.beaconDB.State(ctx, blockRoot)
if err != nil {
return errors.Wrapf(err, "failed to fetch state for block root %x", blockRoot)
}
if state == nil {
return errors.Errorf("failed to fetch state for block root %x", blockRoot)
}
bootstrap, err := NewLightClientBootstrapFromBeaconState(ctx, state.Slot(), state, blk)
if err != nil {
return errors.Wrapf(err, "failed to create light client bootstrap for block root %x", blockRoot)
}
// Save the light client bootstrap to the database
if err := s.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap); err != nil {
return err
}
return nil
}
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) ([]interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client updatesMap from the database
updatesMap, err := s.beaconDB.LightClientUpdates(ctx, startPeriod, endPeriod)
if err != nil {
return nil, err
}
var updates []interfaces.LightClientUpdate
for i := startPeriod; i <= endPeriod; i++ {
update, ok := updatesMap[i]
if !ok {
// Only return the first contiguous range of updates
break
}
updates = append(updates, update)
}
return updates, nil
}
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client update for the given period from the database
update, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return nil, err
}
return update, nil
}
func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error {
s.mu.Lock()
defer s.mu.Unlock()
oldUpdate, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get current light client update")
}
if oldUpdate == nil {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
isNewUpdateBetter, err := IsBetterUpdate(update, oldUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
}
if isNewUpdateBetter {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}
s.lastFinalityUpdate = update
log.Debug("Saved new light client finality update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
}
func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
s.mu.RLock()
defer s.mu.RUnlock()
return s.lastFinalityUpdate
}
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
}
s.lastOptimisticUpdate = update
log.Debug("Saved new light client optimistic update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
}
func (s *Store) LastOptimisticUpdate() interfaces.LightClientOptimisticUpdate {
s.mu.RLock()
defer s.mu.RUnlock()
return s.lastOptimisticUpdate
}

View File

@@ -1,165 +0,0 @@
package light_client_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/async/event"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestLightClientStore(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
cfg.BellatrixForkEpoch = 2
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(cfg)
// Initialize the light client store
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), &p2pTesting.FakeP2P{}, new(event.Feed))
// Create test light client updates for Capella and Deneb
lCapella := util.NewTestLightClient(t, version.Capella)
opUpdateCapella, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateCapella, "OptimisticUpdateCapella is nil")
finUpdateCapella, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lCapella.Ctx, lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock, lCapella.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateCapella, "FinalityUpdateCapella is nil")
lDeneb := util.NewTestLightClient(t, version.Deneb)
opUpdateDeneb, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, opUpdateDeneb, "OptimisticUpdateDeneb is nil")
finUpdateDeneb, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock, lDeneb.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, finUpdateDeneb, "FinalityUpdateDeneb is nil")
// Initially the store should have nil values for both updates
require.IsNil(t, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should be nil")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get finality with Capella update. Optimistic update should be nil
lcStore.SetLastFinalityUpdate(finUpdateCapella, false)
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get optimistic with Capella update. Finality update should be Capella
lcStore.SetLastOptimisticUpdate(opUpdateCapella, false)
require.Equal(t, opUpdateCapella, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
// Set and get finality and optimistic with Deneb update
lcStore.SetLastFinalityUpdate(finUpdateDeneb, false)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb, false)
require.Equal(t, finUpdateDeneb, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.Equal(t, opUpdateDeneb, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
}
func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), p2p, new(event.Feed))
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update0, lcStore.LastFinalityUpdate()), "update0 should be better than nil")
// update0 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update0, lcStore.LastFinalityUpdate()), "update0 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
l1 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update1, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update1, update0), "update1 should be better than update0")
// update1 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update1, lcStore.LastFinalityUpdate()), "update1 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 2 with same finality slot, increased attested slot, and supermajority - should save and broadcast
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update2, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update2, update1), "update2 should be better than update1")
// update2 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update2, lcStore.LastFinalityUpdate()), "update2 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
l3 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(3), util.WithSupermajority())
update3, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l3.Ctx, l3.State, l3.Block, l3.AttestedState, l3.AttestedBlock, l3.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update3, update2), "update3 should be better than update2")
// update3 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update3, lcStore.LastFinalityUpdate()), "update3 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
l4 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(1), util.WithSupermajority())
update4, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l4.Ctx, l4.State, l4.Block, l4.AttestedState, l4.AttestedBlock, l4.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update4, update3), "update4 should be better than update3")
// update4 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update4, lcStore.LastFinalityUpdate()), "update4 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
l5 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update5, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l5.Ctx, l5.State, l5.Block, l5.AttestedState, l5.AttestedBlock, l5.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update5, update4), "update5 should be better than update4")
// update5 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update5, lcStore.LastFinalityUpdate()), "update5 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
l6 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(3))
update6, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l6.Ctx, l6.State, l6.Block, l6.AttestedState, l6.AttestedBlock, l6.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update6, update5), "update6 should be better than update5")
// update6 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update6, lcStore.LastFinalityUpdate()), "update6 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}
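The sequence above reduces to a single rule: a finality update is re-broadcast only when it advances the finalized slot and carries a sync committee supermajority; otherwise it only replaces the stored best update. A minimal sketch of that predicate, with hypothetical names standing in for the real accessors and assuming this reading of the test expectations:

package example

import "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"

// finalityUpdateView is a hypothetical, simplified view of a light client finality update,
// used only to illustrate the broadcast rule exercised by the test above.
type finalityUpdateView struct {
	finalizedSlot    primitives.Slot
	hasSupermajority bool
}

// shouldBroadcast sketches the behavior the test expects from IsFinalityUpdateValidForBroadcast:
// gossip only when the finalized slot advances past the last broadcast update and the new
// update has a supermajority; otherwise the update is stored but not broadcast.
func shouldBroadcast(next, lastBroadcast finalityUpdateView) bool {
	return next.hasSupermajority && next.finalizedSlot > lastBroadcast.finalizedSlot
}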

View File

@@ -19,11 +19,11 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -4,15 +4,10 @@ import (
"encoding/binary"
"math"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/holiman/uint256"
"github.com/pkg/errors"
@@ -20,12 +15,9 @@ import (
var (
// Custom errors
ErrCustodyGroupTooLarge = errors.New("custody group too large")
ErrCustodyGroupCountTooLarge = errors.New("custody group count too large")
ErrSizeMismatch = errors.New("mismatch in the number of blob KZG commitments and cellsAndProofs")
ErrNotEnoughDataColumnSidecars = errors.New("not enough columns")
ErrDataColumnSidecarsNotSortedByIndex = errors.New("data column sidecars are not sorted by index")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
ErrCustodyGroupTooLarge = errors.New("custody group too large")
ErrCustodyGroupCountTooLarge = errors.New("custody group count too large")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
// maxUint256 is the maximum value of an uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
@@ -117,44 +109,6 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
return columns, nil
}
// DataColumnSidecars computes the data column sidecars from the signed block, cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alterates `cellsAndProofs` afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsAndProofs []kzg.CellsAndProofs) ([]*ethpb.DataColumnSidecar, error) {
if signedBlock == nil || signedBlock.IsNil() || len(cellsAndProofs) == 0 {
return nil, nil
}
block := signedBlock.Block()
blockBody := block.Body()
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
if len(blobKzgCommitments) != len(cellsAndProofs) {
return nil, ErrSizeMismatch
}
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof KZG commitments")
}
dataColumnSidecars, err := dataColumnsSidecars(signedBlockHeader, blobKzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars")
}
return dataColumnSidecars, nil
}
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
@@ -194,72 +148,3 @@ func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
return columns, nil
}
// dataColumnsSidecars computes the data column sidecars from the signed block header, the blob KZG commiments,
// the KZG commitment includion proofs and cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alterates input parameters afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars
func dataColumnsSidecars(
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
blobKzgCommitments [][]byte,
kzgCommitmentsInclusionProof [][]byte,
cellsAndProofs []kzg.CellsAndProofs,
) ([]*ethpb.DataColumnSidecar, error) {
start := time.Now()
if len(blobKzgCommitments) != len(cellsAndProofs) {
return nil, ErrSizeMismatch
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobsCount := len(cellsAndProofs)
sidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
for columnIndex := range numberOfColumns {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := range blobsCount {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
// Validate that we have enough cells and proofs for this column index
if columnIndex >= uint64(len(cellsForRow)) {
return nil, errors.Errorf("column index %d exceeds cells length %d for blob %d", columnIndex, len(cellsForRow), rowIndex)
}
if columnIndex >= uint64(len(proofsForRow)) {
return nil, errors.Errorf("column index %d exceeds proofs length %d for blob %d", columnIndex, len(proofsForRow), rowIndex)
}
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, kzgProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
Index: columnIndex,
Column: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProofs: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return sidecars, nil
}

View File

@@ -3,13 +3,9 @@ package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/p2p/enode"
)
@@ -31,93 +27,6 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
}
func TestDataColumnSidecars(t *testing.T) {
t.Run("nil signed block", func(t *testing.T) {
var expected []*ethpb.DataColumnSidecar = nil
actual, err := peerdas.DataColumnSidecars(nil, []kzg.CellsAndProofs{})
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("empty cells and proofs", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
actual, err := peerdas.DataColumnSidecars(signedBeaconBlock, []kzg.CellsAndProofs{})
require.NoError(t, err)
require.IsNil(t, actual)
})
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs.
cellsAndProofs := make([]kzg.CellsAndProofs, 1)
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
})
t.Run("cells array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with insufficient cells for the number of columns.
// This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, 10), // Only 10 cells
Proofs: make([]kzg.Proof, 10), // Only 10 proofs
},
}
// This should fail because the function will try to access columns up to NumberOfColumns
// but we only have 10 cells/proofs.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds cells length", err)
})
t.Run("proofs array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
},
}
// This should fail when trying to access proof beyond index 4.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds proofs length", err)
})
}
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()

View File

@@ -63,17 +63,14 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrInvalidKZGProof)
})
t.Run("nominal", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.NoError(t, err)
})
}
@@ -256,9 +253,8 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
for i := range int64(b.N) {
// Generate new random sidecars to ensure the KZG backend does not cache anything.
sidecars := generateRandomSidecars(b, i, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars)
for _, sidecar := range roDataColumnSidecars {
for _, sidecar := range sidecars {
sidecars := []blocks.RODataColumn{sidecar}
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
@@ -282,7 +278,7 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.
b.ResetTimer()
for j := range int64(b.N) {
allSidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
allSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for k := int64(0); k < numberOfColumns; k += columnsCount {
// Use different seeds to generate different blobs/commitments
seed := int64(b.N*i) + numberOfColumns*j + blobCount*k
@@ -292,10 +288,8 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.
allSidecars = append(allSidecars, sidecars[k:k+columnsCount]...)
}
roDataColumnSidecars := generateRODataColumnSidecars(b, allSidecars)
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(allSidecars)
b.StopTimer()
require.NoError(b, err)
}
@@ -323,8 +317,7 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing
for j := range int64(batchCount) {
// Use different seeds to generate different blobs/commitments
sidecars := generateRandomSidecars(b, int64(batchCount)*i+j*blobCount, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars[:columnsCount])
allSidecars = append(allSidecars, roDataColumnSidecars)
allSidecars = append(allSidecars, sidecars)
}
for _, sidecars := range allSidecars {
@@ -358,7 +351,7 @@ func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgPr
return roSidecar
}
func generateRandomSidecars(t testing.TB, seed, blobCount int64) []*ethpb.DataColumnSidecar {
func generateRandomSidecars(t testing.TB, seed, blobCount int64) []blocks.RODataColumn {
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
@@ -379,20 +372,10 @@ func generateRandomSidecars(t testing.TB, seed, blobCount int64) []*ethpb.DataCo
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
rob, err := blocks.NewROBlock(sBlock)
require.NoError(t, err)
sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.NoError(t, err)
return sidecars
}
func generateRODataColumnSidecars(t testing.TB, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}

View File

@@ -1,11 +1,13 @@
package peerdas
import (
"sort"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
pb "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
@@ -16,6 +18,7 @@ var (
ErrBlobIndexTooHigh = errors.New("blob index is too high")
ErrBlockRootMismatch = errors.New("block root mismatch")
ErrBlobsCellsProofsMismatch = errors.New("blobs and cells proofs mismatch")
ErrNilBlobAndProof = errors.New("nil blob and proof")
)
// MinimumColumnCountToReconstruct returns the minimum number of columns needed to perform a reconstruction.
@@ -27,20 +30,21 @@ func MinimumColumnCountToReconstruct() uint64 {
// ReconstructDataColumnSidecars reconstructs all the data column sidecars from the given input data column sidecars.
// All input sidecars must be committed to the same block.
// `inVerifiedRoSidecars` should contain enough (unique) sidecars to reconstruct the missing columns.
func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
// `verifiedRoSidecars` should contain enough sidecars to reconstruct the missing columns, and should not contain any duplicates.
// WARNING: This function sorts `verifiedRoSidecars` in place by index.
func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
// Check if there is at least one input sidecar.
if len(inVerifiedRoSidecars) == 0 {
if len(verifiedRoSidecars) == 0 {
return nil, ErrNotEnoughDataColumnSidecars
}
// Safely retrieve the first sidecar as a reference.
referenceSidecar := inVerifiedRoSidecars[0]
referenceSidecar := verifiedRoSidecars[0]
// Check if all columns have the same length and are committed to the same block.
blobCount := len(referenceSidecar.Column)
blockRoot := referenceSidecar.BlockRoot()
for _, sidecar := range inVerifiedRoSidecars[1:] {
for _, sidecar := range verifiedRoSidecars[1:] {
if len(sidecar.Column) != blobCount {
return nil, ErrColumnLengthsDiffer
}
@@ -50,23 +54,16 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
}
}
// Deduplicate sidecars.
sidecarByIndex := make(map[uint64]blocks.VerifiedRODataColumn, len(inVerifiedRoSidecars))
for _, inVerifiedRoSidecar := range inVerifiedRoSidecars {
sidecarByIndex[inVerifiedRoSidecar.Index] = inVerifiedRoSidecar
}
// Check if there is enough sidecars to reconstruct the missing columns.
sidecarCount := len(sidecarByIndex)
sidecarCount := len(verifiedRoSidecars)
if uint64(sidecarCount) < MinimumColumnCountToReconstruct() {
return nil, ErrNotEnoughDataColumnSidecars
}
// Sidecars are verified and are committed to the same block.
// All signed block headers, KZG commitments, and inclusion proofs are the same.
signedBlockHeader := referenceSidecar.SignedBlockHeader
kzgCommitments := referenceSidecar.KzgCommitments
kzgCommitmentsInclusionProof := referenceSidecar.KzgCommitmentsInclusionProof
// Sort the input sidecars by index.
sort.Slice(verifiedRoSidecars, func(i, j int) bool {
return verifiedRoSidecars[i].Index < verifiedRoSidecars[j].Index
})
// Recover cells and compute proofs in parallel.
var wg errgroup.Group
@@ -76,10 +73,10 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
cellsIndices := make([]uint64, 0, sidecarCount)
cells := make([]kzg.Cell, 0, sidecarCount)
for columnIndex, sidecar := range sidecarByIndex {
for _, sidecar := range verifiedRoSidecars {
cell := sidecar.Column[blobIndex]
cells = append(cells, kzg.Cell(cell))
cellsIndices = append(cellsIndices, columnIndex)
cellsIndices = append(cellsIndices, sidecar.Index)
}
// Recover the cells and proofs for the corresponding blob
@@ -100,78 +97,20 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
return nil, errors.Wrap(err, "wait for RecoverCellsAndKZGProofs")
}
outSidecars, err := dataColumnsSidecars(signedBlockHeader, kzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
outSidecars, err := DataColumnSidecars(cellsAndProofs, PopulateFromSidecar(referenceSidecar))
if err != nil {
return nil, errors.Wrap(err, "data column sidecars from items")
}
// Input sidecars are verified, and we reconstructed the missing sidecars ourselves.
// As a consequence, reconstructed sidecars are also verified.
outVerifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(outSidecars))
reconstructedVerifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(outSidecars))
for _, sidecar := range outSidecars {
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, blockRoot)
if err != nil {
return nil, errors.Wrap(err, "new RO data column with root")
}
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roSidecar)
outVerifiedRoSidecars = append(outVerifiedRoSidecars, verifiedRoSidecar)
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(sidecar)
reconstructedVerifiedRoSidecars = append(reconstructedVerifiedRoSidecars, verifiedRoSidecar)
}
return outVerifiedRoSidecars, nil
}
// ConstructDataColumnSidecars constructs data column sidecars from a block, (un-extended) blobs and
// cell proofs corresponding the extended blobs. The main purpose of this function is to
// construct data column sidecars from data obtained from the execution client via:
// - `engine_getBlobsV2` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getblobsv2, or
// - `engine_getPayloadV5` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getpayloadv5
// Note: In this function, to stick with the `BlobsBundleV2` format returned by the execution client in `engine_getPayloadV5`,
// cell proofs are "flattened".
func ConstructDataColumnSidecars(block interfaces.ReadOnlySignedBeaconBlock, blobs [][]byte, cellProofs [][]byte) ([]*ethpb.DataColumnSidecar, error) {
// Check if the cells count is equal to the cell proofs count.
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
cellsCount := blobCount * numberOfColumns
if cellsCount != cellProofsCount {
return nil, ErrBlobsCellsProofsMismatch
}
cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: proofs}
cellsAndProofs = append(cellsAndProofs, cellsProofs)
}
dataColumnSidecars, err := DataColumnSidecars(block, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidcars")
}
return dataColumnSidecars, nil
return reconstructedVerifiedRoSidecars, nil
}
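A caller-side usage sketch for the reconstruction entry point above, assuming the node already holds verified sidecars for the same block covering at least MinimumColumnCountToReconstruct() distinct indices (typically half of the columns); names outside the peerdas package are illustrative:

package example

import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/pkg/errors"
)

// reconstructAll recovers the full set of column sidecars for one block from the
// verified columns we already hold. ReconstructDataColumnSidecars sorts its input
// in place by index, so callers should not rely on the original ordering afterwards.
func reconstructAll(held []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
	if uint64(len(held)) < peerdas.MinimumColumnCountToReconstruct() {
		return nil, peerdas.ErrNotEnoughDataColumnSidecars
	}
	all, err := peerdas.ReconstructDataColumnSidecars(held)
	if err != nil {
		return nil, errors.Wrap(err, "reconstruct data column sidecars")
	}
	return all, nil
}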
// ReconstructBlobs constructs verified read only blob sidecars from verified read only data column sidecars.
@@ -256,6 +195,89 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
return blobSidecars, nil
}
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and flattened cell proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([]kzg.CellsAndProofs, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
cellsCount := blobCount * numberOfColumns
if cellsCount != cellProofsCount {
return nil, ErrBlobsCellsProofsMismatch
}
cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: proofs}
cellsAndProofs = append(cellsAndProofs, cellsProofs)
}
return cellsAndProofs, nil
}
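Sketch of the payload path this helper is meant for: blobs plus a flattened list of blobCount * NumberOfColumns cell proofs (as delivered by engine_getPayloadV5) are expanded into per-blob cells/proofs and then assembled into column sidecars with the block as the metadata source. The peerdas functions are the ones in this file; the wrapper itself is illustrative:

package example

import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/pkg/errors"
)

// sidecarsFromPayload builds data column sidecars from blobs and flattened cell proofs.
// Proof indices i*NumberOfColumns .. (i+1)*NumberOfColumns-1 are assumed to belong to blob i.
func sidecarsFromPayload(block blocks.ROBlock, blobs, flatCellProofs [][]byte) ([]blocks.RODataColumn, error) {
	cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, flatCellProofs)
	if err != nil {
		return nil, errors.Wrap(err, "compute cells and proofs from flat")
	}
	sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(block))
	if err != nil {
		return nil, errors.Wrap(err, "data column sidecars")
	}
	return sidecars, nil
}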
// ComputeCellsAndProofsFromStructured computes the cells and proofs from structured blob and proof pairs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([]kzg.CellsAndProofs, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := make([]kzg.CellsAndProofs, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
if blobAndProof == nil {
return nil, ErrNilBlobAndProof
}
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, errors.New("wrong KZG proof size - should never happen")
}
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return nil, errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: kzgProofs}
cellsAndProofs = append(cellsAndProofs, cellsProofs)
}
return cellsAndProofs, nil
}
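A similar sketch for the structured variant, fed from engine_getBlobsV2 responses; a nil entry means the execution client could not serve that blob, so the caller is assumed to treat ErrNilBlobAndProof as "not all blobs available" rather than a hard failure:

package example

import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	pb "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
	"github.com/pkg/errors"
)

// cellsFromEngineBlobs converts blob/proof pairs returned by the execution client
// into per-blob cells and proofs, bailing out early when any entry is missing.
func cellsFromEngineBlobs(blobsAndProofs []*pb.BlobAndProofV2) ([]kzg.CellsAndProofs, error) {
	for _, bp := range blobsAndProofs {
		if bp == nil {
			return nil, peerdas.ErrNilBlobAndProof
		}
	}
	cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromStructured(blobsAndProofs)
	if err != nil {
		return nil, errors.Wrap(err, "compute cells and proofs from structured")
	}
	return cellsAndProofs, nil
}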
// blobSidecarsFromDataColumnSidecars converts verified data column sidecars to verified blob sidecars.
func blobSidecarsFromDataColumnSidecars(roBlock blocks.ROBlock, dataColumnSidecars []blocks.VerifiedRODataColumn, indices []int) ([]*blocks.VerifiedROBlob, error) {
referenceSidecar := dataColumnSidecars[0]

View File

@@ -9,7 +9,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
pb "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/pkg/errors"
@@ -124,56 +124,13 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
})
}
func TestConstructDataColumnSidecars(t *testing.T) {
const (
blobCount = 3
cellsPerBlob = fieldparams.CellsPerBlob
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
roBlock, _, baseVerifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
// Extract blobs and proofs from the sidecars.
blobs := make([][]byte, 0, blobCount)
cellProofs := make([][]byte, 0, cellsPerBlob)
for blobIndex := range blobCount {
blob := make([]byte, 0, cellsPerBlob)
for columnIndex := range cellsPerBlob {
cell := baseVerifiedRoSidecars[columnIndex].Column[blobIndex]
blob = append(blob, cell...)
}
blobs = append(blobs, blob)
for columnIndex := range numberOfColumns {
cellProof := baseVerifiedRoSidecars[columnIndex].KzgProofs[blobIndex]
cellProofs = append(cellProofs, cellProof)
}
}
actual, err := peerdas.ConstructDataColumnSidecars(roBlock, blobs, cellProofs)
require.NoError(t, err)
// Extract the base verified ro sidecars into sidecars.
expected := make([]*ethpb.DataColumnSidecar, 0, len(baseVerifiedRoSidecars))
for _, verifiedRoSidecar := range baseVerifiedRoSidecars {
expected = append(expected, verifiedRoSidecar.DataColumnSidecar)
}
require.DeepSSZEqual(t, expected, actual)
}
func TestReconstructBlobs(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
require.NoError(t, kzg.Start())
var emptyBlock blocks.ROBlock
fs := util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)
t.Run("no index", func(t *testing.T) {
actual, err := peerdas.ReconstructBlobs(emptyBlock, nil, nil)
@@ -234,10 +191,10 @@ func TestReconstructBlobs(t *testing.T) {
})
t.Run("not committed to the same block", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{1}))
roBlock, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{2}))
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{1}), util.WithSlot(fs))
roBlock, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{2}), util.WithSlot(fs))
_, err = peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, []int{0})
_, err := peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, []int{0})
require.ErrorContains(t, peerdas.ErrRootMismatch.Error(), err)
})
@@ -250,7 +207,7 @@ func TestReconstructBlobs(t *testing.T) {
// Compute cells and proofs from blob sidecars.
var wg errgroup.Group
blobs := make([][]byte, blobCount)
cellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
inputCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
for i := range blobCount {
blob := roBlobSidecars[i].Blob
blobs[i] = blob
@@ -267,7 +224,7 @@ func TestReconstructBlobs(t *testing.T) {
// It is safe for multiple goroutines to concurrently write to the same slice,
// as long as they are writing to different indices, which is the case here.
cellsAndProofs[i] = cp
inputCellsAndProofs[i] = cp
return nil
})
@@ -278,25 +235,24 @@ func TestReconstructBlobs(t *testing.T) {
// Flatten proofs.
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
for _, cp := range cellsAndProofs {
for _, cp := range inputCellsAndProofs {
for _, proof := range cp.Proofs {
cellProofs = append(cellProofs, proof[:])
}
}
// Construct data column sidecars.
// It is OK to use the public function `ConstructDataColumnSidecars`, as long as
// `TestConstructDataColumnSidecars` tests pass.
dataColumnSidecars, err := peerdas.ConstructDataColumnSidecars(roBlock, blobs, cellProofs)
// Compute cells and proofs from the blobs and cell proofs.
cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
require.NoError(t, err)
// Construct data column sidecars from the signed block and the cells and proofs.
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(roBlock))
require.NoError(t, err)
// Convert to verified data column sidecars.
verifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecars))
for _, dataColumnSidecar := range dataColumnSidecars {
roSidecar, err := blocks.NewRODataColumn(dataColumnSidecar)
require.NoError(t, err)
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roSidecar)
verifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumnSidecars))
for _, roDataColumnSidecar := range roDataColumnSidecars {
verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roDataColumnSidecar)
verifiedRoSidecars = append(verifiedRoSidecars, verifiedRoSidecar)
}
@@ -339,3 +295,162 @@ func TestReconstructBlobs(t *testing.T) {
})
}
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
t.Run("mismatched blob and proof counts", func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Create one blob but proofs for two blobs
blobs := [][]byte{{}}
// Create proofs for 2 blobs worth of columns
cellProofs := make([][]byte, 2*numberOfColumns)
_, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
require.ErrorIs(t, err, peerdas.ErrBlobsCellsProofsMismatch)
})
t.Run("nominal", func(t *testing.T) {
const blobCount = 2
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
// Extract blobs and compute expected cells and proofs
blobs := make([][]byte, blobCount)
expectedCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
var wg errgroup.Group
for i := range blobCount {
blob := roBlobSidecars[i].Blob
blobs[i] = blob
wg.Go(func() error {
var kzgBlob kzg.Blob
count := copy(kzgBlob[:], blob)
require.Equal(t, len(kzgBlob), count)
cp, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
if err != nil {
return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
}
expectedCellsAndProofs[i] = cp
return nil
})
}
err := wg.Wait()
require.NoError(t, err)
// Flatten proofs
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
for _, cp := range expectedCellsAndProofs {
for _, proof := range cp.Proofs {
cellProofs = append(cellProofs, proof[:])
}
}
// Test ComputeCellsAndProofsFromFlat
actualCellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
require.NoError(t, err)
require.Equal(t, blobCount, len(actualCellsAndProofs))
// Verify the results match expected
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {
require.Equal(t, expectedCell, actualCellsAndProofs[i].Cells[j])
}
// Compare proofs
for j, expectedProof := range expectedCellsAndProofs[i].Proofs {
require.Equal(t, expectedProof, actualCellsAndProofs[i].Proofs[j])
}
}
})
}
func TestComputeCellsAndProofsFromStructured(t *testing.T) {
t.Run("nil blob and proof", func(t *testing.T) {
_, err := peerdas.ComputeCellsAndProofsFromStructured([]*pb.BlobAndProofV2{nil})
require.ErrorIs(t, err, peerdas.ErrNilBlobAndProof)
})
t.Run("nominal", func(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
const blobCount = 2
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
// Extract blobs and compute expected cells and proofs
blobsAndProofs := make([]*pb.BlobAndProofV2, blobCount)
expectedCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
var wg errgroup.Group
for i := range blobCount {
blob := roBlobSidecars[i].Blob
wg.Go(func() error {
var kzgBlob kzg.Blob
count := copy(kzgBlob[:], blob)
require.Equal(t, len(kzgBlob), count)
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
if err != nil {
return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
}
expectedCellsAndProofs[i] = cellsAndProofs
kzgProofs := make([][]byte, 0, len(cellsAndProofs.Proofs))
for _, proof := range cellsAndProofs.Proofs {
kzgProofs = append(kzgProofs, proof[:])
}
blobAndProof := &pb.BlobAndProofV2{
Blob: blob,
KzgProofs: kzgProofs,
}
blobsAndProofs[i] = blobAndProof
return nil
})
}
err = wg.Wait()
require.NoError(t, err)
// Test ComputeCellsAndProofsFromStructured
actualCellsAndProofs, err := peerdas.ComputeCellsAndProofsFromStructured(blobsAndProofs)
require.NoError(t, err)
require.Equal(t, blobCount, len(actualCellsAndProofs))
// Verify the results match expected
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {
require.Equal(t, expectedCell, actualCellsAndProofs[i].Cells[j])
}
// Compare proofs
for j, expectedProof := range expectedCellsAndProofs[i].Proofs {
require.Equal(t, expectedProof, actualCellsAndProofs[i].Proofs[j])
}
}
})
}

View File

@@ -1,12 +1,76 @@
package peerdas
import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
beaconState "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
var (
ErrNilSignedBlockOrEmptyCellsAndProofs = errors.New("nil signed block or empty cells and proofs")
ErrSizeMismatch = errors.New("mismatch in the number of blob KZG commitments and cellsAndProofs")
ErrNotEnoughDataColumnSidecars = errors.New("not enough columns")
ErrDataColumnSidecarsNotSortedByIndex = errors.New("data column sidecars are not sorted by index")
)
var (
_ ConstructionPopulator = (*BlockReconstructionSource)(nil)
_ ConstructionPopulator = (*SidecarReconstructionSource)(nil)
)
const (
BlockType = "BeaconBlock"
SidecarType = "DataColumnSidecar"
)
type (
// ConstructionPopulator is an interface satisfied by types, such as a DataColumnSidecar or a BeaconBlock,
// that can supply the fields of a data column sidecar which cannot be obtained from the engine API.
ConstructionPopulator interface {
Slot() primitives.Slot
Root() [fieldparams.RootLength]byte
ProposerIndex() primitives.ValidatorIndex
Commitments() ([][]byte, error)
Type() string
extract() (*blockInfo, error)
}
// BlockReconstructionSource is a ConstructionPopulator that uses a beacon block as the source of data
BlockReconstructionSource struct {
blocks.ROBlock
}
// SidecarReconstructionSource is a ConstructionPopulator that uses a data column sidecar as the source of data
SidecarReconstructionSource struct {
blocks.VerifiedRODataColumn
}
blockInfo struct {
signedBlockHeader *ethpb.SignedBeaconBlockHeader
kzgCommitments [][]byte
kzgInclusionProof [][]byte
}
)
// PopulateFromBlock creates a BlockReconstructionSource from a beacon block
func PopulateFromBlock(block blocks.ROBlock) *BlockReconstructionSource {
return &BlockReconstructionSource{ROBlock: block}
}
// PopulateFromSidecar creates a SidecarReconstructionSource from a data column sidecar
func PopulateFromSidecar(sidecar blocks.VerifiedRODataColumn) *SidecarReconstructionSource {
return &SidecarReconstructionSource{VerifiedRODataColumn: sidecar}
}
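Both sources satisfy ConstructionPopulator, so the same assembly code runs whether the header, commitments, and inclusion proof come from a block or from an already-verified sidecar. An illustrative chooser (the selection policy here is hypothetical):

package example

import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
)

// populatorFor picks the metadata source for sidecar construction: the block when one
// is available, otherwise any verified sidecar already held for the same root.
func populatorFor(block *blocks.ROBlock, sidecar *blocks.VerifiedRODataColumn) peerdas.ConstructionPopulator {
	if block != nil {
		return peerdas.PopulateFromBlock(*block)
	}
	return peerdas.PopulateFromSidecar(*sidecar)
}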
// ValidatorsCustodyRequirement returns the number of custody groups required for the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
@@ -28,3 +92,158 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
}
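A worked example of the clamp on the final line above, with illustrative constants standing in for the config values (assumed here: a minimum of 8 custody groups, 32 ETH per additional group, 128 groups total; the real numbers come from params.BeaconConfig()):

package example

// custodyRequirement mirrors the final clamp in ValidatorsCustodyRequirement:
// one custody group per 32 ETH of attached validator balance, never below the
// validator custody minimum and never above the total number of custody groups.
// All constants are illustrative.
func custodyRequirement(totalNodeBalanceGwei uint64) uint64 {
	const (
		validatorCustodyRequirement          = 8              // assumed minimum
		balancePerAdditionalCustodyGroupGwei = 32_000_000_000 // assumed 32 ETH in Gwei
		numberOfCustodyGroups                = 128            // assumed total groups
	)
	count := totalNodeBalanceGwei / balancePerAdditionalCustodyGroupGwei
	return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups)
}

// With these assumptions, 10 validators of 32 ETH (320 ETH) yield max(10, 8) = 10 groups,
// while 5,000 validators (160,000 ETH) are capped at min(5000, 128) = 128 groups.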
// DataColumnSidecars, given a ConstructionPopulator and the cells/proofs associated with each blob in the
// block, assembles sidecars which can be distributed to peers.
// This is an adapted version of
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars,
// which is designed to be used both when constructing sidecars from a block and from a sidecar, replacing
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
func DataColumnSidecars(rows []kzg.CellsAndProofs, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
if len(rows) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(rows, params.BeaconConfig().NumberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
maxIdx := params.BeaconConfig().NumberOfColumns
roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
for idx := range maxIdx {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
KzgCommitments: info.kzgCommitments,
KzgProofs: proofs[idx],
SignedBlockHeader: info.signedBlockHeader,
KzgCommitmentsInclusionProof: info.kzgInclusionProof,
}
if len(sidecar.KzgCommitments) != len(sidecar.Column) || len(sidecar.KzgCommitments) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, src.Root())
if err != nil {
return nil, errors.Wrap(err, "new ro data column")
}
roSidecars = append(roSidecars, roSidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return roSidecars, nil
}
// Slot returns the slot of the source
func (s *BlockReconstructionSource) Slot() primitives.Slot {
return s.Block().Slot()
}
// ProposerIndex returns the proposer index of the source
func (s *BlockReconstructionSource) ProposerIndex() primitives.ValidatorIndex {
return s.Block().ProposerIndex()
}
// Commitments returns the blob KZG commitments of the source
func (s *BlockReconstructionSource) Commitments() ([][]byte, error) {
c, err := s.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
return c, nil
}
// Type returns the type of the source
func (s *BlockReconstructionSource) Type() string {
return BlockType
}
// extract extracts the block information from the source
func (b *BlockReconstructionSource) extract() (*blockInfo, error) {
block := b.Block()
header, err := b.Header()
if err != nil {
return nil, errors.Wrap(err, "header")
}
commitments, err := block.Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "commitments")
}
inclusionProof, err := blocks.MerkleProofKZGCommitments(block.Body())
if err != nil {
return nil, errors.Wrap(err, "merkle proof kzg commitments")
}
info := &blockInfo{
signedBlockHeader: header,
kzgCommitments: commitments,
kzgInclusionProof: inclusionProof,
}
return info, nil
}
// rotateRowsToCols takes a 2D slice of cells and proofs indexed first by row (blob) and then by column,
// and returns 2D slices indexed first by column and then by row.
func rotateRowsToCols(rows []kzg.CellsAndProofs, numCols uint64) ([][][]byte, [][][]byte, error) {
if len(rows) == 0 {
return nil, nil, nil
}
cellCols := make([][][]byte, numCols)
proofCols := make([][][]byte, numCols)
for i, cp := range rows {
if uint64(len(cp.Cells)) != numCols {
return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough cells")
}
if len(cp.Cells) != len(cp.Proofs) {
return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough proofs")
}
for j := uint64(0); j < numCols; j++ {
if i == 0 {
cellCols[j] = make([][]byte, len(rows))
proofCols[j] = make([][]byte, len(rows))
}
cellCols[j][i] = cp.Cells[j][:]
proofCols[j][i] = cp.Proofs[j][:]
}
}
return cellCols, proofCols, nil
}
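The rotation above is a plain transpose: blob r contributes its j-th cell and proof to column j. A toy illustration on byte slices (illustrative only, assumes every row has at least numCols entries):

package example

// transpose mirrors rotateRowsToCols on plain byte slices: rows[r][c] becomes cols[c][r].
func transpose(rows [][][]byte, numCols int) [][][]byte {
	cols := make([][][]byte, numCols)
	for c := range cols {
		cols[c] = make([][]byte, len(rows))
	}
	for r, row := range rows {
		for c := 0; c < numCols; c++ {
			cols[c][r] = row[c]
		}
	}
	return cols
}

// With rows = [[a0 a1 a2 a3], [b0 b1 b2 b3]] (2 blobs, 4 columns),
// transpose(rows, 4) = [[a0 b0], [a1 b1], [a2 b2], [a3 b3]]: each output column
// holds one cell from every blob, which is exactly the shape a DataColumnSidecar needs.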
// Root returns the block root of the source
func (s *SidecarReconstructionSource) Root() [fieldparams.RootLength]byte {
return s.BlockRoot()
}
// Commitments returns the blob KZG commitments of the source
func (s *SidecarReconstructionSource) Commitments() ([][]byte, error) {
return s.KzgCommitments, nil
}
// Type returns the type of the source
func (s *SidecarReconstructionSource) Type() string {
return SidecarType
}
// extract extracts the block information from the source
func (s *SidecarReconstructionSource) extract() (*blockInfo, error) {
info := &blockInfo{
signedBlockHeader: s.SignedBlockHeader,
kzgCommitments: s.KzgCommitments,
kzgInclusionProof: s.KzgCommitmentsInclusionProof,
}
return info, nil
}

View File

@@ -3,11 +3,15 @@ package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestValidatorsCustodyRequirement(t *testing.T) {
@@ -53,3 +57,218 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
})
}
}
func TestDataColumnSidecars(t *testing.T) {
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
Proofs: make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
},
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
_, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
})
t.Run("cells array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with insufficient cells for the number of columns.
// This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, 10), // Only 10 cells
Proofs: make([]kzg.Proof, 10), // Only 10 proofs
},
}
// This should fail because the function will try to access columns up to NumberOfColumns
// but we only have 10 cells/proofs.
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
_, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
})
t.Run("proofs array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
},
}
// This should fail because the number of proofs does not match the number of cells.
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
_, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
require.ErrorContains(t, "not enough proofs", err)
})
t.Run("nominal", func(t *testing.T) {
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
commitment2 := make([]byte, 48)
// Set different values to distinguish commitments
commitment1[0] = 0x01
commitment2[0] = 0x02
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{commitment1, commitment2}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
}
// Set distinct values in cells and proofs for testing
for i := range numberOfColumns {
cellsAndProofs[0].Cells[i][0] = byte(i)
cellsAndProofs[0].Proofs[i][0] = byte(i)
cellsAndProofs[1].Cells[i][0] = byte(i + 128)
cellsAndProofs[1].Proofs[i][0] = byte(i + 128)
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.NoError(t, err)
require.NotNil(t, sidecars)
require.Equal(t, int(numberOfColumns), len(sidecars))
// Verify each sidecar has the expected structure
for i, sidecar := range sidecars {
require.Equal(t, uint64(i), sidecar.Index)
require.Equal(t, 2, len(sidecar.Column))
require.Equal(t, 2, len(sidecar.KzgCommitments))
require.Equal(t, 2, len(sidecar.KzgProofs))
// Verify commitments match what we set
require.DeepEqual(t, commitment1, sidecar.KzgCommitments[0])
require.DeepEqual(t, commitment2, sidecar.KzgCommitments[1])
// Verify column data comes from the correct cells
require.Equal(t, byte(i), sidecar.Column[0][0])
require.Equal(t, byte(i+128), sidecar.Column[1][0])
// Verify proofs come from the correct proofs
require.Equal(t, byte(i), sidecar.KzgProofs[0][0])
require.Equal(t, byte(i+128), sidecar.KzgProofs[1][0])
}
})
}
func TestReconstructionSource(t *testing.T) {
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
commitment2 := make([]byte, 48)
// Set different values to distinguish commitments
commitment1[0] = 0x01
commitment2[0] = 0x02
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{commitment1, commitment2}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, numberOfColumns),
},
}
// Set distinct values in cells and proofs for testing
for i := range numberOfColumns {
cellsAndProofs[0].Cells[i][0] = byte(i)
cellsAndProofs[0].Proofs[i][0] = byte(i)
cellsAndProofs[1].Cells[i][0] = byte(i + 128)
cellsAndProofs[1].Proofs[i][0] = byte(i + 128)
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
require.NoError(t, err)
require.NotNil(t, sidecars)
require.Equal(t, int(numberOfColumns), len(sidecars))
t.Run("from block", func(t *testing.T) {
src := peerdas.PopulateFromBlock(rob)
require.Equal(t, rob.Block().Slot(), src.Slot())
require.Equal(t, rob.Root(), src.Root())
require.Equal(t, rob.Block().ProposerIndex(), src.ProposerIndex())
commitments, err := src.Commitments()
require.NoError(t, err)
require.Equal(t, 2, len(commitments))
require.DeepEqual(t, commitment1, commitments[0])
require.DeepEqual(t, commitment2, commitments[1])
require.Equal(t, peerdas.BlockType, src.Type())
})
t.Run("from sidecar", func(t *testing.T) {
referenceSidecar := blocks.NewVerifiedRODataColumn(sidecars[0])
src := peerdas.PopulateFromSidecar(referenceSidecar)
require.Equal(t, referenceSidecar.Slot(), src.Slot())
require.Equal(t, referenceSidecar.BlockRoot(), src.Root())
require.Equal(t, referenceSidecar.ProposerIndex(), src.ProposerIndex())
commitments, err := src.Commitments()
require.NoError(t, err)
require.Equal(t, 2, len(commitments))
require.DeepEqual(t, commitment1, commitments[0])
require.DeepEqual(t, commitment2, commitments[1])
require.Equal(t, peerdas.SidecarType, src.Type())
})
}

View File

@@ -16,61 +16,60 @@ func TestDataColumnsAlignWithBlock(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
fs := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
require.NoError(t, err)
fuluMax := params.BeaconConfig().MaxBlobsPerBlock(fs)
t.Run("pre fulu", func(t *testing.T) {
block, _ := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 0, 0)
block, _ := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, fs, 0)
err := peerdas.DataColumnsAlignWithBlock(block, nil)
require.NoError(t, err)
})
t.Run("too many commitmnets", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.BlobSchedule = []params.BlobScheduleEntry{{}}
params.OverrideBeaconConfig(config)
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3)
t.Run("too many commitments", func(t *testing.T) {
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, fuluMax+1, util.WithSlot(fs))
err := peerdas.DataColumnsAlignWithBlock(block, nil)
require.ErrorIs(t, err, peerdas.ErrTooManyCommitments)
})
t.Run("root mismatch", func(t *testing.T) {
_, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 0)
_, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 0, util.WithSlot(fs))
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrRootMismatch)
})
t.Run("column size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].Column = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG commitments size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgCommitments = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG proofs mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgProofs = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("commitment mismatch", func(t *testing.T) {
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
_, alteredSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
_, alteredSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
alteredSidecars[1].KzgCommitments[0][0]++ // Overflow is OK
err := peerdas.DataColumnsAlignWithBlock(block, alteredSidecars)
require.ErrorIs(t, err, peerdas.ErrCommitmentMismatch)
})
t.Run("nominal", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.NoError(t, err)
})

View File

@@ -8,7 +8,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
b "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition/interop"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -374,15 +376,25 @@ func ProcessBlockForStateRoot(
}
// This calls altair block operations.
func altairOperations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
hasSlashings := len(beaconBlock.Body().ProposerSlashings()) > 0 || len(beaconBlock.Body().AttesterSlashings()) > 0
// exitInfo is only needed for voluntary exits pre Electra.
hasExits := st.Version() < version.Electra && len(beaconBlock.Body().VoluntaryExits()) > 0
exitInfo := &validators.ExitInfo{}
if hasSlashings || hasExits {
// ExitInformation is expensive to compute, only do it if we need it.
exitInfo = v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -393,7 +405,7 @@ func altairOperations(
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
@@ -401,15 +413,23 @@ func altairOperations(
}
// This calls phase 0 block operations.
func phase0Operations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
hasSlashings := len(beaconBlock.Body().ProposerSlashings()) > 0 || len(beaconBlock.Body().AttesterSlashings()) > 0
hasExits := len(beaconBlock.Body().VoluntaryExits()) > 0
var exitInfo *v.ExitInfo
if hasSlashings || hasExits {
// ExitInformation is expensive to compute, only do it if we need it.
exitInfo = v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block proposer slashings")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block attester slashings")
}
@@ -420,5 +440,9 @@ func phase0Operations(
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
}
return b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
return st, nil
}
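
Both operation paths above follow the same pattern: exit information is computed only when the block actually contains slashings (or, pre-Electra, voluntary exits), and the single ExitInfo is then threaded through every operation that needs it. The following is a condensed, hedged sketch of that pattern, not the exact Prysm code; helper names other than ExitInformation, UpdateTotalActiveBalanceCache and the Process* functions shown in this diff are assumptions.

// Condensed sketch of the lazy-ExitInfo pattern used by altairOperations and
// phase0Operations above.
func processWithLazyExitInfo(ctx context.Context, st state.BeaconState, blk interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
	body := blk.Body()
	needsExitInfo := len(body.ProposerSlashings()) > 0 ||
		len(body.AttesterSlashings()) > 0 ||
		(st.Version() < version.Electra && len(body.VoluntaryExits()) > 0)

	var exitInfo *v.ExitInfo
	if needsExitInfo {
		// One pass over the validator registry yields the highest exit epoch,
		// churn and total active balance, instead of recomputing them per operation.
		exitInfo = v.ExitInformation(st)
		if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
			return nil, err
		}
	}
	st, err := b.ProcessProposerSlashings(ctx, st, body.ProposerSlashings(), exitInfo)
	if err != nil {
		return nil, err
	}
	st, err = b.ProcessAttesterSlashings(ctx, st, body.AttesterSlashings(), exitInfo)
	if err != nil {
		return nil, err
	}
	return b.ProcessVoluntaryExits(ctx, st, body.VoluntaryExits(), exitInfo)
}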

View File

@@ -19,28 +19,48 @@ import (
"github.com/pkg/errors"
)
// ExitInfo provides information about validator exits in the state.
type ExitInfo struct {
HighestExitEpoch primitives.Epoch
Churn uint64
TotalActiveBalance uint64
}
// ErrValidatorAlreadyExited is an error raised when trying to process an exit of
// an already exited validator
var ErrValidatorAlreadyExited = errors.New("validator already exited")
// MaxExitEpochAndChurn returns the maximum non-FAR_FUTURE_EPOCH exit
// epoch and the number of them
func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, churn uint64) {
// ExitInformation returns information about validator exits.
func ExitInformation(s state.BeaconState) *ExitInfo {
exitInfo := &ExitInfo{}
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
currentEpoch := slots.ToEpoch(s.Slot())
totalActiveBalance := uint64(0)
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
e := val.ExitEpoch()
if e != farFutureEpoch {
if e > maxExitEpoch {
maxExitEpoch = e
churn = 1
} else if e == maxExitEpoch {
churn++
if e > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = e
exitInfo.Churn = 1
} else if e == exitInfo.HighestExitEpoch {
exitInfo.Churn++
}
}
// Calculate total active balance in the same loop
if helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
totalActiveBalance += val.EffectiveBalance()
}
return nil
})
_ = err
return
// Apply minimum balance as per spec
exitInfo.TotalActiveBalance = max(params.BeaconConfig().EffectiveBalanceIncrement, totalActiveBalance)
return exitInfo
}
// InitiateValidatorExit takes in validator index and updates
@@ -64,59 +84,122 @@ func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, c
// # Set validator exit epoch and withdrawable epoch
// validator.exit_epoch = exit_queue_epoch
// validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex, exitQueueEpoch primitives.Epoch, churn uint64) (state.BeaconState, primitives.Epoch, error) {
func InitiateValidatorExit(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, 0, err
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, validator.ExitEpoch, ErrValidatorAlreadyExited
return s, ErrValidatorAlreadyExited
}
if exitInfo == nil {
return nil, errors.New("exit info is required to process validator exit")
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitQueueEpoch {
exitQueueEpoch = exitableEpoch
churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return nil, 0, errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if churn >= currentChurn {
exitQueueEpoch, err = exitQueueEpoch.SafeAdd(1)
if err != nil {
return nil, 0, err
}
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitQueueEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, 0, err
return nil, err
}
}
validator.ExitEpoch = exitQueueEpoch
validator.WithdrawableEpoch, err = exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, 0, err
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, 0, err
return nil, err
}
return s, exitQueueEpoch, nil
return s, nil
}
// InitiateValidatorExitForTotalBal has the same functionality as InitiateValidatorExit,
// the only difference being how total active balance is obtained. In InitiateValidatorExit
// it is calculated inside the function and in InitiateValidatorExitForTotalBal it's a
// function argument.
func InitiateValidatorExitForTotalBal(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
totalActiveBalance primitives.Gwei,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance, primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, err
}
}
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
}
return s, nil
}
func initiateValidatorExitPreElectra(ctx context.Context, s state.BeaconState, exitInfo *ExitInfo) error {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitInfo == nil {
return errors.New("exit info is required to process validator exit")
}
if exitableEpoch > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = exitableEpoch
exitInfo.Churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if exitInfo.Churn >= currentChurn {
exitInfo.HighestExitEpoch, err = exitInfo.HighestExitEpoch.SafeAdd(1)
if err != nil {
return err
}
exitInfo.Churn = 1
} else {
exitInfo.Churn = exitInfo.Churn + 1
}
return nil
}
// SlashValidator slashes the malicious validator's balance and awards
@@ -152,9 +235,14 @@ func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primiti
func SlashValidator(
ctx context.Context,
s state.BeaconState,
slashedIdx primitives.ValidatorIndex) (state.BeaconState, error) {
maxExitEpoch, churn := MaxExitEpochAndChurn(s)
s, _, err := InitiateValidatorExit(ctx, s, slashedIdx, maxExitEpoch, churn)
slashedIdx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
var err error
if exitInfo == nil {
return nil, errors.New("exit info is required to slash validator")
}
s, err = InitiateValidatorExitForTotalBal(ctx, s, slashedIdx, exitInfo, primitives.Gwei(exitInfo.TotalActiveBalance))
if err != nil && !errors.Is(err, ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)
}
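
To make the pre-Electra queueing rule in initiateValidatorExitPreElectra concrete: the exit epoch is first clamped up to compute_activation_exit_epoch(current_epoch), resetting churn if it moves; the new exit then either joins that epoch (churn + 1) or, if churn has already reached the validator exit churn limit, spills into the next epoch with churn reset to 1. The standalone sketch below isolates just this rule, with the exitable epoch and churn limit passed in as plain numbers purely for illustration.

package main

import "fmt"

// queueExit applies the pre-Electra exit queue rule sketched above to an
// (epoch, churn) pair. exitableEpoch stands in for
// compute_activation_exit_epoch(current_epoch) and churnLimit for
// get_validator_churn_limit(state).
func queueExit(highestExitEpoch, exitableEpoch, churn, churnLimit uint64) (uint64, uint64) {
	if exitableEpoch > highestExitEpoch {
		highestExitEpoch = exitableEpoch
		churn = 0
	}
	if churn >= churnLimit {
		return highestExitEpoch + 1, 1 // queue at this epoch is full, spill over
	}
	return highestExitEpoch, churn + 1
}

func main() {
	// Same shape as the ChurnOverflow test in the file below: four exits already
	// queued at the highest exit epoch with a churn limit of four push the new
	// exit one epoch further out and reset churn to one.
	epoch, churn := queueExit(199, 198, 4, 4)
	fmt.Println(epoch, churn) // 200 1
}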

View File

@@ -49,9 +49,11 @@ func TestInitiateValidatorExit_AlreadyExited(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, 0, 199, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: 199, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, 0, exitInfo)
require.ErrorIs(t, err, validators.ErrValidatorAlreadyExited)
require.Equal(t, exitEpoch, epoch)
assert.Equal(t, primitives.Epoch(199), exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
v, err := newState.ValidatorAtIndex(0)
require.NoError(t, err)
assert.Equal(t, exitEpoch, v.ExitEpoch, "Already exited")
@@ -68,9 +70,11 @@ func TestInitiateValidatorExit_ProperExit(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
require.NoError(t, err)
require.Equal(t, exitedEpoch+2, epoch)
assert.Equal(t, exitedEpoch+2, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(2), exitInfo.Churn)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, v.ExitEpoch, "Exit epoch was not the highest")
@@ -88,9 +92,11 @@ func TestInitiateValidatorExit_ChurnOverflow(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 4)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 4}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
require.NoError(t, err)
require.Equal(t, exitedEpoch+3, epoch)
assert.Equal(t, exitedEpoch+3, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
// Because of exit queue overflow,
// validator who init exited has to wait one more epoch.
@@ -110,7 +116,8 @@ func TestInitiateValidatorExit_WithdrawalOverflows(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
_, _, err = validators.InitiateValidatorExit(t.Context(), state, 1, params.BeaconConfig().FarFutureEpoch-1, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: params.BeaconConfig().FarFutureEpoch - 1, Churn: 1}
_, err = validators.InitiateValidatorExit(t.Context(), state, 1, exitInfo)
require.ErrorContains(t, "addition overflows", err)
}
@@ -146,12 +153,11 @@ func TestInitiateValidatorExit_ProperExit_Electra(t *testing.T) {
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), ebtc)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, 0, 0) // exitQueueEpoch and churn are not used in electra
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, &validators.ExitInfo{}) // exit info is not used in electra
require.NoError(t, err)
// Expect that the exit epoch is the next available epoch with max seed lookahead.
want := helpers.ActivationExitEpoch(exitedEpoch + 1)
require.Equal(t, want, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, want, v.ExitEpoch, "Exit epoch was not the highest")
@@ -190,7 +196,7 @@ func TestSlashValidator_OK(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Phase0)
@@ -244,7 +250,7 @@ func TestSlashValidator_Electra(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Electra)
@@ -505,8 +511,8 @@ func TestValidatorMaxExitEpochAndChurn(t *testing.T) {
for _, tt := range tests {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
epoch, churn := validators.MaxExitEpochAndChurn(s)
require.Equal(t, tt.wantedEpoch, epoch)
require.Equal(t, tt.wantedChurn, churn)
exitInfo := validators.ExitInformation(s)
require.Equal(t, tt.wantedEpoch, exitInfo.HighestExitEpoch)
require.Equal(t, tt.wantedChurn, exitInfo.Churn)
}
}

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
@@ -13,7 +12,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -23,7 +21,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -33,7 +30,6 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
],
@@ -49,7 +45,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -53,30 +53,25 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchV
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
if len(blobSidecars) > 1 {
firstRoot := blobSidecars[0].BlockRoot()
for _, sidecar := range blobSidecars[1:] {
if len(sidecars) > 1 {
firstRoot := sidecars[0].BlockRoot()
for _, sidecar := range sidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(blobSidecars[0].Slot()), slots.ToEpoch(current)) {
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(blobSidecars[0])
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range blobSidecars {
for _, blobSidecar := range sidecars {
if err := entry.stash(&blobSidecar); err != nil {
return err
}
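
As the doc comment above notes, Persist only stashes sidecars in the working cache; verification and durable storage for the retention period are tied to a later IsDataAvailable call for the block. A hedged usage sketch follows: importWithBlobs is hypothetical, while Persist and IsDataAvailable are the methods shown in this diff.

// importWithBlobs stashes the sidecars received for a block, then gates block
// import on IsDataAvailable, which verifies and durably saves them.
func importWithBlobs(ctx context.Context, store *LazilyPersistentStoreBlob, current primitives.Slot, blk blocks.ROBlock, sidecars []blocks.ROBlob) error {
	if err := store.Persist(current, sidecars...); err != nil {
		return err
	}
	// Fails if the cached sidecars do not cover the block's KZG commitments.
	return store.IsDataAvailable(ctx, current, blk)
}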

View File

@@ -18,13 +18,16 @@ import (
)
func Test_commitmentsToCheck(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
fulu := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
windowSlots = windowSlots + primitives.Slot(params.BeaconConfig().FuluForkEpoch)
maxBlobs := params.LastNetworkScheduleEntry().MaxBlobsPerBlock
commits := make([][]byte, maxBlobs+1)
for i := 0; i < len(commits); i++ {
commits[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
cases := []struct {
name string
@@ -47,41 +50,40 @@ func Test_commitmentsToCheck(t *testing.T) {
{
name: "commitments within da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
d := util.NewBeaconBlockFulu()
d.Block.Body.BlobKzgCommitments = commits[:maxBlobs]
d.Block.Slot = fulu + 100
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
commits: commits[:maxBlobs],
slot: fulu + 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d := util.NewBeaconBlockFulu()
d.Block.Slot = fulu
// block is from the Fulu fork slot, "current slot" is fulu + window size + 1 (so outside the window)

d.Block.Body.BlobKzgCommitments = commits
d.Block.Body.BlobKzgCommitments = commits[:maxBlobs]
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
slot: fulu + windowSlots + 1,
},
{
name: "excessive commitments",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Slot = 100
d := util.NewBeaconBlockFulu()
d.Block.Slot = fulu + 100
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
// Double the number of commitments, assert that this is over the limit
d.Block.Body.BlobKzgCommitments = append(commits, d.Block.Body.BlobKzgCommitments...)
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
@@ -115,75 +117,69 @@ func Test_commitmentsToCheck(t *testing.T) {
func TestLazilyPersistent_Missing(t *testing.T) {
ctx := t.Context()
store := filesystem.NewEphemeralBlobStorage(t)
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[2]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NoError(t, as.Persist(ds, blobSidecars[2]))
err := as.IsDataAvailable(ctx, ds, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, scs[0]))
err = as.IsDataAvailable(ctx, 1, blk)
require.NoError(t, as.Persist(ds, blobSidecars[0]))
err = as.IsDataAvailable(ctx, ds, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(ds, blobSidecars...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
require.NoError(t, as.IsDataAvailable(ctx, ds, blk))
}
func TestLazilyPersistent_Mismatch(t *testing.T) {
ctx := t.Context()
store := filesystem.NewEphemeralBlobStorage(t)
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NoError(t, as.Persist(ds, blobSidecars[0]))
err := as.IsDataAvailable(ctx, ds, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
}
func TestLazyPersistOnceCommitted(t *testing.T) {
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 6)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(ds, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)
require.ErrorIs(t, as.Persist(ds, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
require.ErrorIs(t, as.Persist(ds, blobSidecars[0]), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 4)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
require.NoError(t, as.Persist(32+slotOOB, more[0]))
slotOOB := util.SlotAtEpoch(t, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
slotOOB += ds + 32
require.NoError(t, as.Persist(slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(1, more...))
require.NoError(t, as.Persist(ds, moreBlobSidecars...))
}
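
The "retention period" behaviour exercised in TestLazyPersistOnceCommitted comes from the WithinDAPeriod check inside Persist: sidecars whose epoch falls outside the blob retention window are silently ignored. Below is a rough, type-light sketch of that window check, with plain uint64 epochs standing in for primitives.Epoch; it is an illustration, not the Prysm implementation.

// withinDAPeriodSketch reports whether a sidecar's epoch is still inside the
// MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS retention window relative to the current
// epoch. params.WithinDAPeriod is the real check.
func withinDAPeriodSketch(sidecarEpoch, currentEpoch, minEpochsForBlobRequests uint64) bool {
	if currentEpoch < minEpochsForBlobRequests {
		return true // the chain is younger than one full retention window
	}
	return sidecarEpoch >= currentEpoch-minEpochsForBlobRequests
}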
type mockBlobBatchVerifier struct {

View File

@@ -1,213 +0,0 @@
package das
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until the IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
nodeID enode.ID
cache *dataColumnCache
newDataColumnsVerifier verification.NewDataColumnsVerifier
custodyGroupCount uint64
}
var _ AvailabilityStore = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
nodeID enode.ID,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
custodyGroupCount uint64,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
nodeID: nodeID,
cache: newDataColumnCache(),
newDataColumnsVerifier: newDataColumnsVerifier,
custodyGroupCount: custodyGroupCount,
}
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
if len(sidecars) == 0 {
return nil
}
dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
// It is safe to retrieve the first sidecar.
firstSidecar := dataColumnSidecars[0]
if len(sidecars) > 1 {
firstRoot := firstSidecar.BlockRoot()
for _, sidecar := range dataColumnSidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
return nil
}
key := cacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
entry := s.cache.ensure(key)
for _, sidecar := range dataColumnSidecars {
if err := entry.stash(&sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnsSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
}
// Return early for blocks that do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
// Get the root of the block.
blockRoot := block.Root()
// Build the cache key for the block.
key := cacheKey{slot: block.Block().Slot(), root: blockRoot}
// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
entry := s.cache.ensure(key)
// Delete the cache entry for the block at the end.
defer s.cache.delete(key)
// Set the disk summary for the block in the cache entry.
entry.setDiskSummary(s.store.Summary(blockRoot))
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
roDataColumns, err := entry.filter(blockRoot, blockCommitments)
if err != nil {
return errors.Wrap(err, "entry filter")
}
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return errors.Wrap(err, "sidecar KZG proof verified")
}
verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
return errors.Wrap(err, "verified RO data columns - should never happen")
}
if err := s.store.Save(verifiedRoDataColumns); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
// Compute the block epoch.
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return &safeCommitmentsArray{}, nil
}
// Retrieve the KZG commitments for the block.
kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Return early if there are no commitments in the block.
if len(kzgCommitments) == 0 {
return &safeCommitmentsArray{}, nil
}
// Retrieve peer info.
samplingSize := max(s.custodyGroupCount, samplesPerSlot)
peerInfo, _, err := peerdas.Info(nodeID, samplingSize)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
// Create a safe commitments array for the custody columns.
commitmentsArray := &safeCommitmentsArray{}
commitmentsArraySize := uint64(len(commitmentsArray))
for column := range peerInfo.CustodyColumns {
if column >= commitmentsArraySize {
return nil, errors.Errorf("custody column index %d too high (max allowed %d) - should never happen", column, commitmentsArraySize)
}
commitmentsArray[column] = kzgCommitments
}
return commitmentsArray, nil
}

View File

@@ -1,313 +0,0 @@
package das
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
)
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("mixed roots", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
{Slot: 2, Index: 2},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.ErrorIs(t, err, errMixedRoots)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := lazilyPersistentStoreColumns.cache.entries[key]
require.Equal(t, true, ok)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), entry.diskSummary.Count())
require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])
for i, roDataColumn := range entry.scs {
if map[int]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
indices := [...]uint64{1, 17, 19, 42, 75, 87, 102, 117}
dataColumnsParams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
key := cacheKey{root: root}
entry := lazilyPersistentStoreColumns.cache.ensure(key)
defer lazilyPersistentStoreColumns.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(&verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = lazilyPersistentStoreColumns.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := dataColumnStorage.Get(root, indices[:])
require.NoError(t, err)
summary := dataColumnStorage.Summary(root)
require.Equal(t, uint64(len(indices)), summary.Count())
require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestFullCommitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: windowSlots + 1,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = 100
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, numberOfColumns)
commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
require.NoError(t, err)
for _, commitments := range commitmentsArray {
require.DeepEqual(t, tc.commitments, commitments)
}
})
}
}
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, parameters []util.DataColumnParam) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, parameters)
roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {
roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
}
return roSidecars, roDataColumns
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }

View File

@@ -39,7 +39,7 @@ func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpe
entry := &blobCacheEntry{}
if len(onDisk) > 0 {
od := map[[32]byte][]int{blk.Root(): onDisk}
sumz := filesystem.NewMockBlobStorageSummarizer(t, od)
sumz := filesystem.NewMockBlobStorageSummarizer(t, slots.ToEpoch(slot), od)
sum := sumz.Summary(blk.Root())
entry.setDiskSummary(sum)
}

View File

@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}

View File

@@ -5,13 +5,12 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
errors "github.com/pkg/errors"
)
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, sc ...blocks.ROBlob) error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityStore = &MockAvailabilityStore{}
@@ -25,13 +24,9 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}
// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
func (m *MockAvailabilityStore) Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error {
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(current, blobSidecars...)
return m.PersistBlobsCallback(current, blobSidecar...)
}
return nil
}

View File

@@ -122,18 +122,18 @@ type BlobStorage struct {
func (bs *BlobStorage) WarmCache() {
start := time.Now()
if bs.layoutName == LayoutNameFlat {
log.Info("Blob filesystem cache warm-up started. This may take a few minutes.")
log.Info("Blob filesystem cache warm-up started. This may take a few minutes")
} else {
log.Info("Blob filesystem cache warm-up started.")
log.Info("Blob filesystem cache warm-up started")
}
if err := warmCache(bs.layout, bs.cache); err != nil {
log.WithError(err).Error("Error encountered while warming up blob filesystem cache.")
log.WithError(err).Error("Error encountered while warming up blob filesystem cache")
}
if err := bs.migrateLayouts(); err != nil {
log.WithError(err).Error("Error encountered while migrating blob storage.")
log.WithError(err).Error("Error encountered while migrating blob storage")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete")
}
// If any blob storage directories are found for layouts besides the configured layout, migrate them.

View File

@@ -21,7 +21,8 @@ import (
)
func TestBlobStorage_SaveBlobData(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, params.BeaconConfig().MaxBlobsPerBlock(1))
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, params.BeaconConfig().MaxBlobsPerBlock(ds))
testSidecars := verification.FakeVerifySliceForTest(t, sidecars)
t.Run("no error for duplicate", func(t *testing.T) {
@@ -127,21 +128,22 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
}
func TestBlobIndicesBounds(t *testing.T) {
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
fs := afero.NewMemMapFs()
root := [32]byte{}
okIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0)) - 1
writeFakeSSZ(t, fs, root, 0, okIdx)
okIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(es)) - 1
writeFakeSSZ(t, fs, root, es, okIdx)
bs := NewWarmedEphemeralBlobStorageUsingFs(t, fs, WithLayout(LayoutNameByEpoch))
indices := bs.Summary(root).mask
expected := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
expected := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
expected[okIdx] = true
for i := range expected {
require.Equal(t, expected[i], indices[i])
}
oobIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
writeFakeSSZ(t, fs, root, 0, oobIdx)
oobIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(es))
writeFakeSSZ(t, fs, root, es, oobIdx)
// This now fails at cache warmup time.
require.ErrorIs(t, warmCache(bs.layout, bs.cache), errIndexOutOfBounds)
}
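
These tests no longer hard-code the blob limit at slot 0: the maximum number of blobs per block depends on which fork a slot falls in, so the tests first derive a slot inside the relevant fork (here the Deneb or Electra fork slot) and size every mask and index bound from MaxBlobsPerBlock at that slot. A small sketch of the pattern, where maskForSlot is a hypothetical helper:

// maskForSlot sizes a per-block availability mask from the fork-dependent blob
// limit at the given slot, instead of a fixed constant.
func maskForSlot(slot primitives.Slot) []bool {
	return make([]bool, params.BeaconConfig().MaxBlobsPerBlock(slot))
}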

View File

@@ -6,14 +6,17 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestSlotByRoot_Summary(t *testing.T) {
noneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
allSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
firstSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
lastSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
oneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
ee := params.BeaconConfig().ElectraForkEpoch
es := util.SlotAtEpoch(t, ee)
noneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
allSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
firstSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
lastSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
oneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
firstSet[0] = true
lastSet[len(lastSet)-1] = true
oneSet[1] = true
@@ -53,7 +56,7 @@ func TestSlotByRoot_Summary(t *testing.T) {
for _, c := range cases {
if c.expected != nil {
key := bytesutil.ToBytes32([]byte(c.name))
sc.cache[key] = BlobStorageSummary{epoch: 0, mask: c.expected}
sc.cache[key] = BlobStorageSummary{epoch: ee, mask: c.expected}
}
}
for _, c := range cases {
@@ -73,6 +76,7 @@ func TestSlotByRoot_Summary(t *testing.T) {
}
func TestAllAvailable(t *testing.T) {
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
idxUpTo := func(u int) []int {
r := make([]int, u)
for i := range r {
@@ -125,13 +129,13 @@ func TestAllAvailable(t *testing.T) {
},
{
name: "out of bound is safe",
count: params.BeaconConfig().MaxBlobsPerBlock(0) + 1,
count: params.BeaconConfig().MaxBlobsPerBlock(es) + 1,
aa: false,
},
{
name: "max present",
count: params.BeaconConfig().MaxBlobsPerBlock(0),
idxSet: idxUpTo(params.BeaconConfig().MaxBlobsPerBlock(0)),
count: params.BeaconConfig().MaxBlobsPerBlock(es),
idxSet: idxUpTo(params.BeaconConfig().MaxBlobsPerBlock(es)),
aa: true,
},
{
@@ -143,7 +147,7 @@ func TestAllAvailable(t *testing.T) {
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
mask := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
mask := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
for _, idx := range c.idxSet {
mask[idx] = true
}

View File

@@ -100,6 +100,14 @@ type (
}
)
// DataColumnStorageReader is an interface to read data column sidecars from the filesystem.
type DataColumnStorageReader interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
Get(root [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error)
}
var _ DataColumnStorageReader = &DataColumnStorage{}
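
The new DataColumnStorageReader interface narrows what read-only consumers need to depend on: anything that only calls Summary and Get can take the interface instead of the concrete *DataColumnStorage, which also keeps it easy to fake in tests. A small illustration, where availableIndices is hypothetical:

// availableIndices reports which column indices are already on disk for a block,
// using only the read-side interface defined above.
func availableIndices(r DataColumnStorageReader, root [fieldparams.RootLength]byte) map[uint64]bool {
	return r.Summary(root).Stored()
}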
// WithDataColumnBasePath is a required option that sets the base path of data column storage.
func WithDataColumnBasePath(base string) DataColumnStorageOption {
return func(b *DataColumnStorage) error {
@@ -251,7 +259,6 @@ func (dcs *DataColumnStorage) Summary(root [fieldparams.RootLength]byte) DataCol
}
// Save saves data column sidecars into the database and asynchronously performs pruning.
// The returned channel is closed when the pruning is complete.
func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
startTime := time.Now()

View File

@@ -84,12 +84,6 @@ func (s DataColumnStorageSummary) Stored() map[uint64]bool {
return stored
}
// DataColumnStorageSummarizer can be used to receive a summary of metadata about data columns on disk for a given root.
// The DataColumnStorageSummary can be used to check which indices (if any) are available for a given block by root.
type DataColumnStorageSummarizer interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
}
type dataColumnStorageSummaryCache struct {
mu sync.RWMutex
dataColumnCount float64
@@ -98,8 +92,6 @@ type dataColumnStorageSummaryCache struct {
cache map[[fieldparams.RootLength]byte]DataColumnStorageSummary
}
var _ DataColumnStorageSummarizer = &dataColumnStorageSummaryCache{}
func newDataColumnStorageSummaryCache() *dataColumnStorageSummaryCache {
return &dataColumnStorageSummaryCache{
cache: make(map[[fieldparams.RootLength]byte]DataColumnStorageSummary),

View File

@@ -11,6 +11,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -60,12 +61,13 @@ func TestRootFromDir(t *testing.T) {
}
func TestSlotFromFile(t *testing.T) {
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: es + 0},
{slot: es + 2},
{slot: es + 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
@@ -243,39 +245,40 @@ func TestSlotFromBlob(t *testing.T) {
}
func TestIterationComplete(t *testing.T) {
de := params.BeaconConfig().DenebForkEpoch
targets := []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
path: "by-epoch/0/1234/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", de+1234, 0),
path: "by-epoch/%d/%d/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 0),
slotOffset: 31,
path: "by-epoch/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
path: "by-epoch/%d/%d/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 1),
slotOffset: 31,
path: "by-epoch/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
path: "by-epoch/%d/%d/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", -1+math.MaxUint64/32, 0),
slotOffset: 16,
path: "by-epoch/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
path: "by-epoch/%d/%d/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 1),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", -1+math.MaxUint64/32, 1),
slotOffset: 16,
path: "by-epoch/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/1.ssz",
path: "by-epoch/%d/%d/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/1.ssz",
},
{
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", 16777217, 0),
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", -1+math.MaxUint64/32, 0),
slotOffset: 16,
path: "by-epoch/4096/16777217/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
path: "by-epoch/%d/%d/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
},
{
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", 11235, 1),
path: "by-epoch/2/11235/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", de+11235, 1),
path: "by-epoch/%d/%d/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
},
}
fs := afero.NewMemMapFs()
@@ -299,6 +302,7 @@ func TestIterationComplete(t *testing.T) {
require.Equal(t, true, ok)
require.Equal(t, tar.ident.epoch, entry.epoch)
require.Equal(t, true, entry.HasIndex(tar.ident.index))
require.Equal(t, tar.path, byEpoch.sszPath(tar.ident))
path := fmt.Sprintf(tar.path, periodForEpoch(tar.ident.epoch), tar.ident.epoch)
require.Equal(t, path, byEpoch.sszPath(tar.ident))
}
}
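
The expected paths are now built with periodForEpoch, so the test no longer hard-codes the period directory. Judging from the old literal paths (epoch 1234 under period 0, 5330 under 1, 11235 under 2, 16777216 under 4096), epochs appear to be grouped 4096 to a period; the sketch below assumes that grouping and is not the actual layout code.

// sszPathSketch mirrors the by-epoch path scheme exercised above, assuming 4096
// epochs per period as implied by the old hard-coded paths. periodForEpoch in the
// real layout may be derived from config rather than a literal constant.
const epochsPerPeriodSketch = 4096

func sszPathSketch(epoch uint64, root string, index uint64) string {
	return fmt.Sprintf("by-epoch/%d/%d/%s/%d.ssz", epoch/epochsPerPeriodSketch, epoch, root, index)
}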

Some files were not shown because too many files have changed in this diff.