Compare commits

...

94 Commits

Author SHA1 Message Date
Bastin
00da23ba97 v2 2025-11-26 13:29:38 +01:00
Bastin
13ea9033b8 Apply suggestion from @rkapka
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-11-04 11:54:51 +01:00
Bastin
0de8fffd9e Apply suggestion from @rkapka
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-11-04 11:54:38 +01:00
Bastin
49b315dde0 address comments 2025-11-03 18:23:04 +01:00
Bastin
eae3bf4162 advanced logging 2025-10-22 13:38:39 +02:00
james-prysm
253f91930a changelog v6.1.3 (#15901)
* updating changelog

* adding changelog

* kasey's comment
2025-10-21 16:46:44 +00:00
terence
7c3e45637f Fix proposer to use advanced state for sync committee position calculation (#15905)
* Sync committee use correct state to calculate position

* Unit test
2025-10-21 15:29:46 +00:00
Manu NALEPA
96429c5089 updateCustodyInfoInDB: Use NumberOfCustodyGroups instead of NumberOfColumns. (#15908)
* `updateCustodyInfoInDB`: Add tests.

* `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`.

* Add changelog.

* Fix Potuz's comment.
2025-10-21 14:37:04 +00:00
satushh
d613f3a262 Update Earliest available slot when pruning (#15694)
* Update Earliest available slot when pruning

* bazel run //:gazelle -- fix

* custodyUpdater interface to avoid import cycle

* bazel run //:gazelle -- fix

* simplify test

* separation of concerns

* debug log for updating eas

* UpdateEarliestAvailableSlot function in CustodyManager

* fix test

* UpdateEarliestAvailableSlot function for FakeP2P

* lint

* UpdateEarliestAvailableSlot instead of UpdateCustodyInfo + check for Fulu

* fix test and lint

* bugfix: enforce minimum retention period in pruner

* remove MinEpochsForBlockRequests function and use from config

* remove modifying earliest_available_slot after data column pruning

* correct earliestAvailableSlot validation: allow backfill decrease but prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS (see the sketch at the end of this commit message)

* lint

* bazel run //:gazelle -- fix

* lint and remove unwanted debug logs

* Return a wrapped error, and let the caller decide what to do

* fix tests because updateEarliestSlot returns error now

* avoid re-doing computation in the test function

* lint and correct changelog

* custody updater should be a mandatory part of the pruner service

* ensure never increase eas if we are in the block requests window

* slot level granularity edge case

* update the value stored in the DB

* log tidy up

* use errNoCustodyInfo

* allow earliestAvailableSlot edit when custodyGroupCount doesn't change

* undo the minimal config change

* add context to CustodyGroupCount after merging from develop

* cosmetic change

* shift responsibility from caller to callee, protection for updateEarliestSlot. UpdateEarliestAvailableSlot returns cgc

* allow increase in earliestAvailableSlot only when custodyGroupCount also increases

* remove CustodyGroupCount as it is no longer needed as UpdateEarliestAvailableSlot returns cgc now

* proper place for log and name refactor

* test for Nil custody info

* allow decreasing earliest slot in DB (just like in memory)

* invert if statement to make more readable

* UpdateEarliestAvailableSlot for DB (equivalent of p2p's UpdateEarliestAvailableSlot) & undo changes made to UpdateCustodyInfo

* in UpdateEarliestAvailableSlot, no need to return unused values

* no need to log stored group count

* log.WithField instead of log.WithFields
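
A minimal sketch of the validation rule referenced above (hedged: the names are illustrative, not Prysm's actual API, and the MIN_EPOCHS_FOR_BLOCK_REQUESTS window check is collapsed into the custody-group-count flag for brevity):

```
package pruner

import "fmt"

// validateEarliestSlotUpdate: a proposed earliestAvailableSlot may always
// decrease (backfill), but may only increase when the custody group count
// also increases, so data inside the minimum retention window is never
// silently dropped.
func validateEarliestSlotUpdate(current, proposed uint64, cgcIncreased bool) error {
	if proposed <= current {
		return nil // backfill decrease or no-op: always allowed
	}
	if !cgcIncreased {
		return fmt.Errorf("refusing to raise earliest available slot from %d to %d", current, proposed)
	}
	return nil
}
```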
2025-10-21 13:54:52 +00:00
MozirDmitriy
5751dbf134 kv: write recovered state summaries to stateSummaryBucket (#15896)
* kv: write recovered state summaries to stateSummaryBucket

* Create MozirDmitriy_fix_kv-recover-state-summurt-bucket.md

* add a test
2025-10-21 11:21:10 +00:00
Potuz
426fbcc3b0 Add state diff serialization (#15250)
* Add serialization code for state diffs

Adds serialization code for state diffs.
Adds code to create and apply state diffs
Adds fuzz tests and benchmarks for serialization/deserialization

Co-authored-by: Claude <noreply@anthropic.com>

* Add Fulu support

* Review #1

* gazelle

* Fix some fuzzers

* Failing cases from the fuzzers in consensus-types/hdiff

* Fix more fuzz tests

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* add comparison tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Use ConvertToElectra in UpgradeToElectra

* Add comments on constants

* Fix readEth1Data

* remove colons from error messages

* Add design doc

* Apply suggestions from code review

Bast

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-10-20 21:52:32 +00:00
Manu NALEPA
a3baf98b05 VerifyDataColumnsSidecarKZGProofs: Check if sizes match. (#15892) 2025-10-20 17:06:13 +00:00
Jun Song
5a897dfa6b SSZ-QL: Add endpoints (BeaconState/BeaconBlock) (#15888)
* Move ssz_query objects into testing folder (ensuring test objects only used in test environment)

* Add containers for response

* Export sszInfo

* Add QueryBeaconState/Block

* Add comments and few refactor

* Fix merge conflict issues

* Return 500 when calculate offset fails

* Add test for QueryBeaconState

* Add test for QueryBeaconBlock

* Changelog :)

* Rename `QuerySSZRequest` to `SSZQueryRequest`

* Fix middleware hooks for RPC to accept JSON from client and return SSZ

* Convert to `SSZObject` directly from proto

* Move marshalling/calculating hash tree root part after `CalculateOffsetAndLength`

* Make nogo happy

* Add informing comment for using proto unsafe conversion

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-20 16:24:06 +00:00
Muzry
90190883bc Fixed metadata extraction on Windows by correctly splitting file paths (#15899)
* Fixed metadata extraction on Windows by correctly splitting file paths

* `TestExtractFileMetadata`: Refactor a bit.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-10-20 14:17:32 +00:00
terence
64ec665890 Fix sync committee subscription to use subnet indices instead of committee indices (#15885)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-17 19:03:53 +00:00
kasey
fdb06ea461 clear genesis state file when --(force-)clear-db is specified (#15883)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-17 14:03:15 +00:00
Manu NALEPA
0486631d73 Improve error message when the byte count read from disk when reading a data column sidecar is lower than expected. (Mostly, because the file is truncated.) (#15881)
* `VerifiedRODataColumnError`: Don't reuse Blob error.

* `VerifiedRODataColumnFromDisk`: Use a specific error when the count of read bytes is lower than expected.

* Add changelog.
2025-10-16 21:49:11 +00:00
Manu NALEPA
47764696ce randomPeer: Return if the context is cancelled when waiting for peers. (#15876)
* `randomPeer`: Return if the context is cancelled when waiting for peers.

* `randomPeer`: Refactor to reduce indentation.
2025-10-16 21:13:11 +00:00
Manu NALEPA
b2d350b988 Correctly advertise (in ENR and metadata) attestation subnets when using --subscribe-all-subnets. (#15880) 2025-10-16 21:12:00 +00:00
kasey
41e7607092 Decrease att batch deadline to 5ms for faster net prop (#15882)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-16 17:30:59 +00:00
Jun Song
cd429dc253 SSZ-QL: Access n-th element in List/Vector. (#15767)
* Add basic parsing feature for accessing by index

* Add more tests for 2d byte vector

* Add List case for access indexing

* Handle 2D bytes List example

* Fix misleading cases for CalculateOffsetAndLength

* Use elementSizes[index] if it is the last path element

* Add variable_container_list field for mocking attester_slashings in BeaconBlockBody

* Remove redundant protobuf message

* Better documentation

* Changelog

* Fix `expectedSize` of `VariableTestContainer`: as we added `variable_container_list` here

* Apply reviews from Radek
2025-10-15 16:11:12 +00:00
phrwlk
5ced1125f2 fix: reject out-of-range attestation committee index (#15855)
* reject committee index >= committees_per_slot in unaggregated attestation validation

* Create phrwlk_fix-attestation-committee-index-bound.md

* add a unit test

* fix test

* fixing test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-15 16:02:08 +00:00
Potuz
f67ca6ae5e Fix epoch transition on head event (#15871)
h/t to the NuConstruct team for reporting this. The event feed
incorrectly sends epoch transition flag on head events when the first
slot of the epoch is missing (or reorgs across epoch transition).

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-15 15:13:49 +00:00
Manu NALEPA
9742333f68 WithDataColumnRetentionEpochs: Use dataColumnRetentionEpoch instead of blobColumnRetentionEpoch. (#15872) 2025-10-15 14:44:49 +00:00
Manu NALEPA
c811fadf33 VerifyDataColumnSidecar: Check that there are not too many commitments. (#15859)
* `VerifyDataColumnSidecar`: Check that there are not too many commitments.

* `TestVerifyDataColumnSidecar`: Refactor using test cases.

* Add changelog.
2025-10-15 12:18:04 +00:00
Manu NALEPA
55b9448d41 dataColumnSidecarsByRangeRPCHandler: Gracefully close the stream if no data to return. (#15866)
* `TestDataColumnSidecarsByRangeRPCHandler`: Remove commented code.

* Remove double import

* `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return.

* Tests: Change `require` to `assert` in goroutines in tests.

https://pkg.go.dev/github.com/stretchr/testify/require#hdr-Assertions
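
For context on the `require` → `assert` switch: per the testify docs linked above, `require` assertions call `t.FailNow`, which must only run on the goroutine executing the test, while `assert` merely records the failure. A minimal sketch:

```
package sync_test

import (
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
)

func doStreamWork() error { return nil } // stand-in for the real goroutine body

func TestGoroutineAssertion(t *testing.T) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// assert records the failure and lets the goroutine return;
		// require would call t.FailNow, which is unsafe off the main
		// test goroutine.
		assert.NoError(t, doStreamWork())
	}()
	wg.Wait()
}
```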

* Add changelog.
2025-10-15 12:16:05 +00:00
Manu NALEPA
10f8d8c26e Fix /eth/v1/beacon/blob_sidecars/ beacon API if the fulu fork epoch is set to the far future epoch. (#15867)
* Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the fulu fork epoch is set to the far future epoch.

* Fix Terence's comment.

* adding a test

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-14 21:38:12 +00:00
Jun Song
4eab41ea4c SSZ-QL: use fastssz-generated SizeSSZ method & clarify Size method (#15864)
* Add SizeSSZ as a member of SSZObject

* Temporarily rename dereferencePointer function

* Fix analyzeType: use reflect.Value for analyzing

* Fix PopulateVariableLengthInfo: change function signature & reset pointer

* Remove Container arm for Size function as it'll be handled in the previous branch

* Remove OffsetBytes function in listInfo

* Refactor and document codes

* Remove misleading "fixedSize" concept & Add Uint8...64 SSZTypes

* Add size testing

* Move TestSSZObject_Batch and rename it as TestHashTreeRoot

* Changelog :)

* Rename endOffset to fixedOffset

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-14 17:33:52 +00:00
Radosław Kapka
683608e34a Improve returning individual message errors from Beacon API (#15835)
* Improve returning individual message errors from Beacon API

* changelog <3

* fix test

* add debug logs

* batch broadcast errors

* use logrus fields

* capitalize log messages
2025-10-14 15:22:00 +00:00
Manu NALEPA
fbbf2a1404 HasAtLeastOneIndex: Check the index is not too high. (#15865) 2025-10-14 14:39:38 +00:00
Potuz
82f556c50f Remove redundant check (#15844)
* Remove redundant check

* changelog

* fix gazelle
2025-10-14 12:39:19 +00:00
Radosław Kapka
c88aa77ac1 Display non-JSON error messages (#15860)
* Display non-JSON error messages

* changelog <3
2025-10-14 12:08:21 +00:00
fernantho
0568bec935 SSZ-QL: use FastSSZ-generated HashTreeRoot through SSZObject in sszInfo (#15805)
* stored CL object to enable the usage Fastssz's HashTreeRoot(). added basic test

* refactorization - using interfaces instead of storing original object

* added tests covering ssz custom types

* renamed hash_tree_root to ssz_interface as it contains MarshalSSZ and UnmarshalSSZ functions

* run gazelle

* renamed test and improved comments

* refactored test and extend to marshalSSZ and UnmarshalSSZ

* added changelog

* updated comment

* Changed SSZIface name to SSZObject. Removed MarshalSSZ and UnmarshalSSZ function signatures from interface as they are not used still. Refactored tests.

* renamed file ssz_interface.go to ssz_object.go. merge test from ssz_interface_test.go into query_test.go.
reordered source SSZObject field from sszInfo struct

* sticked SSZObject interface to HashTreeRoot() function, the only one needed so far

* run gazelle :)

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-13 21:39:15 +00:00
Potuz
e463bcd1e1 Mark block as invalid in gossip if it fails signature check (#15847)
* Mark block as invalid in gossip if it fails signature check

* Add tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-13 20:29:27 +00:00
terence
5f8eb69201 Add proper handling for submit blind block 502 error (#15848)
* Add proper handling for builder relay 502 BadGateway errors

* James feedback

* Change wording
2025-10-13 18:36:06 +00:00
Marco Munizaga
4b98451649 fix allocation size of proofs in ComputeCellsAndProofsFromStructured (#15809)
* fix allocation size of proofs in ComputeCellsAndProofsFromStructured

the preallocated slice for KZG Proofs was 48x bigger than it needed to
be.
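
To illustrate the bug class (a hedged sketch; the 48-byte size of a compressed KZG proof is the only given, the names are not the actual code):

```
package kzg

const bytesPerProof = 48 // a compressed BLS12-381 G1 point (one KZG proof)

type Proof [bytesPerProof]byte

func allocProofs(numCells int) []Proof {
	// Wrong: slice capacity counts elements, not bytes, so multiplying by
	// the proof size reserves 48x the needed memory.
	_ = make([]Proof, 0, numCells*bytesPerProof)

	// Right: one proof per cell.
	return make([]Proof, 0, numCells)
}
```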

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-13 17:16:02 +00:00
satushh
0aa248e663 Fix ignored gossip attestation validation for early arriving attestations (#15840)
* debug log

* undo debug logs

* fix: return if block not available

* changelog
2025-10-12 19:19:45 +00:00
Preston Van Loon
6973cd2c5f Changelog v6.1.2 (#15843)
* Changelog for v6.0.5

* Changelog for v6.1.0

* Changelog for v6.1.2

* Removing old fragments
2025-10-10 21:04:38 +00:00
Potuz
4e47905884 Do not mark blocks as invalid unnecessarily (#15846) 2025-10-10 20:55:29 +00:00
Manu NALEPA
a94ea1e5f5 Add grandine in known agents (#15829)
* `knownAgentVersions`: Sort.

* `knownAgentVersions`: Add Grandine.

* Add changelog.
2025-10-09 19:30:26 +00:00
james-prysm
c0ad87df4b fixing web3signer for e2e (#15832)
* fixing web3signer for e2e

* fixing tests

* gaz

* reverting fix

* extra space
2025-10-09 19:21:56 +00:00
james-prysm
515590e7fe making block event only send on certain success (#15814)
* making block event only send on certain success

* potuz's comment

* potuz comment

* test
2025-10-09 16:42:48 +00:00
terence
83a171b439 Process pending atts after pending blocks clear (#15824) 2025-10-09 14:27:03 +00:00
Manu NALEPA
4946b007ab Data column sidecars fetch: Adjust log levels. (#15820) 2025-10-09 10:34:04 +00:00
Radosław Kapka
3f10439de1 Do not verify block data when calculating rewards (#15819)
* Do not verify block data when calculating rewards

* remove `Get` from function names

* changelog <3

* do not verify sync committee sig in handler

* Revert "remove `Get` from function names"

This reverts commit 770a89d990.

* typo fix

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-08 21:40:48 +00:00
james-prysm
5b20352ac6 cleaning up processAggregate (#15823)
* cleaning up some code

* kasey feedback

* further simplifying

* kasey's suggestion
2025-10-08 18:34:32 +00:00
Galoretka
d5ca327c30 feature: Use service context and continue on slasher attestation errors (#15803)
* feature: Use service context and continue on slasher attestation errors

* Create Galoretka_feature-slasher-feed-use-service-ctx

* Rename Galoretka_feature-slasher-feed-use-service-ctx to Galoretka_feature-slasher-feed-use-service-ctx.md

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-08 15:47:11 +00:00
terence
38955fd08c Optimize pending attestation processing by adding batching (#15801)
* Optimize pending attestation processing by adding batching

* Update beacon-chain/sync/pending_attestations_queue.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/sync/pending_attestations_queue.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Add root for debug

* Change it to map

* Dont need receiver

* Use two slices

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-08 05:18:13 +00:00
kasey
71f05b597f Use NetworkSchedule config to determine max blobs at epoch (#15714)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-08 04:02:05 +00:00
kasey
0d742c6f88 make registerSubscribers idempotent (#15779)
* make registerSubscribers idempotent

* clean up debugging changes

* test fix

* rm unused var

* sobbing noises

* naming feedback and separate test for digestActionDone

* gazelle

* manu's feedback

* refactor to enable immediate sub after init sync

* preston comment re panic causing db corruption risk

* ensure we check that we're 1 epoch past the fork

* manu feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-08 01:09:22 +00:00
Manu NALEPA
06b5409ff0 When looking for peers, skip peers with error instead of aborting the whole function (#15815)
* `findPeersWithSubnets`: If the `filter` function returns an error for a given peer, log an error and skip the peer instead of aborting the whole function.

* `computeIndicesByRootByPeer`: If the loop returns an error for a given peer, log an error and skip the peer instead of aborting the whole function.
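
The shape of the change, as a sketch (peer and filter types are stand-ins, not the actual signatures):

```
package p2p

import "log"

func selectPeers(peers []string, filter func(string) (bool, error)) []string {
	var selected []string
	for _, p := range peers {
		ok, err := filter(p)
		if err != nil {
			// Log and skip this peer instead of failing the whole scan.
			log.Printf("skipping peer %s: %v", p, err)
			continue
		}
		if ok {
			selected = append(selected, p)
		}
	}
	return selected
}
```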

* Add changelog.

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-07 19:13:08 +00:00
Sahil Sojitra
9805e90d73 chore: refactor to use builtin max/min (#15817)
* passed the tests with inbuilt max func

* tested min changes

* fix bazel files

* added changelog
2025-10-07 19:02:47 +00:00
Manu NALEPA
537f3cb863 StatusV2: Send correct earliest available slot. (#15818)
* `buildStatusFromStream`: Use parent context.

* Status tests: Use `t.Context` everywhere.

* `buildStatusFromStream`: Respond statusV2 only if Fulu is enabled.

Without doing so, the earliest available slot is never defined, and then `s.cfg.p2p.EarliestAvailableSlot` will block until the context is canceled.
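
A sketch of that blocking shape (hypothetical, not the actual Prysm code): the getter waits on an initialization channel, and only context cancellation can unblock it early.

```
package p2p

import "context"

type Service struct {
	custodyInitialized    chan struct{} // closed once custody info is first set
	earliestAvailableSlot uint64
}

func (s *Service) EarliestAvailableSlot(ctx context.Context) (uint64, error) {
	select {
	case <-s.custodyInitialized:
		return s.earliestAvailableSlot, nil
	case <-ctx.Done():
		// Pre-Fulu the slot is never set, so without the statusV2 guard
		// described above this branch is the only way out.
		return 0, ctx.Err()
	}
}
```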

* Send our real earliest available slot when sending a Status request post Fulu instead of `0`.

* Add changelog.
2025-10-07 18:45:47 +00:00
Manu NALEPA
b45e87abd6 Move some logs to trace (#15816)
* Prettify logs for byRange/byRoot data column sidecar requests.

* Moving byRoot/byRange data column sidecars requests from peers to TRACE level.

* Move "Peer requested blob sidecar by root not found in db" in TRACE.

* Add changelog.

* Fix Kasey's comment.

* Apply Kasey's suggestion.
2025-10-07 18:44:55 +00:00
Preston Van Loon
4c4b12cca7 Clear disconnected peers from connected_libp2p_peers (#15807)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-06 19:59:42 +00:00
james-prysm
aabded250f fixing Origin block root is not set error log (#15808)
* fixing error log

* kasey's suggestion

* kasey's feedback replacing comment
2025-10-06 15:47:27 +00:00
Manu NALEPA
4f9e56fc70 Custody Info: Waits for initialization (#15804)
* Revert "`createLocalNode`: Wait before retrying to retrieve the custody group count if not present. (#15735)"

This reverts commit 4585cdc932.

* Revert "Fix no custody info available at start (#15732)"

This reverts commit 80eba4e6dd.

* Add context to `EarliestAvailableSlot` and `CustodyGroupCount` (no functional change).

* Remove double imports.

* `EarliestAvailableSlot` and `CustodyGroupCount`: Wait for custody info to be initialized.
2025-10-06 10:55:48 +00:00
james-prysm
2a86132994 removing old unused configs and hiding prysm specific configs (#15797)
* removing old unused configs and hiding configs to align with other clients

* gaz and removing unneeded from test

* fixing test

* fixing test
2025-10-03 18:32:44 +00:00
kasey
74c47e25a9 exclude unscheduled forks from the schedule (#15799)
* exclude unscheduled forks from the schedule

* update tests that relied on fulu being far future

* don't mess with the version->epoch mapping

* LastFork cheats the tests by filtering by far_future_epoch

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-03 15:29:27 +00:00
Bastin
28eb1a4c3c add delay before broadcasting lc objects (#15776)
* add delay before broadcasting lc objects

* address comments

* address comments

* reduce test runtime
2025-10-03 13:02:46 +00:00
terence
1f89394727 Return early if there's no block for data column sidecar (#15802) 2025-10-03 02:59:26 +00:00
kasey
bf1095c782 ignore version|digest mismatch if far future (#15798)
* ignore version/digest mismatch if far future

* bonus: this log generates a lot of noise, bump it down to trace

* unit test

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-02 19:57:54 +00:00
Manu NALEPA
b24fe0d23a requestAndSaveMissingDataColumnSidecars: Fix log (#15794) 2025-10-02 16:38:00 +00:00
Bastin
cbe50269de Change LC p2p validation rules (#15783)
* compare incoming lc message with locally computed object

* fix logs

* add comment
2025-10-02 15:04:33 +00:00
Manu NALEPA
4ed2953fcf inclusionProofKey: Include the commitments in the key. (#15795)
* `inclusionProofKey`: Include the commitments in the key.

* Fix Potuz's comment.

* Update beacon-chain/verification/data_column.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Fix Potuz's comment.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-10-02 13:17:11 +00:00
Potuz
915837d059 Process pending on block (#15791)
* Process pending attestations with block insertion

* fix tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Terence's review

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 00:19:04 +00:00
Potuz
26b276660f Avoid unnecessary calls to ExitInformation() (#15764)
* Avoid unnecessary calls to ExitInformation()

ExitInformation runs a loop over the whole validator set. This is needed
in case there are slashings or exits to be processed in a block (though we
could cache this or avoid it entirely post-Electra). This PR removes these
calls to the function during the normal state transition. h/t to
@terencechain for finding this bug.

In addition, when processing withdrawal requests and registry updates, we
kept recomputing the exit information even though the function that updates
the state already takes care of tracking and updating the right exit
information. So this PR removes the calls that compute this exit
information in a loop. Note that this bug has been present since before we
had an `ExitInformation()` function, so I will document it here to help the
reviewer.

Our previous behavior was to do this in a loop:

```
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
```

This is a bit problematic since `ExitInformation` loops over the whole validator set to compute the exit information (and the total active balance), and then `InitiateValidatorExit` recomputes the total active balance, looping again over the whole validator set and overwriting the pointer returned by `ExitInformation`.

On the other hand, the function `InitiateValidatorExit` does mutate the state `st` itself, so each call to `ExitInformation(st)` may actually return a different pointer.

The function `ExitInformation` computes as follows:

```
	err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
		e := val.ExitEpoch()
		if e != farFutureEpoch {
			if e > exitInfo.HighestExitEpoch {
				exitInfo.HighestExitEpoch = e
				exitInfo.Churn = 1
			} else if e == exitInfo.HighestExitEpoch {
				exitInfo.Churn++
			}
		}
		return nil
	})
```

So it simply increases the churn for each validator whose exit epoch equals the highest exit epoch.

The function `InitiateValidatorExit` mutates this pointer in the following way:

If the state is post-Electra, it completely disregards this pointer, computes the highest exit epoch, and updates the churn unconditionally, so `exitInfo.HighestExitEpoch` will always have the right value and does not even need to be computed beforehand; we could even avoid the first loop entirely. If the state is pre-Electra, then the function itself correctly updates the exit info for the next iteration.
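
In code, the shape of the fix might look like this (a hedged sketch extending the snippet quoted above, not the exact diff):

```
// Before: calling validators.ExitInformation(st) inside the loop rescanned
// the whole validator set on every iteration.
// After: compute the exit information once; pre-Electra,
// InitiateValidatorExit keeps it current for the next iteration, and
// post-Electra it ignores the pointer anyway.
exitInfo := validators.ExitInformation(st)
for _, vIdx := range exitingIndices {
	st, err = validators.InitiateValidatorExit(ctx, st, vIdx, exitInfo)
	if err != nil {
		return nil, err
	}
}
```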

* Only care about exits pre-Electra

* Update beacon-chain/core/transition/transition_no_verify_sig.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* Radek's review

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2025-10-02 00:17:39 +00:00
james-prysm
580509f2f4 attempting to improve duties v2 (#15784)
* attempting to improve duties v2

* removing go routine

* changelog

* unnecessary variable

* fixing test

* small optimization exiting early in the CommitteeAssignments function

* fixing small bug

* fixes performance issues with duties v2

* fixed changelog

* gofmt
2025-10-01 20:40:14 +00:00
Potuz
be144da099 fix test race conditions (#15792)
Fix race condition where svc.verifierWaiter was being set after
svc.Start() was already running, causing a nil pointer dereference.
2025-10-01 19:29:20 +00:00
Manu NALEPA
cc2565a422 Update c-kzg-4844 to v2.1.5 (#15708)
* Sort sidecars by index before calling `RecoverCellsAndKZGProofs`.

Reason: Starting at `c-kzg-4844 v2.1.2`, the library needs input to be sorted.
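
A minimal sketch of that pre-sort (the sidecar type and field name are assumptions for illustration):

```
package peerdas

import "sort"

type dataColumnSidecar struct {
	ColumnIndex uint64 // cells and proofs omitted
}

// sortSidecarsByIndex orders sidecars by column index, which
// c-kzg-4844 v2.1.2+ requires before RecoverCellsAndKZGProofs runs.
func sortSidecarsByIndex(sidecars []*dataColumnSidecar) {
	sort.Slice(sidecars, func(i, j int) bool {
		return sidecars[i].ColumnIndex < sidecars[j].ColumnIndex
	})
}
```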

* Update `c-kzg-4844` to `v2.1.3`

* Update `c-kzg-4844` to `v2.1.5`
2025-10-01 14:24:22 +00:00
Manu NALEPA
d86353ea9d P2P service: Remove unused clock. (#15786) 2025-10-01 11:50:30 +00:00
Manu NALEPA
45d6002411 Aggregate logs when broadcasting data column sidecars (#15748)
* Aggregate logs when broadcasting data column sidecars

* Fix James' comment.
2025-10-01 08:21:10 +00:00
Muzry
08c855fd4b fix ProduceSyncCommitteeContribution error on invalid index (#15770)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-01 02:20:55 +00:00
Justin Traglia
7c86b5d737 Add sources for compute_fork_digest to specrefs (#15699)
* Add sources for compute_fork_digest to specrefs

* Delete non-existant exception keys

* Lint specref files

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-01 01:35:41 +00:00
james-prysm
023287f7df properly skipping omitted values for /eth/v1/config/spec (#15777)
* properly skipping omitted values

* simplifying

* Update beacon-chain/rpc/eth/config/handlers.go

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

* Update beacon-chain/rpc/eth/config/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* removing omitempty support based on radek's feedback

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-30 16:03:36 +00:00
Preston Van Loon
1432867c92 Github CI: Update runs-on to ubuntu-4 (#15778) 2025-09-30 03:39:20 +00:00
Preston Van Loon
18efd620dc Add strip=always to release builds (#15774) 2025-09-29 17:00:28 +00:00
james-prysm
6139d58fa5 fixing config string parsing regression (#15773)
* adding string parsing and test

* gaz
2025-09-29 16:15:01 +00:00
Sahil Sojitra
0ea5e2cf9d refactor to use reflect.TypeFor (#15627)
* refactor to use reflect.TypeFor

* added changelog fragment file

* update changelog

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-26 17:26:44 +00:00
Jun Song
29fe707143 SSZ-QL: Support nested List type (#15725)
* Add nested 2d list cases

* Add elementSize member for listInfo to track each element's byte size

* Fix misleading variable in RunStructTest

* Changelog

* Regen pb file

* Update encoding/ssz/query/list.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Rename elementSize into plural

* Update changelog/syjn99_ssz-ql-nested-list.md

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2025-09-26 14:23:22 +00:00
kasey
d68196822b additional log information around invalid payloads (#15754)
* additional log information around invalid payloads

* fix test with reversed require.ErrorIs args

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-26 02:33:57 +00:00
terence
924fe4de98 Restructure golangci-lint config: explicit opt-in (#15744)
* update golangci-lint configuration to enable basic linters only

* Add back formatter

* feedback

* Add nolint
2025-09-25 17:01:22 +00:00
Preston Van Loon
fe9dd255c7 slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations (#15746)
* slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions.
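
This is the standard context-deadline pattern, roughly (a sketch, not the exact code):

```
package slasherkv

import (
	"context"
	"time"
)

// pruneWithDeadline bounds one pruning pass to a minute so a single bolt
// transaction cannot grow without limit.
func pruneWithDeadline(ctx context.Context, prune func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()
	return prune(ctx)
}
```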

* Fix CI

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-25 15:46:28 +00:00
Potuz
83d75bcb78 Update quick-go (#15749) 2025-09-25 11:02:10 +00:00
james-prysm
ba5f7361ad flipping if statement check to fix metric (#15743) 2025-09-24 20:43:39 +00:00
Manu NALEPA
8fa956036a Update go.mod to v1.25.1. (#15740)
* Updated go.mod to v1.25.1.

Run `golangci-lint migrate`
GolangCI lint: Add `noinlineerr`.
Run `golangci-lint run --config=.golangci.yml`.
`golangci`: Add `--new`.

* `go.yml`: Added `fetch-depth: 0` to have something working with merge queue according to

https://github.com/golangci/golangci-lint-action/issues/956

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-24 19:57:17 +00:00
james-prysm
58ce1c25f5 fixing error handling of unfound block (#15742) 2025-09-24 19:03:55 +00:00
Potuz
98532a2df3 update spectests to 1.6.0-beta.0 (#15741)
* update spectests to 1.6.0-beta.0

* start fixing ethspecify
2025-09-24 18:13:46 +00:00
Ragnar
08be6fde92 fix: replace fmt.Printf with proper test error handling in web3signer… (#15723)
* fix: replace fmt.Printf with proper test error handling in web3signer test

* Update keymanager_test.go

* Create DeVikingMark_fix-web3signer-test-error-handling.md
2025-09-23 19:22:11 +00:00
Manu NALEPA
4585cdc932 createLocalNode: Wait before retrying to retrieve the custody group count if not present. (#15735) 2025-09-23 15:46:39 +00:00
Preston Van Loon
aa47435c91 Update eth clients pinned deps (#15733)
* Update eth-clients/hoodi

* Update eth-clients/holesky

* Update eth-clients/sepolia

* Changelog fragment

* Remove deprecated and unused eth2-networks dependency.
2025-09-23 14:26:46 +00:00
Manu NALEPA
80eba4e6dd Fix no custody info available at start (#15732)
* Change wrap message to avoid the could not...: could not...: could not... effect.

Reference: https://github.com/uber-go/guide/blob/master/style.md#error-wrapping.
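
The referenced guideline, in a nutshell (illustrative Go, not this PR's code): wrap with terse context at each level and let the message compose, instead of repeating "could not" everywhere.

```
package p2p

import (
	"errors"
	"fmt"
)

func openStore() error { return errors.New("disk unavailable") }

func loadCustodyInfo() error {
	if err := openStore(); err != nil {
		return fmt.Errorf("open store: %w", err) // terse, no "could not" prefix
	}
	return nil
}

// A caller wrapping once more yields:
// "load custody info: open store: disk unavailable"
```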

* Log: remove the period at the end of the last sentence.

* Dirty quick fix to ensure that the custody group count is set at P2P service start.

A real fix would involve a chan to implement a proper synchronization scheme.

* Add changelog.
2025-09-23 14:09:29 +00:00
Manu NALEPA
606294e17f Improve logging of data column sidecars (#15728)
* Implement `SortedSliceFromMap`, `PrettySlice`, and `SortedPrettySliceFromMap`.

* Use `SortedPrettySliceFromMap` and `SortedSliceFromMap` when needed.
2025-09-23 02:10:23 +00:00
terence
977e923692 Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets (#15721)
* Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets

* Update commits

* Add fulu.yaml to the presets file path loader

* Add the following placeholder fields:
- CELLS_PER_EXT_BLOB
- FIELD_ELEMENTS_PER_CELL
- FIELD_ELEMENTS_PER_EXT_BLOB
- KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-09-23 02:10:20 +00:00
553 changed files with 16043 additions and 4916 deletions

View File

@@ -34,6 +34,7 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
 build:release --compilation_mode=opt
 build:release --stamp
 build:release --define pgo_enabled=1
+build:release --strip=always
 # Build binary with cgo symbolizer for debugging / profiling.
 build:cgo_symbolizer --copt=-g

View File

@@ -1,4 +1,4 @@
-FROM golang:1.24-alpine
+FROM golang:1.25.1-alpine
 COPY entrypoint.sh /entrypoint.sh

View File

@@ -9,7 +9,7 @@ on:
 jobs:
   run-changelog-check:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     steps:
       - name: Checkout source code
         uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0

View File

@@ -3,7 +3,7 @@ on: [push, pull_request]
 jobs:
   check-specrefs:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     steps:
       - name: Checkout repository

View File

@@ -10,7 +10,7 @@ on:
 jobs:
   clang-format-checking:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     steps:
       - uses: actions/checkout@v2
       # Is this step failing for you?

View File

@@ -10,13 +10,13 @@ permissions:
 jobs:
   list:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     timeout-minutes: 180
     steps:
       - uses: actions/checkout@v3
       - uses: actions/setup-go@v4
         with:
-          go-version: '1.23.5'
+          go-version: '1.25.1'
       - id: list
         uses: shogo82148/actions-go-fuzz/list@v0
         with:
@@ -25,7 +25,7 @@ jobs:
       fuzz-tests: ${{steps.list.outputs.fuzz-tests}}
   fuzz:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     timeout-minutes: 360
     needs: list
     strategy:
@@ -36,7 +36,7 @@ jobs:
       - uses: actions/checkout@v3
       - uses: actions/setup-go@v4
         with:
-          go-version: '1.23.5'
+          go-version: '1.25.1'
       - uses: shogo82148/actions-go-fuzz/run@v0
         with:
           packages: ${{ matrix.package }}

View File

@@ -11,7 +11,7 @@ on:
 jobs:
   formatting:
     name: Formatting
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -22,7 +22,7 @@ jobs:
   gosec:
     name: Gosec scan
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     env:
       GO111MODULE: on
     steps:
@@ -31,7 +31,7 @@ jobs:
       - name: Set up Go 1.24
         uses: actions/setup-go@v4
         with:
-          go-version: '1.24.0'
+          go-version: '1.25.1'
       - name: Run Gosec Security Scanner
         run: | # https://github.com/securego/gosec/issues/469
           export PATH=$PATH:$(go env GOPATH)/bin
@@ -40,31 +40,31 @@ jobs:
   lint:
     name: Lint
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     steps:
       - name: Checkout
         uses: actions/checkout@v4
-      - name: Set up Go 1.24
-        uses: actions/setup-go@v4
         with:
-          go-version: '1.24.0'
-        id: go
+          fetch-depth: 0
+      - name: Set up Go 1.25.1
+        uses: actions/setup-go@v5
+        with:
+          go-version: '1.25.1'
       - name: Golangci-lint
-        uses: golangci/golangci-lint-action@v5
+        uses: golangci/golangci-lint-action@v8
        with:
-          version: v1.64.5
-          args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
+          version: v2.4
   build:
     name: Build
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     steps:
-      - name: Set up Go 1.x
+      - name: Set up Go 1.25.1
         uses: actions/setup-go@v4
         with:
-          go-version: '1.24.0'
+          go-version: '1.25.1'
         id: go
       - name: Check out code into the Go module directory
View File

@@ -8,7 +8,7 @@ on:
 jobs:
   Horusec_Scan:
     name: horusec-Scan
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-4
     if: github.ref == 'refs/heads/develop'
     steps:
       - name: Check out code
@@ -19,4 +19,4 @@ jobs:
       - name: Running Security Scan
         run: |
           curl -fsSL https://raw.githubusercontent.com/ZupIT/horusec/main/deployments/scripts/install.sh | bash -s latest
-          horusec start -t="10000" -p="./" -e="true" -i="**/crypto/bls/herumi/**, **/**/*_test.go, **/third_party/afl/**, **/crypto/keystore/key.go"
+          horusec start -t="10000" -p="./" -e="true" -i="**/crypto/bls/herumi/**, **/**/*_test.go, **/third_party/afl/**, **/crypto/keystore/key.go"

View File

@@ -1,90 +1,41 @@
+version: "2"
 run:
   timeout: 10m
-  go: '1.23.5'
-issues:
-  exclude-files:
-    - validator/web/site_data.go
-    - .*_test.go
-  exclude-dirs:
-    - proto
-    - tools/analyzers
+  go: 1.23.5
 linters:
-  enable-all: true
-  disable:
-    # Deprecated linters:
+  enable:
+    - errcheck
+    - ineffassign
+    - govet
   # Disabled for now:
-    - asasalint
-    - bodyclose
-    - containedctx
-    - contextcheck
-    - cyclop
-    - depguard
-    - dogsled
-    - dupl
-    - durationcheck
-    - errname
-    - err113
-    - exhaustive
-    - exhaustruct
-    - forbidigo
-    - forcetypeassert
-    - funlen
-    - gci
-    - gochecknoglobals
-    - gochecknoinits
-    - goconst
-    - gocritic
-    - gocyclo
-    - godot
-    - godox
-    - gofumpt
-    - gomoddirectives
-    - gosec
-    - inamedparam
-    - interfacebloat
-    - intrange
-    - ireturn
-    - lll
-    - maintidx
-    - makezero
-    - mnd
-    - musttag
-    - nakedret
-    - nestif
-    - nilnil
-    - nlreturn
-    - noctx
-    - nolintlint
-    - nonamedreturns
-    - nosprintfhostport
-    - perfsprint
-    - prealloc
-    - predeclared
-    - promlinter
-    - protogetter
-    - recvcheck
-    - revive
-    - spancheck
+  disable:
     - staticcheck
-    - stylecheck
-    - tagalign
-    - tagliatelle
-    - thelper
-    - unparam
-    - usetesting
-    - varnamelen
-    - wrapcheck
-    - wsl
+    - unused
+  exclusions:
+    generated: lax
+    presets:
+      - comments
+      - common-false-positives
+      - legacy
+      - std-error-handling
+    paths:
+      - validator/web/site_data.go
+      - .*_test.go
+      - proto
+      - tools/analyzers
+      - third_party$
+      - builtin$
+      - examples$
-linters-settings:
-  gocognit:
-    # TODO: We should target for < 50
-    min-complexity: 65
-output:
-  print-issued-lines: true
-  sort-results: true
+formatters:
+  enable:
+    - gofmt
+    - goimports
+  exclusions:
+    generated: lax
+    paths:
+      - validator/web/site_data.go
+      - .*_test.go
+      - proto
+      - tools/analyzers
+      - third_party$
+      - builtin$
+      - examples$

View File

@@ -4,6 +4,345 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20
This release has several important beacon API and p2p fixes.
### Added
- Add Grandine to P2P known agents. (Useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling roots calculation for generated types while avoiding duplicate logic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)
### Changed
- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle submit blind block returning 502 errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve returning individual message errors from Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify `Size` method with more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
### Fixed
- Use service context and continue on slasher attestation errors (#15803). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event is no longer sent on certain block processing failures; it is now sent only when the block is non-canonical, when the block is canonical but getFCUArgs fails, or on full success. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e, issues caused due to a regression on old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid from ErrNotDescendantOfFinalized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejecting attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the fulu fork epoch is set to the far future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- reject committee index >= committees_per_slot in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)
## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10
This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.
### Added
- Added a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15746)
- Added expected delay before broadcasting light client p2p messages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15776)
### Changed
- Replaced reflect.TypeOf with reflect.TypeFor. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15627)
- Bazel builds with `--config=release` now properly apply `--strip=always` to strip debug symbols from the release assets. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15774)
- Add sources for compute_fork_digest to specrefs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15699)
- Aggregate logs when broadcasting data column sidecars (one per root instead of one per sidecar). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15748)
- `c-kzg-4844`: Update from `v2.1.1` to `v2.1.5`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15708)
- Process pending attestations as soon as the block arrives. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15791)
- Compare received LC messages over gossipsub with locally computed ones before forwarding. Also no longer save updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15783)
- Optimize pending attestation processing by adding batching. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15801)
### Removed
- Removed old unused configs and hid Prysm-specific configs from the `/eth/v1/config/spec` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15797)
### Fixed
- SSZ-QL: Support nested `List` type (e.g., `ExecutionPayload.Transactions`). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15725)
- Fixed "Unsupported config field kind; value forwarded verbatim" errors for type string. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15773)
- fix /eth/v1/config/spec endpoint to properly skip omitted values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15777)
- Fix ProduceSyncCommitteeContribution not returning error when committee index is out of range. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15770)
- Improvements to get duties v2: replaces the expensive helpers.PrecomputeCommittees() with CommitteeAssignments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15784)
- Avoid unnecessary calls to `ExitInformation()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15764)
- `inclusionProofKey`: Include the commitments in the key. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15795)
- Do not reject peers if they have a mismatched version|digest when the next fork epoch is FAR_FUTURE_EPOCH. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15798)
- Don't include entries in the fork schedule if their epoch is set to far future epoch. Avoids reporting next_fork_version == <unscheduled fork>. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15799)
- Wait for custody info to be initialized before querying them. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15804)
- Fixes the `level=error msg="Could not clean up dirty states" error="OriginBlockRoot: not found in db" prefix=state-gen` error when starting in kurtosis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15808)
- Correctly clear disconnected peers from `connected_libp2p_peers` and `connected_libp2p_peers_average_scores`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15807)
- `buildStatusFromStream`: Respond `statusV2` only if Fulu is enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15818)
- Send our real earliest available slot when sending a Status request post Fulu instead of `0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15818)
- switch to built-in min/max. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15817)
- `findPeersWithSubnets`: If the filter function returns an error for a given peer, log an error and skip the peer instead of aborting the whole function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15815)
- `computeIndicesByRootByPeer`: If the loop returns an error for a given peer, log an error and skip the peer instead of aborting the whole function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15815)
- Fixed issue #15738 where separate goroutines assume sole responsibility for topic registration. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15779)
## [v6.1.0](https://github.com/prysmaticlabs/prysm/compare/v6.0.5...v6.1.0) and [v6.1.1](https://github.com/prysmaticlabs/prysm/compare/v6.1.0...v6.1.1) - 2025-09-26
This release has support for Fusaka testnets as well as many mainnet improvements. Testnet operators are required to update prior to the testnet fork date. See [PR #15721](https://github.com/OffchainLabs/prysm/pull/15721).
Mainnet operators are encouraged to update per their regular update cadence.
Note: This release was re-issued as v6.1.1 to distribute release assets without debug symbols. See issue [#15760](https://github.com/OffchainLabs/prysm/issues/15760).
#### Noteworthy improvements, changes and bugfixes:
- The `--disable-experimental-state` beacon-node flag has been removed, marking the full graduation of the [Copy-on-write design](https://hackmd.io/zlTJ6Qe_RiueT3y2R77BvA) for BeaconState fields, which reduces the memory overhead of keeping multiple BeaconStates in RAM for block processing. Congrats @rkapka!
- The behavior set by the `--attest_timely` flag is now on by default, with the flag itself deprecated.
- GetDutiesV2 introduced, lowering duty request latency and beacon-node load. Multiple other improvements and bugfixes have been made to harden the validator run loop.
- New validator flag `--max-health-checks` configures a validator to switch to a fallback beacon node after the given number of health check failures.
- Improvements to rest-mode validator, defaulting to SSZ where available and adding SSZ support to more Beacon API endpoints.
- Beacon API now honors the gzip content-encoding header.
- Log timestamps now include milliseconds.
- Full fusaka support for testnets!
**Special thanks to external contributors!**: @Alleysira, @KaloyanTanev, @rose2221
[1] To override this limit, use the validator flag `--suggested-gas-limit` or set the `builder.gas_limit` setting in your [proposer settings file](https://prysm.offchainlabs.com/docs/configure-prysm/fee-recipient/#advanced-configure-mev-builder-and-gas-limit).
### Added
- PeerDAS: Add `CustodyInfo` in `BeaconNode`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15378)
- GetDutiesV2 gRPC function, removes committee list from duties, replaced with committee length, validator committee index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15273)
- Add SSZ support for two attestation APIs: `/eth/v1/validator/attestation_data` and. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15377)
- Added feature flag for validator client to use get duties v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15380)
- PeerDAS: Implement DAS. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15367)
- `verifyBlobCommitmentCount`: Print max allowed blob count in error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15386)
- Data column support for beacon api event end point. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15387)
- Implement EIP-7917: Stable proposer lookahead. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15129)
- Implement `dataColumnSidecarByRootRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15405)
- New ssz-only flag for validator client to enable calling rest apis in SSZ, starting with get block endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15390)
- Implement `dataColumnSidecarsByRangeRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15421)
- Add SSZ support for `submitPoolAttestationsV2` beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15422)
- New `StatusV2` proto message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Implement `SendDataColumnSidecarsByRangeRequest`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15430)
- Implement `SendDataColumnSidecarsByRootRequest`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15430)
- Implement beacon API blob sidecar endpoint for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15436)
- PeerDAS: Implement the new Fulu Metadata. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15440)
- PeerDAS: Implement reconstruction. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15454)
- Implement engine method `GetBlobsV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15469)
- Implement execution `ReconstructDataColumnSidecars`, which reconstruct data column sidecars from data fetched from the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15469)
- new `--batch-verifier-limit` flag to configure max number of signatures to batch verify on gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15467)
- `disable-attest-timely` flag to disable attest timely. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15410)
- Added `max-health-checks` flag that sets the maximum times the validator tries to check the health of the beacon node before timing out. 0 or a negative number is indefinite. (the default is 0). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Add method `VersionToForkEpochMap()` to the `BeaconChainConfig` in the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15482)
- Add log capitalization analyzer and apply changes across codebase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15452)
- Slot aware cache for seen data column gossip p2p to reduce memory usages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15477)
- **Gzip Compression for Beacon API**. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14982)
- Implement data column sidecars reconstruction with data retrieved from the execution client when receiving a block via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15483)
- Add support for parsing and handling `ExecutionPayloadAndBlobsBundleV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15503)
- Added new PRYSM_API_OVERRIDE_ACCEPT environment variable to override ssz accept header as a replacement to flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15433)
- Implements the `/eth/v1/beacon/states/{state_id}/proposer_lookahead` beacon api endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15525)
- Added new metadata fields (attnets,syncnets,custody_group_count) to `/eth/v1/node/identity`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15506)
- Add BLOB_SCHEDULE field to `/eth/v1/config/spec` endpoint response to expose blob scheduling configuration for networks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15485)
- Add timing metric `publish_block_v2_duration_milliseconds` to measure processing duration of the `PublishBlockV2` beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15539)
- Add Fulu case for `saveStatesEfficientInternal`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15553)
- Support for fusaka `nfd` enr field, and changes to the semantics of the eth2 field. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15501)
- Implement post-Fulu MEV-boost protocol changes where relays only return status codes for blinded block submissions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15486)
- Added fulu block support to StreamBlocksAltair. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15583)
- All outbound HTTP requests from the validator client now include a custom `User-Agent` header in the format `Prysm/<name>/<version>`. This enhances observability and enables upstream systems to correctly identify Prysm validator clients by their name and version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15574)
- Fixes [#15435](https://github.com/OffchainLabs/prysm/issues/15435). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15574)
- Data columns syncing for Fusaka. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15564)
- Added specification references which map spec to implementation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15592)
- Warm data columns storage cache at start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15629)
- Add `--data-column-path` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15629)
- Initialize package for SSZ Query Language. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15588)
- In FetchDataColumnSidecars, after retrieving sidecars from peers, if some sidecars are still missing for a given root and a reconstruction is possible (by combining sidecars already retrieved from peers with sidecars in storage), reconstruct the missing sidecars instead of trying to fetch them from peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15593)
- Fulu block proposal changes for beacon api and gRPC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15628)
- Retry to fetch origin data column sidecars when starting from a checkpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15634)
- Aggregate and pack sync committee messages into blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15608)
- Support `List` type for SSZ-QL. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15637)
- Configured the beacon node to seek peers when we have validator custody requirements. If one or more validators are connected to the beacon node, then the beacon node should seek a diverse set of peers such that broadcasting to all data column subnets for a block proposal is more efficient. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15654)
- SSZ-QL: Add element information for `Vector` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15668)
- SSZ-QL: Support multi-dimensional tag parsing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15668)
- Added more metadata for debug logs when initial sync requests fail for "invalid data returned from peer" errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15674)
- Adding Fulu types for web3signer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15498)
- Added erigon/caplin to known p2p agent strings. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15678)
- Add Fulu fork transition tests for mainnet and minimal configurations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15666)
- Fulu proposer lookahead epoch processing tests for mainnet and minimal configurations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15667)
- Populate sszInfo of variable-length fields in AnalyzeObjects. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15676)
- KZG proof batch verification for data column gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15617)
- Added flag `--p2p-colocation-whitelist` to accept CIDRs which will bypass the p2p colocation restrictions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15685)
- Fulu spec tests coverage for covering the general package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15682)
- Implemented syncing in a disjoint network with respect to data column sidecars subscribed by peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Add retry logic when GetBlobsV2 is called. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15520)
- Call GetBlobsV2 as soon as we receive the first data column sidecar or block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15520)
- Added new post-Fulu `/eth/v1/beacon/blobs/{block_id}` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15610)
- SSZ-QL: Handle `Bitlist` and `Bitvector` types. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15704)
- Adding `/eth/v1/debug/beacon/data_column_sidecars/{block_id}` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15701)
- Support Fulu genesis block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15652)
- Update spectests to 1.6.0-beta.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15741)
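
The reconstruction fallback in `FetchDataColumnSidecars` hinges on whether at least half of all column sidecars are available between peers and storage. A minimal sketch of that decision, assuming hypothetical `stored`/`retrieved` index sets and the Fulu constant of 128 columns:

```go
package main

import "fmt"

const numberOfColumns = 128 // NUMBER_OF_COLUMNS in the Fulu spec

// canReconstruct reports whether enough distinct column sidecars are available
// (from storage plus peers) to recover the rest via erasure coding: at least
// half of all columns are required.
func canReconstruct(stored, retrieved map[uint64]bool) bool {
	have := make(map[uint64]bool, len(stored)+len(retrieved))
	for idx := range stored {
		have[idx] = true
	}
	for idx := range retrieved {
		have[idx] = true
	}
	return uint64(len(have)) >= numberOfColumns/2
}

func main() {
	stored := map[uint64]bool{0: true, 1: true}
	retrieved := make(map[uint64]bool)
	for i := uint64(2); i < 64; i++ {
		retrieved[i] = true
	}
	fmt.Println(canReconstruct(stored, retrieved)) // true: 64 of 128 columns available
}
```
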
### Changed
- `parseIndices`: Return `[]int` instead of `[]uint64`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15386)
- Reclaim memory manually in some tests that fuzz the beacon state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15395)
- When the REST API is enabled, the get block API defaults to requesting and receiving SSZ instead of JSON, with JSON as the fallback. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15390)
- Remove "invalid" from logs for incoming blob sidecar that is missing parent or out of range slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15428)
- In `TopicFromMessage`: Do not assume anymore that all Fulu-specific topics are V3 only. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Put the initiation of LC Store behind the `enable-light-client` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15464)
- Default batch signature verification limit increased from 50 to 1000. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15467)
- Increase mainnet DefaultBuilderGasLimit from 36M to 45M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15455)
- Attest timely is now default. `attest-timely` flag is now deprecated. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15410)
- Move data col reconstruction log to a more accurate place in the code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15475)
- Makes the multivalue slice permanent in the state and removes old paths. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15414)
- Previously, we optimistically believed the beacon node was healthy and tried to get chain start, but now we do a health check at the start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Optimize proposer inclusion proof calculation by pre-caching subtries. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15473)
- Move setter/getter functions for LC Bootstrap into LcStore for a unified interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15476)
- Changed `enable-duties-v2` to `disable-duties-v2` to default to using duties v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15445)
- Changed `uint64` genesis time to use `time.Time`. Also did some refactoring and cleanup that was enabled by these changes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15419)
- Add milliseconds to log timestamps. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15496)
- Move setter/getter functions for LC Updates into LcStore for a unified interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15488)
- Change LC Bootstrap logic to only save bootstraps on finalized checkpoints instead of every block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15497)
- Update links to consensus-specs to point to `master` branch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15523)
- Changed the discv5 DB from in-memory to persistent so that local node information for the node's key is retained and the ENR sequence number stays deterministic when restarting. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15519)
- Fix some nits associated with data column sidecar verification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15521)
- Include state root in StateNotFoundError for better debugging of consensus validation failures. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15533)
- When shutting down the sync service, we now send p2p goodbye messages in parallel to maximize the chances of propagating goodbyes to all peers before an unsafe shutdown. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15542)
- Do not compare liveness response with LH in e2e Beacon API evaluator. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15556)
- Moved the broadcast and event notifier logic for saving LC updates to the store function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Fixed the issue with broadcasting more than twice per LC Finality update, and the if-case bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Separated the finality update validation rules for saving and broadcasting. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Update validator custody to the latest specification, including the new status message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15532)
- Optimized Beacon API validator lookup for large batch request sizes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15558)
- Check pending block is in forkchoice before importing pending attestation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15547)
- Redesign the pending attestation queue. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15024)
- Replaced hardcoded `grpc-gateway-port` with `flags.HTTPServerPort.Name` in `testing/endtoend/components/validator.go`, resolving an inline TODO for improved flag consistency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15236)
- Refactor `htrutil.go` by removing redundant code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15453)
- Moved sync unaggregated attestation cache key generation outside of the lock path. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15572)
- Move aggregated attestation cache key generation outside of critical locks to improve performance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15579)
- Renamed various variables/functions to be more clear. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15529)
- Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15590)
- Reject incoming connections when the fork schedule of the connecting peer (parsed from their ENR) has a matching next_fork_epoch but a mismatched next_fork_version or nfd (next fork digest); see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15604)
- Update gohashtree to v0.0.5-beta. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15619)
- Updated consensus spec from v1.6.0-alpha.4 to v1.6.0-alpha.5 with adjusted minimal config parameters. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15621)
- Changed old atomic functions to new atomic.Int for safer and clearer code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15625)
- Start from justified checkpoint by default. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15636)
- Updated consensus spec from v1.6.0-alpha.5 to v1.6.0-alpha.6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15658)
- Updated outdated documentation links for Web3Signer and Why Bazel. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15631)
- Changed validatorpb.SignRequest_AggregateAttestationAndProof signing type to use AggregateAttestationAndProofV2 on web3signer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15498)
- Pre-calculate exit epoch, churn and active balance before processing slashings to reduce CPU load. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14990)
- Switched the default of the validator client REST call for block submission from JSON to SSZ; a JSON fallback will be attempted. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15645)
- Deprecated the /prysm/v1/beacon/blobs endpoint and added an error for the post-Fulu fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15643)
- Upgraded gossipsub to v0.14.2 and libp2p to v0.39.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15677)
- Prysm will now downscore peers that return invalid block_by_range responses. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15686)
- Filtering peers for data column subnets: Added a one-epoch slack to the peers head slot view. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fetching data column sidecars: If not all requested sidecars are available for a given root, return the successfully retrieved ones along with a map indicating which could not be fetched. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fetching origin data column sidecars: If only some sidecars are fetched, save the retrieved ones and retry fetching the missing ones on the next attempt. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Renamed the `--enable-experimental-backfill` flag to `--enable-backfill` to signal that it is more mature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15690)
- Restrict best LC update collection to canonical blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15585)
- PeerDAS: Wait for a random delay, then reconstruct data column sidecars and immediately reseed, instead of immediately reconstructing, waiting, and then reseeding; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15705)
- Clarified misleading log messages in beacon-chain/rpc/service gRPC module. [[PR]](https://github.com/prysmaticlabs/prysm/pull/13063)
- Broadcast the block first and then the sidecars, instead of broadcasting the block and sidecars concurrently. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15720)
- Broadcast and receive sidecars concurrently instead of sequentially. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15720)
- Changed blst dependency from `http_archive` to `go_repository` so that gazelle can keep it in sync with go.mod. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15709)
- Updated go to v1.25.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Updated rules_go to v0.57.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Updated protobuf to 28.3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15721)
- Improve logging of data column sidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15728)
- Updated go.mod to v1.25.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15740)
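
The ENR fork-schedule rejection above reduces to a small predicate. A minimal sketch, with assumed field names standing in for the values parsed from a peer's ENR:

```go
package main

import "fmt"

// forkSchedule holds the assumed fields parsed from a peer's ENR.
type forkSchedule struct {
	NextForkEpoch   uint64
	NextForkVersion [4]byte
	NextForkDigest  [4]byte // "nfd"
}

// shouldReject is true when the peer agrees on the next fork epoch but
// disagrees on the fork version or next fork digest, i.e. it will follow a
// different fork at the same epoch.
func shouldReject(local, peer forkSchedule) bool {
	if local.NextForkEpoch != peer.NextForkEpoch {
		return false // a different epoch is handled by other checks
	}
	return local.NextForkVersion != peer.NextForkVersion ||
		local.NextForkDigest != peer.NextForkDigest
}

func main() {
	local := forkSchedule{NextForkEpoch: 1024, NextForkVersion: [4]byte{6, 0, 0, 0}}
	peer := local
	peer.NextForkVersion = [4]byte{7, 0, 0, 0}
	fmt.Println(shouldReject(local, peer)) // true: same epoch, different version
}
```
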
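Similarly, the PeerDAS reseeding change reorders jitter and work: wait first, then reconstruct and reseed back-to-back. A minimal sketch with hypothetical `reconstruct` and `reseed` helpers:

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// reconstruct and reseed are hypothetical stand-ins for sidecar recovery and
// gossip re-publishing.
func reconstruct() []uint64   { return []uint64{0, 1, 2} }
func reseed(columns []uint64) { fmt.Println("reseeding columns", columns) }

// reconstructAndReseed waits a random delay first, then reconstructs and
// immediately reseeds, rather than reconstructing, waiting, and then reseeding.
func reconstructAndReseed(ctx context.Context, maxDelay time.Duration) {
	delay := time.Duration(rand.Int63n(int64(maxDelay))) // jitter spreads load across nodes
	select {
	case <-ctx.Done():
		return
	case <-time.After(delay):
	}
	reseed(reconstruct())
}

func main() {
	reconstructAndReseed(context.Background(), 50*time.Millisecond)
}
```
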
### Deprecated
- Deprecated `p2p-metadata` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15554)
### Removed
- Removed //tools/eth1voting tool. This is no longer needed as the beacon chain no longer uses eth1data voting since Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15415)
- Remove deposit count from sync new block log. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15420)
- Unused `DataColumnIdentifier` proto message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Validator client will no longer need to call the canonical head api. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15480)
- Partially reverted PR #15390 by removing the `ssz-only` debug flag until there is a real use case for it. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15433)
### Fixed
- Added regression test for [PR 15369](https://github.com/OffchainLabs/prysm/pull/15369). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15379)
- Added missing `meta` field to the response of the endpoint `/eth/v1/node/peers` to align with the Beacon API spec (#15370). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15371)
- Fix blob metric name for peer count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15412)
- Non-deterministic output order of `dataColumnSidecarByRootRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15441)
- Fixed the versioning bug for light client data types in the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15400)
- `--chain-config-file`: Do not use mainnet boot nodes anymore. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15460)
- Fix panic on dutiesv2 when there is no committee assignment on the epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15466)
- Allow SSZ requests for pending deposits, partial withdrawals and consolidations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15474)
- Validator client shuts down cleanly on error instead of fatal error. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Fixed an edge case where starting the validator client with new validator keys started the slot ticker too early, resulting in replayed slots in the main runner loop. Also fixed an edge case of replayed slots when waiting for account activations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15479)
- DV aggregations failing on the first slot of the epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15156)
- Skip genesis block retrieval when EIP-6110 deposit requests have started to prevent "pruned history unavailable" errors with execution clients that have pruned pre-merge data. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15494)
- Fixed lookahead initialization at the Fulu fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15450)
- Write `Content-Encoding` header in the response properly when gzip encoding is requested. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15499)
- Subnets subscription: Avoid dynamic subscribing blocking in case not enough peers per subnets are found. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15471)
- Do not apply the gzip middleware to the event stream API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15517)
- Fixed various reasons why a node is banned by its peers when it stops. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15505)
- Use `MinEpochsForDataColumnSidecarsRequest` in `WithinDAPeriod` when in Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15522)
- Return zero value for `Eth-Consensus-Block-Value` on error to avoid missed block proposals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15526)
- Moved reconstruction lock to prevent unnecessary work. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15528)
- Fixed variable names, links, and typos in das core code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15524)
- Fix builder bid version compatibility to support Electra bids with Fulu blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15536)
- Aligned the submitPoolSyncCommitteeSignatures response with the Beacon API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15516)
- Trigger payload attribute event as soon as an early block is processed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15541)
- Fixed the Beacon API proposer duty computation for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15534)
- Fixed the max proofs in `BlobsBundleV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15530)
- Prevent a race on double `ReceiveBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15565)
- Fixed [#15544](https://github.com/OffchainLabs/prysm/issues/15544): Persist metadata sequence number if it is needed (e.g., use static peer ID option or Fulu enabled). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15554)
- Fix the validateConsensus endpoint handler. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15548)
- Builder version check was using the head block version instead of the current fork's version based on slot; fixes e2e from https://github.com/OffchainLabs/prysm/commit/57e27199bdb9b3ef1af14c3374999aba5e0788a3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15568)
- Don't submit duplicate `SignedContributionAndProof` messages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15571)
- Genesis state, timestamp and validators root now ubiquitously available at node startup, supporting tech debt cleanup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15470)
- Fixed a condition where the blob cache could panic when the cache entry contained fewer sidecars than expected, or none at all. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15581)
- Fixed endpoint response to return 404 or 400 after isOptimistic check. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15559)
- Safeguard against accidental out of bounds array access in dataColumnSidecars method. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15586)
- Fixed NewSignedBeaconBlock calls to use Block field for proper equivocation handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15595)
- Fixed regression in find peer functions introduced in PR#15471, where nodes with equal sequence numbers were incorrectly skipped and the peer count was incorrectly reduced when replacing nodes with higher sequence numbers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15578)
- Fix bug where a stale computed value in a closure excludes newly required (e.g. attestation) subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15603)
- Fix bug where arguments of fillInForkChoiceMissingBlocks were incorrectly placed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15639)
- Fix next epoch proposer duties in Fulu by advancing the state to the beginning of the current epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15642)
- Fix getBlockAttestationsV2 to return [] instead of null when data is empty. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15651)
- Fixed the issue of empty dirs not being deleted when using blob-storage-layout=by-epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15573)
- Start topic-based peer discovery before initial sync completes so that we have coverage of needed columns when range syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15660)
- Fixed an off-by-one in forkchoice startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15684)
- Mitigated potential supernode clustering due to libp2p ConnManager pruning of non-supernodes; see https://github.com/OffchainLabs/prysm/issues/15607. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15681)
- Initial sync: Do not request data column sidecars for blocks before the retention period. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fixed incorrect attestation data request where the assigned committee index was used after Electra, instead of 0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15696)
- Use v2 endpoint for blinded block submission post-Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15716)
- Fixed 'justified' block support missing on blocker.Block and optimized logic between blocker.Block and blocker.Blob. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15715)
- Fix prysmctl panic when baseFee is not set in genesis.json. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15687)
- Fix getStateRandao not returning historic RANDAO mix values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15653)
- Fixed a race in PriorityQueue.Pop by checking emptiness under the write lock; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15726)
- In P2P service start, wait for the custody info to be correctly initialized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15732)
- `createLocalNode`: Wait before retrying to retrieve the custody group count if not present. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15735)
- Replaced fmt.Printf with proper test error handling in web3signer keymanager tests, using require.NoError(t, err) instead of t.Fatalf for better error reporting and debugging. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15723)
- Fixed a regression introduced in PR #15715: the blocker now returns an error for not found, and error handling correctly returns 404 instead of 500. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15742)
- The DA metric was not being written correctly because the if statement on err was accidentally flipped. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15743)
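
The PriorityQueue fix above addresses a classic check-then-act race. A minimal sketch on an illustrative queue (not Prysm's implementation) showing the emptiness check performed under the write lock:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errEmpty = errors.New("queue is empty")

type queue struct {
	mu    sync.RWMutex
	items []int
}

// Pop checks emptiness only after taking the write lock. Checking under a read
// lock first and then re-locking for the pop would let another goroutine drain
// the queue in between, causing an out-of-range access.
func (q *queue) Pop() (int, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 { // emptiness check under the write lock
		return 0, errEmpty
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, nil
}

func main() {
	q := &queue{items: []int{1}}
	v, err := q.Pop()
	fmt.Println(v, err) // 1 <nil>
}
```
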
### Security
- Updated go to version 1.24.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15561)
- Updated distroless/cc-debian11 to latest to resolve CVE-2024-2961. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15562)
- Updated go to version 1.24.6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15566)
- Updated quic-go to latest version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15749)
## [v6.0.5](https://github.com/prysmaticlabs/prysm/compare/v6.0.4...v6.0.5) - 2025-09-26
We are releasing a patch update on top of v6.0.4 to address a stability issue with quic-go.
All operators should update as soon as possible to v6.0.5 or later.
### Security
- Updated quic-go to latest version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15749)
## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05
This release has more work on PeerDAS and light client support. Additionally, we have a few bug fixes:
@@ -3458,4 +3797,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.6"
consensus_spec_version = "v1.6.0-beta.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-7wkWuahuCO37uVYnxq8Badvi+jY907pBj68ixL8XDOI=",
"minimal": "sha256-Qy/f27N0LffS/ej7VhIubwDejD6LMK0VdenKkqtZVt4=",
"mainnet": "sha256-3H7mu5yE+FGz2Wr/nc8Nd9aEu93YoEpsYtn0zBSoeDE=",
"general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
"minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
"mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-uvz3XfMTGfy3/BtQQoEp5XQOgrWgcH/5Zo/gR0iiP+k=",
integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -300,22 +300,6 @@ filegroup(
url = "https://github.com/ethereum/bls12-381-tests/releases/download/%s/bls_tests_yaml.tar.gz" % bls_test_version,
)
http_archive(
name = "eth2_networks",
build_file_content = """
filegroup(
name = "configs",
srcs = glob([
"shared/**/config.yaml",
]),
visibility = ["//visibility:public"],
)
""",
sha256 = "77e7e3ed65e33b7bb19d30131f4c2bb39e4dfeb188ab9ae84651c3cc7600131d",
strip_prefix = "eth2-networks-934c948e69205dcf2deb87e4ae6cc140c335f94d",
url = "https://github.com/eth-clients/eth2-networks/archive/934c948e69205dcf2deb87e4ae6cc140c335f94d.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """
@@ -327,9 +311,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-YVFFrCmjoGZ3fXMWpsCpSsYbANy1grnqYwOLKIg2SsA=",
strip_prefix = "holesky-32a72e21c6e53c262f27d50dd540cb654517d03a",
url = "https://github.com/eth-clients/holesky/archive/32a72e21c6e53c262f27d50dd540cb654517d03a.tar.gz", # 2025-03-17
integrity = "sha256-htyxg8Ln2o8eCiifFN7/hcHGZg8Ir9CPzCEx+FUnnCs=",
strip_prefix = "holesky-8aec65f11f0c986d6b76b2eb902420635eb9b815",
url = "https://github.com/eth-clients/holesky/archive/8aec65f11f0c986d6b76b2eb902420635eb9b815.tar.gz",
)
http_archive(
@@ -359,9 +343,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-b5F7Wg9LLMqGRIpP2uqb/YsSFVn2ynzlV7g/Nb1EFLk=",
strip_prefix = "sepolia-562d9938f08675e9ba490a1dfba21fb05843f39f",
url = "https://github.com/eth-clients/sepolia/archive/562d9938f08675e9ba490a1dfba21fb05843f39f.tar.gz", # 2025-03-17
integrity = "sha256-+UZgfvBcea0K0sbvAJZOz5ZNmxdWZYbohP38heUuc6w=",
strip_prefix = "sepolia-f9158732adb1a2a6440613ad2232eb50e7384c4f",
url = "https://github.com/eth-clients/sepolia/archive/f9158732adb1a2a6440613ad2232eb50e7384c4f.tar.gz",
)
http_archive(
@@ -375,9 +359,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-dPiEWUd8QvbYGwGtIm0QtCekitVLOLsW5rpQIGzz8PU=",
strip_prefix = "hoodi-828c2c940e1141092bd4bb979cef547ea926d272",
url = "https://github.com/eth-clients/hoodi/archive/828c2c940e1141092bd4bb979cef547ea926d272.tar.gz",
integrity = "sha256-G+4c9c/vci1OyPrQJnQCI+ZCv/E0cWN4hrHDY3i7ns0=",
strip_prefix = "hoodi-b6ee51b2045a5e7fe3efac52534f75b080b049c6",
url = "https://github.com/eth-clients/hoodi/archive/b6ee51b2045a5e7fe3efac52534f75b080b049c6.tar.gz",
)
http_archive(

View File

@@ -284,7 +284,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*stru
if resp.StatusCode != http.StatusOK {
decoder := json.NewDecoder(resp.Body)
decoder.DisallowUnknownFields()
errorJson := &server.IndexedVerificationFailureError{}
errorJson := &server.IndexedErrorContainer{}
if err := decoder.Decode(errorJson); err != nil {
return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
}

View File

@@ -59,6 +59,7 @@ go_test(
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -726,6 +726,12 @@ func unexpectedStatusErr(response *http.Response, expected int) error {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotOK, errMessage.Message)
case http.StatusBadGateway:
log.WithError(ErrBadGateway).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrBadGateway, errMessage.Message)
default:
log.WithError(ErrNotOK).Debug(msg)
return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))

View File

@@ -21,6 +21,7 @@ import (
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/prysmaticlabs/go-bitfield"
log "github.com/sirupsen/logrus"
)
@@ -170,8 +171,11 @@ func TestClient_RegisterValidator(t *testing.T) {
func TestClient_GetHeader(t *testing.T) {
ctx := t.Context()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
var slot primitives.Slot = 23
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
expectedPath := "/eth/v1/builder/header/%d/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
expectedPath = fmt.Sprintf(expectedPath, ds)
var slot primitives.Slot = ds
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
pubkey := ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
t.Run("server error", func(t *testing.T) {
@@ -533,7 +537,7 @@ func TestClient_GetHeader(t *testing.T) {
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseElectra{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
pro, err := epr.ToProto(100)
pro, err := epr.ToProto(es)
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)

View File

@@ -21,3 +21,4 @@ var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content
// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")
var ErrBadGateway = errors.Wrap(ErrNotOK, "recv 502 BadGateway response from API")

View File

@@ -6,6 +6,11 @@ import (
"strings"
)
var (
ErrIndexedValidationFail = "One or more messages failed validation"
ErrIndexedBroadcastFail = "One or more messages failed broadcast"
)
// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
@@ -29,19 +34,38 @@ func (e *DecodeError) Error() string {
return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}
// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedVerificationFailure `json:"failures"`
// IndexedErrorContainer wraps a collection of indexed errors.
type IndexedErrorContainer struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedError `json:"failures"`
}
func (e *IndexedVerificationFailureError) StatusCode() int {
func (e *IndexedErrorContainer) StatusCode() int {
return e.Code
}
// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
// IndexedError represents an issue when processing a single indexed object e.g. an item in an array.
type IndexedError struct {
Index int `json:"index"`
Message string `json:"message"`
}
// BroadcastFailedError represents an error scenario where broadcasting a published message failed.
type BroadcastFailedError struct {
msg string
err error
}
// NewBroadcastFailedError creates a new instance of BroadcastFailedError.
func NewBroadcastFailedError(msg string, err error) *BroadcastFailedError {
return &BroadcastFailedError{
msg: msg,
err: err,
}
}
// Error returns the underlying error message.
func (e *BroadcastFailedError) Error() string {
return fmt.Sprintf("could not broadcast %s: %s", e.msg, e.err.Error())
}

View File

@@ -296,3 +296,8 @@ type GetBlobsResponse struct {
Finalized bool `json:"finalized"`
Data []string `json:"data"` //blobs
}
type SSZQueryRequest struct {
Query string `json:"query"`
IncludeProof bool `json:"include_proof,omitempty"`
}

View File

@@ -173,6 +173,7 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -30,7 +30,7 @@ var (
// errWSBlockNotFoundInEpoch is returned when a block is not found in the WS cache or DB within epoch.
errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
// ErrNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
ErrNotDescendantOfFinalized = invalidBlock{error: errors.New("not descendant of finalized checkpoint")}
ErrNotDescendantOfFinalized = errors.New("not descendant of finalized checkpoint")
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")

View File

@@ -16,7 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
blocktypes "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
payloadattribute "github.com/OffchainLabs/prysm/v6/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -218,24 +218,18 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
preStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
// stVersion should represent the version of the pre-state; header should also be from the pre-state.
func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header interfaces.ExecutionData, blk blocktypes.ROBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
// Execution payload is only supported in Bellatrix and beyond. Pre
// merge blocks are never optimistic
if blk == nil {
return false, errors.New("signed beacon block can't be nil")
}
if preStateVersion < version.Bellatrix {
if stVersion < version.Bellatrix {
return true, nil
}
if err := consensusblocks.BeaconBlockIsNil(blk); err != nil {
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(preStateHeader, body)
enabled, err := blocks.IsExecutionEnabledUsingHeader(header, body)
if err != nil {
return false, errors.Wrap(invalidBlock{error: err}, "could not determine if execution is enabled")
}
@@ -268,28 +262,32 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
return false, errors.New("nil execution requests")
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
switch {
case err == nil:
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
if err == nil {
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
}
logFields := logrus.Fields{
"slot": blk.Block().Slot(),
"parentRoot": fmt.Sprintf("%#x", parentRoot),
"root": fmt.Sprintf("%#x", blk.Root()),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
log.WithFields(logFields).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
log.WithFields(logFields).WithError(err).Error("Invalid payload status")
return false, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: lvh,
lastValidHash: bytesutil.ToBytes32(lastValidHash),
}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
log.WithFields(logFields).WithError(err).Error("Unexpected execution engine error")
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
// reportInvalidBlock deals with the event that an invalid block was detected by the execution layer

View File

@@ -481,33 +481,12 @@ func Test_NotifyNewPayload(t *testing.T) {
phase0State, _ := util.DeterministicGenesisState(t, 1)
altairState, _ := util.DeterministicGenesisStateAltair(t, 1)
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
a := &ethpb.SignedBeaconBlockAltair{
Block: &ethpb.BeaconBlockAltair{
Body: &ethpb.BeaconBlockBodyAltair{},
},
}
a := util.NewBeaconBlockAltair()
altairBlk, err := consensusblocks.NewSignedBeaconBlock(a)
require.NoError(t, err)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Slot: 1,
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
BlockNumber: 1,
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 1
blk.Block.Body.ExecutionPayload.BlockNumber = 1
bellatrixBlk, err := consensusblocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockBellatrix(blk))
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
@@ -544,12 +523,6 @@ func Test_NotifyNewPayload(t *testing.T) {
blk: altairBlk,
isValidPayload: true,
},
{
name: "nil beacon block",
postState: bellatrixState,
errString: "signed beacon block can't be nil",
isValidPayload: false,
},
{
name: "new payload with optimistic block",
postState: bellatrixState,
@@ -576,15 +549,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "altair pre state, happy case",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -595,24 +561,7 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "not at merge transition",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -623,15 +572,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "happy case",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -642,15 +584,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "undefined error from ee",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -662,15 +597,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "invalid block hash error from ee",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -701,7 +629,9 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, tt.blk)
rob, err := consensusblocks.NewROBlock(tt.blk)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, rob)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
if tt.invalidBlock {
@@ -725,17 +655,12 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
ctx := tr.ctx
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
bellatrixBlk, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
rob, err := consensusblocks.NewROBlock(bellatrixBlk)
require.NoError(t, err)
e := &mockExecution.EngineClient{BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
Header: gethtypes.Header{
@@ -752,7 +677,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
service.cfg.ExecutionEngineCaller = e
postVersion, postHeader, err := getStateVersionAndPayload(bellatrixState)
require.NoError(t, err)
validated, err := service.notifyNewPayload(ctx, postVersion, postHeader, bellatrixBlk)
validated, err := service.notifyNewPayload(ctx, postVersion, postHeader, rob)
require.NoError(t, err)
require.Equal(t, true, validated)
}

View File

@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpbv1 "github.com/OffchainLabs/prysm/v6/proto/eth/v1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -108,7 +107,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
commonRoot = params.BeaconConfig().ZeroHash
}
dis := headSlot + newHeadSlot - 2*forkSlot
dep := math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
dep := max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
oldWeight, err := s.cfg.ForkChoiceStore.Weight(oldHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("Could not determine node weight")
@@ -135,7 +134,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
Type: statefeed.Reorg,
Data: &ethpbv1.EventChainReorg{
Slot: newHeadSlot,
Depth: math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot)),
Depth: max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot)),
OldHeadBlock: oldHeadRoot[:],
NewHeadBlock: newHeadRoot[:],
OldHeadState: oldStateRoot[:],
@@ -347,13 +346,24 @@ func (s *Service) notifyNewHeadEvent(
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
if err != nil {
return errors.Wrap(err, "could not obtain parent root in forkchoice")
}
parentSlot, err := s.RecentBlockSlot(parentRoot)
if err != nil {
return errors.Wrap(err, "could not obtain parent slot in forkchoice")
}
epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.NewHead,
Data: &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot,
State: newHeadStateRoot,
EpochTransition: slots.IsEpochStart(newHeadSlot),
EpochTransition: epochTransition,
PreviousDutyDependentRoot: previousDutyDependentRoot[:],
CurrentDutyDependentRoot: currentDutyDependentRoot[:],
ExecutionOptimistic: isOptimistic,

View File

@@ -162,6 +162,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -196,6 +199,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
@@ -213,6 +219,37 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("epoch transition", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
require.Equal(t, true, ok)
wanted := &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
}
func TestRetrieveHead_ReadOnly(t *testing.T) {

View File

@@ -109,6 +109,7 @@ func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, c
}
// RecoverCellsAndKZGProofs recovers the complete cells and KZG proofs from a given set of cell indices and partial cells.
// Note: `len(cellIndices)` must be equal to `len(partialCells)` and `cellIndices` must be sorted in ascending order.
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))

View File

@@ -3,7 +3,6 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -73,7 +72,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
}
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.roblock.Block())
@@ -94,6 +93,8 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
defer s.sendStateFeedOnBlock(cfg) // only send event after successful insertion
start := time.Now()
cfg.headRoot, err = s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
@@ -713,7 +714,7 @@ func (s *Service) areDataColumnsAvailable(
nodeID := s.cfg.P2P.NodeID()
// Get the custody group sampling size for the node.
custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount()
custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount(ctx)
if err != nil {
return errors.Wrap(err, "custody group count")
}
@@ -746,14 +747,14 @@ func (s *Service) areDataColumnsAvailable(
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
missing, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
if err != nil {
return errors.Wrap(err, "missing data columns")
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missingMap) == 0 {
if len(missing) == 0 {
return nil
}
@@ -770,33 +771,17 @@ func (s *Service) areDataColumnsAvailable(
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
timer := time.AfterFunc(time.Until(nextSlot), func() {
missingMapCount := uint64(len(missingMap))
missingCount := uint64(len(missing))
if missingMapCount == 0 {
if missingCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(peerInfo.CustodyColumns))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": missing,
"columnsExpected": helpers.SortedPrettySliceFromMap(peerInfo.CustodyColumns),
"columnsWaiting": helpers.SortedPrettySliceFromMap(missing),
}).Warning("Data columns still missing at slot end")
})
defer timer.Stop()
@@ -812,7 +797,7 @@ func (s *Service) areDataColumnsAvailable(
for _, index := range idents.Indices {
// This is a data column we are expecting.
if _, ok := missingMap[index]; ok {
if _, ok := missing[index]; ok {
storedDataColumnsCount++
}
@@ -823,10 +808,10 @@ func (s *Service) areDataColumnsAvailable(
}
// Remove the index from the missing map.
delete(missingMap, index)
delete(missing, index)
// Return if there is no more missing data columns.
if len(missingMap) == 0 {
if len(missing) == 0 {
return nil
}
}
@@ -834,10 +819,10 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missingMap))
missingIndicesCount := uint64(len(missing))
if missingIndicesCount < numberOfColumns {
missingIndices = uint64MapToSortedSlice(missingMap)
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing: %v", block.Slot(), root, missingIndices)
@@ -921,16 +906,6 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
}
}
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot

View File

@@ -9,8 +9,10 @@ import (
"testing"
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
@@ -2413,6 +2415,8 @@ func driftGenesisTime(s *Service, slot primitives.Slot, delay time.Duration) {
}
func TestMissingBlobIndices(t *testing.T) {
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
maxBlobs := params.BeaconConfig().MaxBlobsPerBlock(ds)
cases := []struct {
name string
expected [][]byte
@@ -2426,23 +2430,23 @@ func TestMissingBlobIndices(t *testing.T) {
},
{
name: "expected exceeds max",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0) + 1),
expected: fakeCommitments(maxBlobs + 1),
err: errMaxBlobsExceeded,
},
{
name: "first missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(maxBlobs),
present: []uint64{1, 2, 3, 4, 5},
result: fakeResult([]uint64{0}),
},
{
name: "all missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(maxBlobs),
result: fakeResult([]uint64{0, 1, 2, 3, 4, 5}),
},
{
name: "none missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(maxBlobs),
present: []uint64{0, 1, 2, 3, 4, 5},
result: fakeResult([]uint64{}),
},
@@ -2475,8 +2479,8 @@ func TestMissingBlobIndices(t *testing.T) {
for _, c := range cases {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
missing, err := missingBlobIndices(bs, c.root, c.expected, 0)
require.NoError(t, bm.CreateFakeIndices(c.root, ds, c.present...))
missing, err := missingBlobIndices(bs, c.root, c.expected, ds)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
@@ -2904,22 +2908,21 @@ type testIsAvailableParams struct {
columnsToSave []uint64
}
func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
func testIsAvailableSetup(t *testing.T, p testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
ctx, cancel := context.WithCancel(t.Context())
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
options := append(params.options, WithDataColumnStorage(dataColumnStorage))
options := append(p.options, WithDataColumnStorage(dataColumnStorage))
service, _ := minimalTestService(t, options...)
fs := util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)
genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32 /*validator count*/)
err := service.saveGenesisData(ctx, genesisState)
require.NoError(t, err)
genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32, util.WithElectraStateSlot(fs))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
conf := util.DefaultBlockGenConfig()
conf.NumBlobKzgCommitments = params.blobKzgCommitmentsCount
conf.NumBlobKzgCommitments = p.blobKzgCommitmentsCount
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, fs+1)
require.NoError(t, err)
block := signedBeaconBlock.Block
@@ -2929,8 +2932,8 @@ func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.C
root, err := block.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParam, 0, len(params.columnsToSave))
for _, i := range params.columnsToSave {
dataColumnsParams := make([]util.DataColumnParam, 0, len(p.columnsToSave))
for _, i := range p.columnsToSave {
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: block.Slot,
@@ -2954,8 +2957,12 @@ func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.C
}
func TestIsDataAvailable(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch, cfg.BellatrixForkEpoch, cfg.CapellaForkEpoch, cfg.DenebForkEpoch, cfg.ElectraForkEpoch, cfg.FuluForkEpoch = 0, 0, 0, 0, 0, 0
params.OverrideBeaconConfig(cfg)
t.Run("Fulu - out of retention window", func(t *testing.T) {
params := testIsAvailableParams{options: []Option{WithGenesisTime(time.Unix(0, 0))}}
params := testIsAvailableParams{}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
roBlock, err := consensusblocks.NewROBlockWithRoot(signed, root)
@@ -2972,7 +2979,6 @@ func TestIsDataAvailable(t *testing.T) {
err = service.isDataAvailable(ctx, roBlock)
require.NoError(t, err)
})
t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
@@ -3143,6 +3149,159 @@ func TestIsDataAvailable(t *testing.T) {
})
}
// Test_postBlockProcess_EventSending tests that block processed events are only sent
// when block processing succeeds according to the decision tree:
//
// Block Processing Flow:
// ├─ InsertNode FAILS (fork choice timeout)
// │ └─ blockProcessed = false ❌ NO EVENT
// │
// ├─ InsertNode succeeds
// │ ├─ handleBlockAttestations FAILS
// │ │ └─ blockProcessed = false ❌ NO EVENT
// │ │
// │ ├─ Block is NON-CANONICAL (not head)
// │ │ └─ blockProcessed = true ✅ SEND EVENT (Line 111)
// │ │
// │ ├─ Block IS CANONICAL (new head)
// │ │ ├─ getFCUArgs FAILS
// │ │ │ └─ blockProcessed = true ✅ SEND EVENT (Line 117)
// │ │ │
// │ │ ├─ sendFCU FAILS
// │ │ │ └─ blockProcessed = false ❌ NO EVENT
// │ │ │
// │ │ └─ Full success
// │ │ └─ blockProcessed = true ✅ SEND EVENT (Line 125)
func Test_postBlockProcess_EventSending(t *testing.T) {
ctx := context.Background()
// Helper to create a minimal valid block and state
createTestBlockAndState := func(t *testing.T, slot primitives.Slot, parentRoot [32]byte) (consensusblocks.ROBlock, state.BeaconState) {
st, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, st.SetSlot(slot))
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
blk := util.NewBeaconBlock()
blk.Block.Slot = slot
blk.Block.ProposerIndex = 0
blk.Block.ParentRoot = parentRoot[:]
blk.Block.StateRoot = stateRoot[:]
signed := util.HydrateSignedBeaconBlock(blk)
roBlock, err := consensusblocks.NewSignedBeaconBlock(signed)
require.NoError(t, err)
roBlk, err := consensusblocks.NewROBlock(roBlock)
require.NoError(t, err)
return roBlk, st
}
tests := []struct {
name string
setupService func(*Service, [32]byte)
expectEvent bool
expectError bool
errorContains string
}{
{
name: "Block successfully processed - sends event",
setupService: func(s *Service, blockRoot [32]byte) {
// Default setup should work
},
expectEvent: true,
expectError: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create service with required options
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
// Initialize fork choice with genesis block
st, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, st.SetSlot(0))
genesisBlock := util.NewBeaconBlock()
genesisBlock.Block.StateRoot = bytesutil.PadTo([]byte("genesisState"), 32)
signedGenesis := util.HydrateSignedBeaconBlock(genesisBlock)
block, err := consensusblocks.NewSignedBeaconBlock(signedGenesis)
require.NoError(t, err)
genesisRoot, err := block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, block))
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, genesisRoot))
genesisROBlock, err := consensusblocks.NewROBlock(block)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, genesisROBlock))
// Create test block and state with genesis as parent
roBlock, postSt := createTestBlockAndState(t, 100, genesisRoot)
// Apply additional service setup if provided
if tt.setupService != nil {
tt.setupService(service, roBlock.Root())
}
// Create post block process config
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roBlock,
postState: postSt,
isValidPayload: true,
}
// Execute postBlockProcess
err = service.postBlockProcess(cfg)
// Check error expectation
if tt.expectError {
require.NotNil(t, err)
if tt.errorContains != "" {
require.ErrorContains(t, tt.errorContains, err)
}
} else {
require.NoError(t, err)
}
// Give a moment for deferred functions to execute
time.Sleep(10 * time.Millisecond)
// Check event expectation
notifier := service.cfg.StateNotifier.(*mock.MockStateNotifier)
events := notifier.ReceivedEvents()
if tt.expectEvent {
require.NotEqual(t, 0, len(events), "Expected event to be sent but none were received")
// Verify it's a BlockProcessed event
foundBlockProcessed := false
for _, evt := range events {
if evt.Type == statefeed.BlockProcessed {
foundBlockProcessed = true
data, ok := evt.Data.(*statefeed.BlockProcessedData)
require.Equal(t, true, ok, "Event data should be BlockProcessedData")
require.Equal(t, roBlock.Root(), data.BlockRoot, "Event should contain correct block root")
break
}
}
require.Equal(t, true, foundBlockProcessed, "Expected BlockProcessed event type")
} else {
// For no-event cases, verify no BlockProcessed events were sent
for _, evt := range events {
require.NotEqual(t, statefeed.BlockProcessed, evt.Type,
"Expected no BlockProcessed event but one was sent")
}
}
})
}
}
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
var l *util.TestLightClient
switch v {

View File

@@ -219,6 +219,9 @@ func (s *Service) validateExecutionAndConsensus(
eg.Go(func() error {
var err error
postState, err = s.validateStateTransition(ctx, preState, block)
if errors.Is(err, ErrNotDescendantOfFinalized) {
return invalidBlock{error: err, root: block.Root()}
}
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
@@ -248,7 +251,7 @@ func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block
err = s.isDataAvailable(ctx, block)
}
elapsed := time.Since(start)
if err != nil {
if err == nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
}
return elapsed, err
@@ -585,17 +588,17 @@ func (s *Service) sendNewFinalizedEvent(ctx context.Context, postState state.Bea
func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySignedBeaconBlock, preState state.BeaconState) {
// Feed the indexed attestation to slasher if enabled. This action
// is done in the background to avoid adding more load to this critical code path.
ctx := context.TODO()
ctx := s.ctx
for _, att := range signed.Block().Body().Attestations() {
committees, err := helpers.AttestationCommitteesFromState(ctx, preState, att)
if err != nil {
log.WithError(err).Error("Could not get attestation committees")
return
continue
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committees...)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
return
continue
}
s.cfg.SlasherAttestationsFeed.Send(&types.WrappedIndexedAtt{IndexedAtt: indexedAtt})
}

View File
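A hedged caller-side sketch of the invalid-block classification added above: wrapping the sentinel in a typed error lets callers use errors.As to tell a bad block apart from an internal fault. The types below are simplified stand-ins, not the package's real definitions.

package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the real sentinel and typed error.
var errNotDescendantOfFinalized = errors.New("block is not a descendant of the finalized checkpoint")

type invalidBlock struct{ error }

func validateStateTransition(bad bool) error {
	if bad {
		// Mirror the change above: classify the consensus failure as an invalid block.
		return invalidBlock{error: errNotDescendantOfFinalized}
	}
	return nil
}

func main() {
	err := validateStateTransition(true)
	var ib invalidBlock
	if errors.As(err, &ib) {
		fmt.Println("bad block, penalize the sender:", err)
		return
	}
	if err != nil {
		fmt.Println("internal failure, retry later:", err)
	}
}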

@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = beaconConfig.NumberOfColumns
custodyGroupCount = beaconConfig.NumberOfCustodyGroups
}
// Safely compute the fulu fork slot.

View File

@@ -23,9 +23,11 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -562,8 +564,9 @@ func TestNotifyIndex(t *testing.T) {
var root [32]byte
copy(root[:], "exampleRoot")
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
// Test notifying a new index
bn.notifyIndex(root, 1, 1)
bn.notifyIndex(root, 1, ds)
if !bn.seenIndex[root][1] {
t.Errorf("Index was not marked as seen")
}
@@ -580,7 +583,7 @@ func TestNotifyIndex(t *testing.T) {
}
// Test notifying a new index again
bn.notifyIndex(root, 2, 1)
bn.notifyIndex(root, 2, ds)
if !bn.seenIndex[root][2] {
t.Errorf("Index was not marked as seen")
}
@@ -595,3 +598,103 @@ func TestNotifyIndex(t *testing.T) {
t.Errorf("Notifier channel did not receive the index")
}
}
func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
t.Run("CGC increases before fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("CGC increases after fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
}

View File
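For reference, the "After Fulu" slot arithmetic in this test, assuming the mainnet SlotsPerEpoch of 32:

package main

import "fmt"

func main() {
	const slotsPerEpoch = 32 // mainnet value, assumed for this sketch
	const fuluForkEpoch = 10 // as configured in the test above
	slot := fuluForkEpoch*slotsPerEpoch + 1
	fmt.Println(slot)                 // 321: one slot into the Fulu fork epoch
	fmt.Println(slot / slotsPerEpoch) // 10: the slot indeed falls in the fork epoch
}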

@@ -89,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumnSidecar(_ uint64, _ blocks.VerifiedRODataColumn) error {
func (mb *mockBroadcaster) BroadcastDataColumnSidecars(_ context.Context, _ []blocks.VerifiedRODataColumn) error {
mb.broadcastCalled = true
return nil
}
@@ -106,14 +106,14 @@ type mockCustodyManager struct {
custodyGroupCount uint64
}
func (dch *mockCustodyManager) EarliestAvailableSlot() (primitives.Slot, error) {
func (dch *mockCustodyManager) EarliestAvailableSlot(context.Context) (primitives.Slot, error) {
dch.mut.RLock()
defer dch.mut.RUnlock()
return dch.earliestAvailableSlot, nil
}
func (dch *mockCustodyManager) CustodyGroupCount() (uint64, error) {
func (dch *mockCustodyManager) CustodyGroupCount(context.Context) (uint64, error) {
dch.mut.RLock()
defer dch.mut.RUnlock()
@@ -130,6 +130,14 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
return earliestAvailableSlot, custodyGroupCount, nil
}
func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
dch.mut.Lock()
defer dch.mut.Unlock()
dch.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}

View File

@@ -5,7 +5,6 @@ package cache
import (
"context"
"errors"
"math"
"sync"
"time"
@@ -272,7 +271,7 @@ func (c *CommitteeCache) checkInProgress(ctx context.Context, seed [32]byte) err
// for the in progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
delay = min(delay, maxDelay)
}
return nil
}

View File
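Several hunks in this changeset drop the project's math helpers in favor of Go's generic min and max builtins (available since Go 1.21), which is why the math imports disappear. A standalone sketch of the capped exponential backoff that the cache wait-loops above implement:

package main

import "fmt"

func main() {
	const delayFactor = 2
	const maxDelay = 80 // illustrative cap
	delay := 10
	for i := 0; i < 5; i++ {
		delay *= delayFactor
		delay = min(delay, maxDelay) // built-in generic min; no math package needed
		fmt.Println(delay)           // 20, 40, 80, 80, 80
	}
	fmt.Println(max(1, 7)) // the max builtin replaces math.Max the same way
}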

@@ -52,7 +52,7 @@ func create(leaves [][32]byte, depth uint64) MerkleTreeNode {
if depth == 0 {
return &LeafNode{hash: leaves[0]}
}
split := math.Min(math.PowerOf2(depth-1), length)
split := min(math.PowerOf2(depth-1), length)
left := create(leaves[0:split], depth-1)
right := create(leaves[split:], depth-1)
return &InnerNode{left: left, right: right}

View File

@@ -2,7 +2,6 @@ package cache
import (
"context"
"math"
"sync"
"time"
@@ -90,7 +89,7 @@ func (c *SkipSlotCache) Get(ctx context.Context, r [32]byte) (state.BeaconState,
// for the in progress boolean to flip to false.
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = math.Min(delay, maxDelay)
delay = min(delay, maxDelay)
}
span.SetAttributes(trace.BoolAttribute("inProgress", inProgress))

View File

@@ -49,13 +49,22 @@ func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.
if err != nil {
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
}
if err := VerifySyncCommitteeSig(s, votedKeys, sync.SyncCommitteeSignature); err != nil {
return nil, 0, errors.Wrap(err, "could not verify sync committee signature")
}
return s, reward, nil
}
// ProcessSyncAggregateNoVerifySig processes the sync aggregate without verifying the sync committee signature.
// This is useful in scenarios such as block reward calculation, where we can assume the data in the block is valid.
func ProcessSyncAggregateNoVerifySig(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, uint64, error) {
s, _, reward, err := processSyncAggregate(ctx, s, sync)
if err != nil {
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
}
return s, reward, nil
}
// processSyncAggregate applies all the logic in the spec function `process_sync_aggregate` except
// verifying the BLS signatures. It returns the modified beacon state, the list of validators'
// public keys that voted (for future signature verification) and the proposer reward for including

View File
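A hedged sketch of how the new ProcessSyncAggregateNoVerifySig entry point might be used for reward computation on an already-verified block; the helper below and its argument plumbing are illustrative, not code from this changeset.

package rewardsketch // hypothetical package for this sketch

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
	"github.com/pkg/errors"
)

// syncAggregateReward returns the proposer reward for the block's sync aggregate,
// skipping BLS verification because the block is assumed to be already validated.
func syncAggregateReward(ctx context.Context, preState state.BeaconState, blk interfaces.ReadOnlySignedBeaconBlock) (uint64, error) {
	sa, err := blk.Block().Body().SyncAggregate()
	if err != nil {
		return 0, errors.Wrap(err, "could not get sync aggregate")
	}
	_, reward, err := altair.ProcessSyncAggregateNoVerifySig(ctx, preState, sa)
	if err != nil {
		return 0, errors.Wrap(err, "could not process sync aggregate")
	}
	return reward, nil
}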

@@ -53,9 +53,19 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
// Verify that ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate have the same outcome.
beaconStateNoVerifySig := beaconState.Copy()
beaconStateNoVerifySig, rewardNoVerifySig, err := altair.ProcessSyncAggregateNoVerifySig(t.Context(), beaconStateNoVerifySig, syncAggregate)
require.NoError(t, err)
sszNoVerifySig, err := beaconStateNoVerifySig.MarshalSSZ()
require.NoError(t, err)
var reward uint64
beaconState, reward, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
ssz, err := beaconState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszNoVerifySig, ssz, "States resulting from ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate are not equal")
assert.Equal(t, rewardNoVerifySig, reward, "Rewards resulting from ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate are not equal")
assert.Equal(t, uint64(72192), reward)
// Use a non-sync committee index to compare profitability.

View File

@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -209,7 +208,7 @@ func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
}
cfg := params.BeaconConfig()
modulo := math.Max(1, cfg.SyncCommitteeSize/cfg.SyncCommitteeSubnetCount/cfg.TargetAggregatorsPerSyncSubcommittee)
modulo := max(1, cfg.SyncCommitteeSize/cfg.SyncCommitteeSubnetCount/cfg.TargetAggregatorsPerSyncSubcommittee)
hashedSig := hash.Hash(sig)
return bytesutil.FromBytes8(hashedSig[:8])%modulo == 0, nil
}

View File

@@ -12,6 +12,46 @@ import (
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
)
// ConvertToAltair converts a Phase 0 beacon state to an Altair beacon state.
func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
if err != nil {
return nil, err
}
return newState, nil
}
// UpgradeToAltair updates input state to return the version Altair state.
//
// Spec code:
@@ -64,39 +104,7 @@ import (
// post.next_sync_committee = get_next_sync_committee(post)
// return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
newState, err := ConvertToAltair(state)
if err != nil {
return nil, err
}

View File

@@ -39,7 +39,6 @@ go_library(
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -42,6 +42,9 @@ func ProcessAttesterSlashings(
slashings []ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process attester slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, exitInfo)
@@ -52,6 +55,28 @@ func ProcessAttesterSlashings(
return beaconState, nil
}
// ProcessAttesterSlashingsNoVerify processes attester slashings without verifying them.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessAttesterSlashingsNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process attester slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessAttesterSlashingNoVerify(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
}
return beaconState, nil
}
// ProcessAttesterSlashing processes individual attester slashing.
func ProcessAttesterSlashing(
ctx context.Context,
@@ -59,9 +84,36 @@ func ProcessAttesterSlashing(
slashing ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil {
return nil, errors.New("exit info is required to process attester slashing")
}
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
}
return processAttesterSlashing(ctx, beaconState, slashing, exitInfo)
}
// ProcessAttesterSlashingNoVerify processes individual attester slashing without verifying it.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessAttesterSlashingNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil {
return nil, errors.New("exit info is required to process attester slashing")
}
return processAttesterSlashing(ctx, beaconState, slashing, exitInfo)
}
func processAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
slashableIndices := SlashableAttesterIndices(slashing)
sort.SliceStable(slashableIndices, func(i, j int) bool {
return slashableIndices[i] < slashableIndices[j]

View File

@@ -242,8 +242,18 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
// Verify that ProcessAttesterSlashingsNoVerify and ProcessAttesterSlashings have the same outcome.
stNoVerify := tc.st.Copy()
newStateNoVerify, err := blocks.ProcessAttesterSlashingsNoVerify(t.Context(), stNoVerify, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(stNoVerify))
require.NoError(t, err)
sszNoVerify, err := newStateNoVerify.MarshalSSZ()
require.NoError(t, err)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
require.NoError(t, err)
ssz, err := newState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszNoVerify, ssz, "States resulting from ProcessAttesterSlashingsNoVerify and ProcessAttesterSlashings are not equal")
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/contracts/deposit"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
@@ -34,7 +33,7 @@ func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposi
if err != nil {
return nil, err
}
validator.EffectiveBalance = math.Min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
validator.EffectiveBalance = min(balance-balance%params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalance)
if validator.EffectiveBalance ==
params.BeaconConfig().MaxEffectiveBalance {
validator.ActivationEligibilityEpoch = 0

View File

@@ -6,3 +6,4 @@ var errNilSignedWithdrawalMessage = errors.New("nil SignedBLSToExecutionChange m
var errNilWithdrawalMessage = errors.New("nil BLSToExecutionChange message")
var errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
var errInvalidWithdrawalCredentials = errors.New("withdrawal credentials do not match")
var ErrInvalidSignature = errors.New("invalid signature")

View File

@@ -55,6 +55,9 @@ func ProcessVoluntaryExits(
if len(exits) == 0 {
return beaconState, nil
}
if exitInfo == nil {
return nil, errors.New("exit info required to process voluntary exits")
}
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")

View File

@@ -51,6 +51,9 @@ func ProcessProposerSlashings(
slashings []*ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process proposer slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, exitInfo)
@@ -61,6 +64,28 @@ func ProcessProposerSlashings(
return beaconState, nil
}
// ProcessProposerSlashingsNoVerify processes proposer slashings without verifying them.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessProposerSlashingsNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashings []*ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process proposer slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashingNoVerify(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
}
return beaconState, nil
}
// ProcessProposerSlashing processes individual proposer slashing.
func ProcessProposerSlashing(
ctx context.Context,
@@ -68,13 +93,40 @@ func ProcessProposerSlashing(
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
if slashing == nil {
return nil, errors.New("nil proposer slashings in block body")
}
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
if err := VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
return processProposerSlashing(ctx, beaconState, slashing, exitInfo)
}
// ProcessProposerSlashingNoVerify processes individual proposer slashing without verifying it.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessProposerSlashingNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if slashing == nil {
return nil, errors.New("nil proposer slashings in block body")
}
return processProposerSlashing(ctx, beaconState, slashing, exitInfo)
}
func processProposerSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil {
return nil, errors.New("exit info is required to process proposer slashing")
}
var err error
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)

View File

@@ -172,8 +172,17 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
// Verify that ProcessProposerSlashingsNoVerify and ProcessProposerSlashings have the same outcome.
beaconStateNoVerify := beaconState.Copy()
newStateNoVerify, err := blocks.ProcessProposerSlashingsNoVerify(t.Context(), beaconStateNoVerify, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconStateNoVerify))
require.NoError(t, err)
sszNoVerify, err := newStateNoVerify.MarshalSSZ()
require.NoError(t, err)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
ssz, err := newState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszNoVerify, ssz, "States resulting from ProcessProposerSlashingsNoVerify and ProcessProposerSlashings are not equal")
newStateVals := newState.Validators()
if newStateVals[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {

View File

@@ -114,9 +114,12 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
}
proposerPubKey := proposer.PublicKey
sig := blk.Signature()
return signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
if err := signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
return blkRoot, nil
})
}); err != nil {
return ErrInvalidSignature
}
return nil
}
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.

View File

@@ -89,3 +89,36 @@ func TestVerifyBlockSignatureUsingCurrentFork(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot))
}
func TestVerifyBlockSignatureUsingCurrentFork_InvalidSignature(t *testing.T) {
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig()
bCfg.AltairForkEpoch = 100
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = 100
params.OverrideBeaconConfig(bCfg)
bState, keys := util.DeterministicGenesisState(t, 100)
altairBlk := util.NewBeaconBlockAltair()
altairBlk.Block.ProposerIndex = 0
altairBlk.Block.Slot = params.BeaconConfig().SlotsPerEpoch * 100
blkRoot, err := altairBlk.Block.HashTreeRoot()
assert.NoError(t, err)
// Sign with wrong key (proposer index 0, but using key 1)
fData := &ethpb.Fork{
Epoch: 100,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
}
domain, err := signing.Domain(fData, 100, params.BeaconConfig().DomainBeaconProposer, bState.GenesisValidatorsRoot())
assert.NoError(t, err)
rt, err := signing.ComputeSigningRoot(altairBlk.Block, domain)
assert.NoError(t, err)
wrongSig := keys[1].Sign(rt[:]).Marshal()
altairBlk.Signature = wrongSig
wsb, err := consensusblocks.NewSignedBeaconBlock(altairBlk)
require.NoError(t, err)
err = blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot)
require.ErrorIs(t, err, blocks.ErrInvalidSignature, "Expected ErrInvalidSignature for invalid signature")
}

View File
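A hedged production-side sketch of the new sentinel: since VerifyBlockSignatureUsingCurrentFork now returns ErrInvalidSignature for a bad signature, callers can separate "the block is invalid" from "verification could not be performed". The wrapper below is illustrative, with argument types abridged from the function's signature.

package sigsketch // hypothetical package for this sketch

import (
	coreblocks "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
	"github.com/pkg/errors"
)

// checkProposerSig reports whether the block's proposer signature is invalid,
// as opposed to verification failing for an unrelated (internal) reason.
func checkProposerSig(st state.ReadOnlyBeaconState, blk interfaces.ReadOnlySignedBeaconBlock, root [32]byte) (invalid bool, err error) {
	err = coreblocks.VerifyBlockSignatureUsingCurrentFork(st, blk, root)
	if errors.Is(err, coreblocks.ErrInvalidSignature) {
		return true, nil // the block itself is bad; the sender can be penalized
	}
	return false, err // nil on success, or an internal error such as a domain failure
}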

@@ -53,9 +53,15 @@ func ProcessOperations(ctx context.Context, st state.BeaconState, block interfac
// EIP-6110 validations are in VerifyOperationLengths
bb := block.Body()
// Electra extends the altair operations.
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
var exitInfo *v.ExitInfo
hasSlashings := len(bb.ProposerSlashings()) > 0 || len(bb.AttesterSlashings()) > 0
hasExits := len(bb.VoluntaryExits()) > 0
if hasSlashings || hasExits {
// ExitInformation is expensive to compute, so only do it if we need it.
exitInfo = v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
if err != nil {

View File

@@ -15,6 +15,129 @@ import (
"github.com/pkg/errors"
)
// ConvertToElectra converts a Deneb beacon state to an Electra beacon state. It does not perform any fork logic.
func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
return post, nil
}
// UpgradeToElectra updates the input state to return the version Electra state.
//
// nolint:dupword
@@ -126,55 +249,7 @@ import (
//
// return post
func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
s, err := ConvertToElectra(beaconState)
if err != nil {
return nil, err
}
@@ -206,97 +281,38 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
if err != nil {
return nil, errors.Wrap(err, "failed to get total active balance")
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
if err := s.SetExitBalanceToConsume(helpers.ActivationExitChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set exit balance to consume")
}
if err := s.SetEarliestExitEpoch(earliestExitEpoch); err != nil {
return nil, errors.Wrap(err, "failed to set earliest exit epoch")
}
if err := s.SetConsolidationBalanceToConsume(helpers.ConsolidationChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set consolidation balance to consume")
}
// Sorting preActivationIndices based on custom criteria
vals := s.Validators()
sort.Slice(preActivationIndices, func(i, j int) bool {
// Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
if vals[preActivationIndices[i]].ActivationEligibilityEpoch == vals[preActivationIndices[j]].ActivationEligibilityEpoch {
return preActivationIndices[i] < preActivationIndices[j]
}
return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
return vals[preActivationIndices[i]].ActivationEligibilityEpoch < vals[preActivationIndices[j]].ActivationEligibilityEpoch
})
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
for _, index := range preActivationIndices {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
if err := QueueEntireBalanceAndResetValidator(s, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := QueueExcessActiveBalance(post, index); err != nil {
if err := QueueExcessActiveBalance(s, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}
return post, nil
return s, nil
}

View File

@@ -13,6 +13,7 @@ import (
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
@@ -91,6 +92,18 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
ctx, span := trace.StartSpan(ctx, "electra.ProcessWithdrawalRequests")
defer span.End()
currentEpoch := slots.ToEpoch(st.Slot())
if len(wrs) == 0 {
return st, nil
}
// It is correct to compute exitInfo once for all withdrawals in the block, as the ExitInfo pointer is
// updated within InitiateValidatorExit, which is the only function that uses it.
var exitInfo *validators.ExitInfo
if st.Version() < version.Electra {
exitInfo = validators.ExitInformation(st)
} else {
// After Electra, the function InitiateValidatorExit ignores the exitInfo passed to it and recomputes it anyway.
exitInfo = &validators.ExitInfo{}
}
for _, wr := range wrs {
if wr == nil {
return nil, errors.New("nil execution layer withdrawal request")
@@ -148,7 +161,8 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
// Only exit validator if it has no pending withdrawals in the queue
if pendingBalanceToWithdraw == 0 {
var err error
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
// exitInfo is updated within InitiateValidatorExit
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, exitInfo)
if err != nil {
return nil, err
}

View File

@@ -96,12 +96,17 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
}
// Process validators eligible for ejection.
for _, idx := range eligibleForEjection {
// Here is fine to do a quadratic loop since this should
// barely happen
st, err = validators.InitiateValidatorExit(ctx, st, idx, validators.ExitInformation(st))
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
if len(eligibleForEjection) > 0 {
// It is safe to compute exitInfo once for all ejections in the epoch, as the ExitInfo pointer is
// updated within InitiateValidatorExit, which is the only function that uses it.
exitInfo := validators.ExitInformation(st)
for _, idx := range eligibleForEjection {
// It is fine to do a quadratic loop here since this should
// rarely happen
st, err = validators.InitiateValidatorExit(ctx, st, idx, exitInfo)
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}
}
}
@@ -228,7 +233,7 @@ func ProcessSlashings(st state.BeaconState) error {
// a callback is used here to apply the following actions to all validators
// below equally.
increment := params.BeaconConfig().EffectiveBalanceIncrement
minSlashing := math.Min(totalSlashing*slashingMultiplier, totalBalance)
minSlashing := min(totalSlashing*slashingMultiplier, totalBalance)
// Modified in Electra:EIP7251
var penaltyPerEffectiveBalanceIncrement uint64

View File

@@ -5,7 +5,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/math"
)
// ProcessSlashingsPrecompute processes the slashed validators during epoch processing.
@@ -21,7 +20,7 @@ func ProcessSlashingsPrecompute(s state.BeaconState, pBal *Balance) error {
totalSlashing += slashing
}
minSlashing := math.Min(totalSlashing*params.BeaconConfig().ProportionalSlashingMultiplier, pBal.ActiveCurrentEpoch)
minSlashing := min(totalSlashing*params.BeaconConfig().ProportionalSlashingMultiplier, pBal.ActiveCurrentEpoch)
epochToWithdraw := currentEpoch + exitLength/2
var hasSlashing bool

View File

@@ -7,6 +7,7 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl/testnet:__pkg__",
"//consensus-types/hdiff:__subpackages__",
"//testing/spectest:__subpackages__",
"//validator/client:__pkg__",
],

View File

@@ -15,6 +15,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -17,6 +18,25 @@ import (
// UpgradeToFulu updates the input state to return the version Fulu state.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
s, err := ConvertToFulu(beaconState)
if err != nil {
return nil, errors.Wrap(err, "could not convert to fulu")
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
pl := make([]primitives.ValidatorIndex, len(proposerLookahead))
for i, v := range proposerLookahead {
pl[i] = primitives.ValidatorIndex(v)
}
if err := s.SetProposerLookahead(pl); err != nil {
return nil, errors.Wrap(err, "failed to set proposer lookahead")
}
return s, nil
}
func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -105,11 +125,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateFulu{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
@@ -171,14 +186,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
// Need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeFulu(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
}
return post, nil
return state_native.InitializeFromProtoUnsafeFulu(s)
}

View File

@@ -10,6 +10,7 @@ go_library(
"legacy.go",
"metrics.go",
"randao.go",
"ranges.go",
"rewards_penalties.go",
"shuffle.go",
"sync_committee.go",
@@ -56,6 +57,7 @@ go_test(
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
"ranges_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",
"sync_committee_test.go",

View File

@@ -399,7 +399,6 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
ctx, span := trace.StartSpan(ctx, "helpers.CommitteeAssignments")
defer span.End()
// Verify if the epoch is valid for assignment based on the provided state.
if err := VerifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
@@ -407,12 +406,15 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
if err != nil {
return nil, err
}
vals := make(map[primitives.ValidatorIndex]struct{})
// Deduplicate and build a set for O(1) membership checks.
vals := make(map[primitives.ValidatorIndex]struct{}, len(validators))
for _, v := range validators {
vals[v] = struct{}{}
}
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment)
// Compute committee assignments for each slot in the epoch.
remaining := len(vals)
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment, len(vals))
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
@@ -420,7 +422,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
}
for j, committee := range committees {
for _, vIndex := range committee {
if _, ok := vals[vIndex]; !ok { // Skip if the validator is not in the provided validators slice.
if _, ok := vals[vIndex]; !ok {
continue
}
if _, ok := assignments[vIndex]; !ok {
@@ -429,6 +431,11 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
assignments[vIndex].Committee = committee
assignments[vIndex].AttesterSlot = slot
assignments[vIndex].CommitteeIndex = primitives.CommitteeIndex(j)
delete(vals, vIndex)
remaining--
if remaining == 0 {
return assignments, nil // early exit
}
}
}
}

View File

@@ -0,0 +1,62 @@
package helpers
import (
"fmt"
"slices"
)
// SortedSliceFromMap takes a map with uint64 keys and returns a sorted slice of the keys.
func SortedSliceFromMap(toSort map[uint64]bool) []uint64 {
slice := make([]uint64, 0, len(toSort))
for key := range toSort {
slice = append(slice, key)
}
slices.Sort(slice)
return slice
}
// PrettySlice returns a pretty string representation of a sorted slice of uint64.
// `sortedSlice` must be sorted in ascending order.
// Example: [1,2,3,5,6,7,8,10] -> "1-3,5-8,10"
func PrettySlice(sortedSlice []uint64) string {
if len(sortedSlice) == 0 {
return ""
}
var result string
start := sortedSlice[0]
end := sortedSlice[0]
for i := 1; i < len(sortedSlice); i++ {
if sortedSlice[i] == end+1 {
end = sortedSlice[i]
continue
}
if start == end {
result += fmt.Sprintf("%d,", start)
start = sortedSlice[i]
end = sortedSlice[i]
continue
}
result += fmt.Sprintf("%d-%d,", start, end)
start = sortedSlice[i]
end = sortedSlice[i]
}
if start == end {
result += fmt.Sprintf("%d", start)
return result
}
result += fmt.Sprintf("%d-%d", start, end)
return result
}
// SortedPrettySliceFromMap combines SortedSliceFromMap and PrettySlice to return a pretty string representation of the keys in a map.
func SortedPrettySliceFromMap(toSort map[uint64]bool) string {
sorted := SortedSliceFromMap(toSort)
return PrettySlice(sorted)
}

View File

@@ -0,0 +1,64 @@
package helpers_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestSortedSliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 3: true, 8: true, 1: true}
expected := []uint64{1, 3, 5, 8}
actual := helpers.SortedSliceFromMap(input)
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.Equal(t, expected[i], actual[i])
}
}
func TestPrettySlice(t *testing.T) {
tests := []struct {
name string
input []uint64
expected string
}{
{
name: "empty slice",
input: []uint64{},
expected: "",
},
{
name: "only distinct elements",
input: []uint64{1, 3, 5, 7, 9},
expected: "1,3,5,7,9",
},
{
name: "single range",
input: []uint64{1, 2, 3, 4, 5},
expected: "1-5",
},
{
name: "multiple ranges and distinct elements",
input: []uint64{1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 14},
expected: "1-3,5-8,10,12-14",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
actual := helpers.PrettySlice(tt.input)
require.Equal(t, tt.expected, actual)
})
}
}
func TestSortedPrettySliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 7: true, 8: true, 10: true}
expected := "5,7-8,10"
actual := helpers.SortedPrettySliceFromMap(input)
require.Equal(t, expected, actual)
}

View File
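One plausible use of these helpers, consistent with the advanced-logging theme of this changeset (the field name below is illustrative): render a sparse set of column indices compactly in a log line.

package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
)

func main() {
	missing := map[uint64]bool{1: true, 2: true, 3: true, 5: true, 6: true, 7: true, 8: true, 10: true}
	// Prints missingColumns=1-3,5-8,10 instead of a long unsorted index list.
	fmt.Printf("missingColumns=%s\n", helpers.SortedPrettySliceFromMap(missing))
}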

@@ -79,7 +79,7 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
}
// Spec defines `EffectiveBalanceIncrement` as the minimum to avoid division by zero.
total = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, total)
total = max(params.BeaconConfig().EffectiveBalanceIncrement, total)
if err := balanceCache.AddTotalEffectiveBalance(s, total); err != nil {
return 0, err
}

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
v1alpha1 "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
@@ -95,7 +94,7 @@ func ComputeWeakSubjectivityPeriod(ctx context.Context, st state.ReadOnlyBeaconS
if T*(200+3*D) < t*(200+12*D) {
epochsForValidatorSetChurn := N * (t*(200+12*D) - T*(200+3*D)) / (600 * delta * (2*t + T))
epochsForBalanceTopUps := N * (200 + 3*D) / (600 * Delta)
wsp += math.Max(epochsForValidatorSetChurn, epochsForBalanceTopUps)
wsp += max(epochsForValidatorSetChurn, epochsForBalanceTopUps)
} else {
wsp += 3 * N * D * t / (200 * Delta * (T - t))
}
@@ -202,14 +201,3 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
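For reference, the removed function's arithmetic with mainnet spec values (MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs, CHURN_LIMIT_QUOTIENT = 65536, assumed here from the consensus specs) reproduces the 33024-epoch figure pinned down by the test below; callers now read the value from config instead:

package main

import "fmt"

func main() {
	// Mainnet values assumed from the consensus specs.
	const minValidatorWithdrawabilityDelay = 256
	const churnLimitQuotient = 65536
	fmt.Println(minValidatorWithdrawabilityDelay + churnLimitQuotient/2) // 33024 (~5 months)
}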

View File

@@ -286,20 +286,3 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}

View File

@@ -43,6 +43,13 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
return ErrMismatchLength
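The new check reads the limit per slot because MaxBlobsPerBlock is schedule-dependent (the test below configures it via params.BlobScheduleEntry). A sketch of a schedule-based lookup with made-up types and values, not the project's implementation:

package main

import "fmt"

// blobScheduleEntry loosely mirrors the shape used in the test config below;
// the lookup semantics here are an assumption for illustration.
type blobScheduleEntry struct {
	epoch            uint64
	maxBlobsPerBlock uint64
}

// maxBlobsAt returns the limit of the latest entry at or before epoch,
// assuming the schedule is sorted by epoch ascending.
func maxBlobsAt(schedule []blobScheduleEntry, epoch uint64) uint64 {
	limit := uint64(0)
	for _, e := range schedule {
		if e.epoch <= epoch {
			limit = e.maxBlobsPerBlock
		}
	}
	return limit
}

func main() {
	schedule := []blobScheduleEntry{{epoch: 0, maxBlobsPerBlock: 6}, {epoch: 100, maxBlobsPerBlock: 12}}
	fmt.Println(maxBlobsAt(schedule, 50), maxBlobsAt(schedule, 150)) // 6 12
}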
@@ -72,10 +79,30 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
for _, sidecar := range sidecars {
for i := range sidecar.Column {
commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
var (
commitment kzg.Bytes48
cell kzg.Cell
proof kzg.Bytes48
)
commitmentBytes := sidecar.KzgCommitments[i]
cellBytes := sidecar.Column[i]
proofBytes := sidecar.KzgProofs[i]
if len(commitmentBytes) != len(commitment) ||
len(cellBytes) != len(cell) ||
len(proofBytes) != len(proof) {
return ErrMismatchLength
}
copy(commitment[:], commitmentBytes)
copy(cell[:], cellBytes)
copy(proof[:], proofBytes)
commitments = append(commitments, commitment)
indices = append(indices, sidecar.Index)
cells = append(cells, kzg.Cell(sidecar.Column[i]))
proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
cells = append(cells, cell)
proofs = append(proofs, proof)
}
}
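The rewrite above matters because a direct slice-to-array conversion such as kzg.Bytes48(b) panics at runtime when len(b) is shorter than the array; validating lengths first turns a malformed sidecar into an ErrMismatchLength error instead of a crash. A self-contained sketch of the pattern (toBytes48 is a stand-in, not a project function):

package main

import "fmt"

// toBytes48 checks the slice length before copying into a fixed-size array,
// avoiding the panic that [48]byte(b) would raise on a short slice.
func toBytes48(b []byte) ([48]byte, error) {
	var out [48]byte
	if len(b) != len(out) {
		return out, fmt.Errorf("expected %d bytes, got %d", len(out), len(b))
	}
	copy(out[:], b)
	return out, nil
}

func main() {
	_, err := toBytes48(make([]byte, 47))
	fmt.Println(err) // expected 48 bytes, got 47
}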

View File

@@ -18,38 +18,46 @@ import (
)
func TestVerifyDataColumnSidecar(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
testCases := []struct {
name string
index uint64
blobCount int
commitmentCount int
proofCount int
maxBlobsPerBlock uint64
expectedError error
}{
{name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
{name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
{name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
{name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
}
t.Run("no commitments", func(t *testing.T) {
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
})
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)
t.Run("KZG commitments size mismatch", func(t *testing.T) {
kzgCommitments := make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
column := make([][]byte, tc.blobCount)
kzgCommitments := make([][]byte, tc.commitmentCount)
kzgProof := make([][]byte, tc.proofCount)
t.Run("KZG proofs size mismatch", func(t *testing.T) {
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
t.Run("nominal", func(t *testing.T) {
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.NoError(t, err)
})
if tc.expectedError != nil {
require.ErrorIs(t, err, tc.expectedError)
return
}
require.NoError(t, err)
})
}
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
@@ -60,6 +68,14 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("size mismatch", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow

View File

@@ -1,6 +1,8 @@
package peerdas
import (
"sort"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -28,7 +30,8 @@ func MinimumColumnCountToReconstruct() uint64 {
// ReconstructDataColumnSidecars reconstructs all the data column sidecars from the given input data column sidecars.
// All input sidecars must be committed to the same block.
// `inVerifiedRoSidecars` should contain enough (unique) sidecars to reconstruct the missing columns.
// `inVerifiedRoSidecars` should contain enough sidecars to reconstruct the missing columns, and must not contain any duplicates.
// WARNING: This function sorts `verifiedRoSidecars` in place by index.
func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
// Check if there is at least one input sidecar.
if len(verifiedRoSidecars) == 0 {
@@ -51,18 +54,17 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
}
}
// Deduplicate sidecars.
sidecarByIndex := make(map[uint64]blocks.VerifiedRODataColumn, len(verifiedRoSidecars))
for _, inVerifiedRoSidecar := range verifiedRoSidecars {
sidecarByIndex[inVerifiedRoSidecar.Index] = inVerifiedRoSidecar
}
// Check if there is enough sidecars to reconstruct the missing columns.
sidecarCount := len(sidecarByIndex)
sidecarCount := len(verifiedRoSidecars)
if uint64(sidecarCount) < MinimumColumnCountToReconstruct() {
return nil, ErrNotEnoughDataColumnSidecars
}
// Sort the input sidecars by index.
sort.Slice(verifiedRoSidecars, func(i, j int) bool {
return verifiedRoSidecars[i].Index < verifiedRoSidecars[j].Index
})
// Recover cells and compute proofs in parallel.
var wg errgroup.Group
cellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
@@ -71,10 +73,10 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
cellsIndices := make([]uint64, 0, sidecarCount)
cells := make([]kzg.Cell, 0, sidecarCount)
for columnIndex, sidecar := range sidecarByIndex {
for _, sidecar := range verifiedRoSidecars {
cell := sidecar.Column[blobIndex]
cells = append(cells, kzg.Cell(cell))
cellsIndices = append(cellsIndices, columnIndex)
cellsIndices = append(cellsIndices, sidecar.Index)
}
// Recover the cells and proofs for the corresponding blob
@@ -255,7 +257,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
return nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, errors.New("wrong KZG proof size - should never happen")
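The capacity fix at the end of this hunk corrects an over-allocation: the proofs slice holds numberOfColumns elements, so multiplying by the per-proof byte size reserved roughly 48x more capacity than needed. A quick reminder of make's length/capacity semantics:

package main

import "fmt"

func main() {
	// make([]T, 0, n) allocates room for n elements but starts empty;
	// capacity should count elements, not bytes.
	proofs := make([][]byte, 0, 128)
	fmt.Println(len(proofs), cap(proofs)) // 0 128
}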

View File

@@ -125,11 +125,12 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
}
func TestReconstructBlobs(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
require.NoError(t, kzg.Start())
var emptyBlock blocks.ROBlock
fs := util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)
t.Run("no index", func(t *testing.T) {
actual, err := peerdas.ReconstructBlobs(emptyBlock, nil, nil)
@@ -190,10 +191,10 @@ func TestReconstructBlobs(t *testing.T) {
})
t.Run("not committed to the same block", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{1}))
roBlock, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{2}))
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{1}), util.WithSlot(fs))
roBlock, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{2}), util.WithSlot(fs))
_, err = peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, []int{0})
_, err := peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, []int{0})
require.ErrorContains(t, peerdas.ErrRootMismatch.Error(), err)
})
@@ -440,6 +441,7 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), cap(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {

View File

@@ -16,61 +16,60 @@ func TestDataColumnsAlignWithBlock(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
fs := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
require.NoError(t, err)
fuluMax := params.BeaconConfig().MaxBlobsPerBlock(fs)
t.Run("pre fulu", func(t *testing.T) {
block, _ := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 0, 0)
block, _ := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, fs, 0)
err := peerdas.DataColumnsAlignWithBlock(block, nil)
require.NoError(t, err)
})
t.Run("too many commitmnets", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.BlobSchedule = []params.BlobScheduleEntry{{}}
params.OverrideBeaconConfig(config)
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3)
t.Run("too many commitments", func(t *testing.T) {
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, fuluMax+1, util.WithSlot(fs))
err := peerdas.DataColumnsAlignWithBlock(block, nil)
require.ErrorIs(t, err, peerdas.ErrTooManyCommitments)
})
t.Run("root mismatch", func(t *testing.T) {
_, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 0)
_, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 0, util.WithSlot(fs))
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrRootMismatch)
})
t.Run("column size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].Column = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG commitments size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgCommitments = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG proofs mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgProofs = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("commitment mismatch", func(t *testing.T) {
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
_, alteredSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
_, alteredSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
alteredSidecars[1].KzgCommitments[0][0]++ // Overflow is OK
err := peerdas.DataColumnsAlignWithBlock(block, alteredSidecars)
require.ErrorIs(t, err, peerdas.ErrCommitmentMismatch)
})
t.Run("nominal", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2)
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.NoError(t, err)
})

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition/interop"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -378,9 +379,16 @@ func ProcessBlockForStateRoot(
func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
hasSlashings := len(beaconBlock.Body().ProposerSlashings()) > 0 || len(beaconBlock.Body().AttesterSlashings()) > 0
// exitInfo is only needed for voluntary exits pre Electra.
hasExits := st.Version() < version.Electra && len(beaconBlock.Body().VoluntaryExits()) > 0
exitInfo := &validators.ExitInfo{}
if hasSlashings || hasExits {
// ExitInformation is expensive to compute, only do it if we need it.
exitInfo = v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
@@ -407,10 +415,15 @@ func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock int
// This calls phase 0 block operations.
func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
hasSlashings := len(beaconBlock.Body().ProposerSlashings()) > 0 || len(beaconBlock.Body().AttesterSlashings()) > 0
hasExits := len(beaconBlock.Body().VoluntaryExits()) > 0
var exitInfo *v.ExitInfo
if hasSlashings || hasExits {
// ExitInformation is expensive to compute, only do it if we need it.
exitInfo = v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
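Both hunks apply the same guard: ExitInformation walks the validator set, so it is now computed only when the block actually carries slashings or voluntary exits. A minimal sketch of the pattern with stand-in types:

package main

import "fmt"

type exitInfo struct{ totalActiveBalance uint64 }

// computeExitInfo stands in for the expensive ExitInformation call.
func computeExitInfo() *exitInfo {
	fmt.Println("computing exit info (expensive)")
	return &exitInfo{totalActiveBalance: 32_000_000_000}
}

func processBlock(slashings, exits int) {
	var info *exitInfo
	// Only pay for the computation when the block needs it.
	if slashings > 0 || exits > 0 {
		info = computeExitInfo()
	}
	fmt.Printf("processed block (exit info computed: %v)\n", info != nil)
}

func main() {
	processBlock(0, 0) // skips the expensive call
	processBlock(1, 0) // computes exit info
}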

View File

@@ -13,7 +13,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/math"
mathutil "github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -60,7 +59,7 @@ func ExitInformation(s state.BeaconState) *ExitInfo {
_ = err
// Apply minimum balance as per spec
exitInfo.TotalActiveBalance = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, totalActiveBalance)
exitInfo.TotalActiveBalance = max(params.BeaconConfig().EffectiveBalanceIncrement, totalActiveBalance)
return exitInfo
}
@@ -98,7 +97,9 @@ func InitiateValidatorExit(
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, ErrValidatorAlreadyExited
}
if exitInfo == nil {
return nil, errors.New("exit info is required to process validator exit")
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
@@ -177,6 +178,9 @@ func initiateValidatorExitPreElectra(ctx context.Context, s state.BeaconState, e
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitInfo == nil {
return errors.New("exit info is required to process validator exit")
}
if exitableEpoch > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = exitableEpoch
exitInfo.Churn = 0
@@ -235,7 +239,9 @@ func SlashValidator(
exitInfo *ExitInfo,
) (state.BeaconState, error) {
var err error
if exitInfo == nil {
return nil, errors.New("exit info is required to slash validator")
}
s, err = InitiateValidatorExitForTotalBal(ctx, s, slashedIdx, exitInfo, primitives.Gwei(exitInfo.TotalActiveBalance))
if err != nil && !errors.Is(err, ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)

View File

@@ -18,13 +18,16 @@ import (
)
func Test_commitmentsToCheck(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
fulu := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
windowSlots = windowSlots + primitives.Slot(params.BeaconConfig().FuluForkEpoch)
maxBlobs := params.LastNetworkScheduleEntry().MaxBlobsPerBlock
commits := make([][]byte, maxBlobs+1)
for i := 0; i < len(commits); i++ {
commits[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
cases := []struct {
name string
@@ -47,41 +50,40 @@ func Test_commitmentsToCheck(t *testing.T) {
{
name: "commitments within da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
d := util.NewBeaconBlockFulu()
d.Block.Body.BlobKzgCommitments = commits[:maxBlobs]
d.Block.Slot = fulu + 100
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
commits: commits[:maxBlobs],
slot: fulu + 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d := util.NewBeaconBlockFulu()
d.Block.Slot = fulu
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
d.Block.Body.BlobKzgCommitments = commits[:maxBlobs]
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
slot: fulu + windowSlots + 1,
},
{
name: "excessive commitments",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Slot = 100
d := util.NewBeaconBlockFulu()
d.Block.Slot = fulu + 100
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
// Double the number of commitments, assert that this is over the limit
d.Block.Body.BlobKzgCommitments = append(commits, d.Block.Body.BlobKzgCommitments...)
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
@@ -115,67 +117,69 @@ func Test_commitmentsToCheck(t *testing.T) {
func TestLazilyPersistent_Missing(t *testing.T) {
ctx := t.Context()
store := filesystem.NewEphemeralBlobStorage(t)
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, blobSidecars[2]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NoError(t, as.Persist(ds, blobSidecars[2]))
err := as.IsDataAvailable(ctx, ds, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, blobSidecars[0]))
err = as.IsDataAvailable(ctx, 1, blk)
require.NoError(t, as.Persist(ds, blobSidecars[0]))
err = as.IsDataAvailable(ctx, ds, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, blobSidecars...))
require.NoError(t, as.Persist(ds, blobSidecars...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
require.NoError(t, as.IsDataAvailable(ctx, ds, blk))
}
func TestLazilyPersistent_Mismatch(t *testing.T) {
ctx := t.Context()
store := filesystem.NewEphemeralBlobStorage(t)
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, blobSidecars[0]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NoError(t, as.Persist(ds, blobSidecars[0]))
err := as.IsDataAvailable(ctx, ds, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
}
func TestLazyPersistOnceCommitted(t *testing.T) {
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 6)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, blobSidecars...))
require.NoError(t, as.Persist(ds, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(1, blobSidecars...), ErrDuplicateSidecar)
require.ErrorIs(t, as.Persist(ds, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
require.ErrorIs(t, as.Persist(1, blobSidecars[0]), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
require.ErrorIs(t, as.Persist(ds, blobSidecars[0]), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 4)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
require.NoError(t, as.Persist(32+slotOOB, moreBlobSidecars[0]))
slotOOB := util.SlotAtEpoch(t, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
slotOOB += ds + 32
require.NoError(t, as.Persist(slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(1, moreBlobSidecars...))
require.NoError(t, as.Persist(ds, moreBlobSidecars...))
}
type mockBlobBatchVerifier struct {

View File

@@ -39,7 +39,7 @@ func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpe
entry := &blobCacheEntry{}
if len(onDisk) > 0 {
od := map[[32]byte][]int{blk.Root(): onDisk}
sumz := filesystem.NewMockBlobStorageSummarizer(t, od)
sumz := filesystem.NewMockBlobStorageSummarizer(t, slots.ToEpoch(slot), od)
sum := sumz.Summary(blk.Root())
entry.setDiskSummary(sum)
}

View File

@@ -122,18 +122,18 @@ type BlobStorage struct {
func (bs *BlobStorage) WarmCache() {
start := time.Now()
if bs.layoutName == LayoutNameFlat {
log.Info("Blob filesystem cache warm-up started. This may take a few minutes.")
log.Info("Blob filesystem cache warm-up started. This may take a few minutes")
} else {
log.Info("Blob filesystem cache warm-up started.")
log.Info("Blob filesystem cache warm-up started")
}
if err := warmCache(bs.layout, bs.cache); err != nil {
log.WithError(err).Error("Error encountered while warming up blob filesystem cache.")
log.WithError(err).Error("Error encountered while warming up blob filesystem cache")
}
if err := bs.migrateLayouts(); err != nil {
log.WithError(err).Error("Error encountered while migrating blob storage.")
log.WithError(err).Error("Error encountered while migrating blob storage")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete")
}
// If any blob storage directories are found for layouts besides the configured layout, migrate them.

View File

@@ -21,7 +21,8 @@ import (
)
func TestBlobStorage_SaveBlobData(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, params.BeaconConfig().MaxBlobsPerBlock(1))
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, params.BeaconConfig().MaxBlobsPerBlock(ds))
testSidecars := verification.FakeVerifySliceForTest(t, sidecars)
t.Run("no error for duplicate", func(t *testing.T) {
@@ -127,21 +128,22 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
}
func TestBlobIndicesBounds(t *testing.T) {
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
fs := afero.NewMemMapFs()
root := [32]byte{}
okIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0)) - 1
writeFakeSSZ(t, fs, root, 0, okIdx)
okIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(es)) - 1
writeFakeSSZ(t, fs, root, es, okIdx)
bs := NewWarmedEphemeralBlobStorageUsingFs(t, fs, WithLayout(LayoutNameByEpoch))
indices := bs.Summary(root).mask
expected := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
expected := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
expected[okIdx] = true
for i := range expected {
require.Equal(t, expected[i], indices[i])
}
oobIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
writeFakeSSZ(t, fs, root, 0, oobIdx)
oobIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(es))
writeFakeSSZ(t, fs, root, es, oobIdx)
// This now fails at cache warmup time.
require.ErrorIs(t, warmCache(bs.layout, bs.cache), errIndexOutOfBounds)
}

View File

@@ -6,14 +6,17 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestSlotByRoot_Summary(t *testing.T) {
noneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
allSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
firstSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
lastSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
oneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
ee := params.BeaconConfig().ElectraForkEpoch
es := util.SlotAtEpoch(t, ee)
noneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
allSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
firstSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
lastSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
oneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
firstSet[0] = true
lastSet[len(lastSet)-1] = true
oneSet[1] = true
@@ -53,7 +56,7 @@ func TestSlotByRoot_Summary(t *testing.T) {
for _, c := range cases {
if c.expected != nil {
key := bytesutil.ToBytes32([]byte(c.name))
sc.cache[key] = BlobStorageSummary{epoch: 0, mask: c.expected}
sc.cache[key] = BlobStorageSummary{epoch: ee, mask: c.expected}
}
}
for _, c := range cases {
@@ -73,6 +76,7 @@ func TestSlotByRoot_Summary(t *testing.T) {
}
func TestAllAvailable(t *testing.T) {
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
idxUpTo := func(u int) []int {
r := make([]int, u)
for i := range r {
@@ -125,13 +129,13 @@ func TestAllAvailable(t *testing.T) {
},
{
name: "out of bound is safe",
count: params.BeaconConfig().MaxBlobsPerBlock(0) + 1,
count: params.BeaconConfig().MaxBlobsPerBlock(es) + 1,
aa: false,
},
{
name: "max present",
count: params.BeaconConfig().MaxBlobsPerBlock(0),
idxSet: idxUpTo(params.BeaconConfig().MaxBlobsPerBlock(0)),
count: params.BeaconConfig().MaxBlobsPerBlock(es),
idxSet: idxUpTo(params.BeaconConfig().MaxBlobsPerBlock(es)),
aa: true,
},
{
@@ -143,7 +147,7 @@ func TestAllAvailable(t *testing.T) {
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
mask := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
mask := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(es))
for _, idx := range c.idxSet {
mask[idx] = true
}

View File

@@ -200,6 +200,7 @@ func (dcs *DataColumnStorage) WarmCache() {
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
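The added return nil follows the walk-callback convention: returning a non-nil error from the callback aborts the entire traversal, while logging and returning nil skips only the offending file. A standalone illustration using the standard library's WalkDir:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	err := filepath.WalkDir(".", func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil {
			fmt.Println("skipping unreadable entry:", path, walkErr)
			return nil // keep walking the rest of the tree
		}
		return nil
	})
	fmt.Println("walk finished:", err)
}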
@@ -988,8 +989,8 @@ func filePath(root [fieldparams.RootLength]byte, epoch primitives.Epoch) string
// extractFileMetadata extracts the metadata from a file path.
// If the path is not a leaf, it returns nil.
func extractFileMetadata(path string) (*fileMetadata, error) {
// Is this Windows friendly?
parts := strings.Split(path, "/")
// Use filepath.Separator to handle both Windows (\) and Unix (/) path separators
parts := strings.Split(path, string(filepath.Separator))
if len(parts) != 3 {
return nil, errors.Errorf("unexpected file %s", path)
}
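Splitting on filepath.Separator makes the parsing portable across Windows and Unix; an alternative the diff does not take would be normalizing with filepath.ToSlash before splitting. A small sketch of the chosen approach:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitPath mirrors the approach above: split on the OS-specific separator.
func splitPath(p string) []string {
	return strings.Split(p, string(filepath.Separator))
}

func main() {
	fmt.Println(splitPath(filepath.Join("12", "1234", "file.sszs"))) // [12 1234 file.sszs]
}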
@@ -1032,5 +1033,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {
// period computes the period of a given epoch.
func period(epoch primitives.Epoch) uint64 {
return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
}

View File

@@ -35,8 +35,9 @@ func (s DataColumnStorageSummary) HasIndex(index uint64) bool {
// HasAtLeastOneIndex returns true if at least one of the DataColumnSidecars at the given indices is available in the filesystem.
func (s DataColumnStorageSummary) HasAtLeastOneIndex(indices []uint64) bool {
size := uint64(len(s.mask))
for _, index := range indices {
if s.mask[index] {
if index < size && s.mask[index] {
return true
}
}

View File

@@ -25,11 +25,11 @@ func TestHasIndex(t *testing.T) {
func TestHasAtLeastOneIndex(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})
hasAtLeastOneIndex := summary.HasAtLeastOneIndex([]uint64{3, 1, 2})
require.Equal(t, true, hasAtLeastOneIndex)
actual := summary.HasAtLeastOneIndex([]uint64{3, 1, fieldparams.NumberOfColumns, 2})
require.Equal(t, true, actual)
hasAtLeastOneIndex = summary.HasAtLeastOneIndex([]uint64{3, 4, 2})
require.Equal(t, false, hasAtLeastOneIndex)
actual = summary.HasAtLeastOneIndex([]uint64{3, 4, fieldparams.NumberOfColumns, 2})
require.Equal(t, false, actual)
}
func TestCount(t *testing.T) {

View File

@@ -3,6 +3,7 @@ package filesystem
import (
"encoding/binary"
"os"
"path/filepath"
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -725,3 +726,37 @@ func TestPrune(t *testing.T) {
require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
})
}
func TestExtractFileMetadata(t *testing.T) {
t.Run("Unix", func(t *testing.T) {
// Test with Unix-style path separators (/)
path := "12/1234/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '/' {
// On Unix systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Windows systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
t.Run("Windows", func(t *testing.T) {
// Test with Windows-style path separators (\)
path := "12\\1234\\0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '\\' {
// On Windows systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Unix systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
}

View File

@@ -11,6 +11,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -60,12 +61,13 @@ func TestRootFromDir(t *testing.T) {
}
func TestSlotFromFile(t *testing.T) {
es := util.SlotAtEpoch(t, params.BeaconConfig().ElectraForkEpoch)
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: es + 0},
{slot: es + 2},
{slot: es + 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
@@ -243,39 +245,40 @@ func TestSlotFromBlob(t *testing.T) {
}
func TestIterationComplete(t *testing.T) {
de := params.BeaconConfig().DenebForkEpoch
targets := []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
path: "by-epoch/0/1234/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", de+1234, 0),
path: "by-epoch/%d/%d/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 0),
slotOffset: 31,
path: "by-epoch/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
path: "by-epoch/%d/%d/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 1),
slotOffset: 31,
path: "by-epoch/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
path: "by-epoch/%d/%d/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", -1+math.MaxUint64/32, 0),
slotOffset: 16,
path: "by-epoch/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
path: "by-epoch/%d/%d/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 1),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", -1+math.MaxUint64/32, 1),
slotOffset: 16,
path: "by-epoch/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/1.ssz",
path: "by-epoch/%d/%d/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/1.ssz",
},
{
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", 16777217, 0),
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", -1+math.MaxUint64/32, 0),
slotOffset: 16,
path: "by-epoch/4096/16777217/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
path: "by-epoch/%d/%d/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
},
{
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", 11235, 1),
path: "by-epoch/2/11235/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", de+11235, 1),
path: "by-epoch/%d/%d/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
},
}
fs := afero.NewMemMapFs()
@@ -299,6 +302,7 @@ func TestIterationComplete(t *testing.T) {
require.Equal(t, true, ok)
require.Equal(t, tar.ident.epoch, entry.epoch)
require.Equal(t, true, entry.HasIndex(tar.ident.index))
require.Equal(t, tar.path, byEpoch.sszPath(tar.ident))
path := fmt.Sprintf(tar.path, periodForEpoch(tar.ident.epoch), tar.ident.epoch)
require.Equal(t, path, byEpoch.sszPath(tar.ident))
}
}

View File

@@ -4,10 +4,10 @@ import (
"os"
"testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/spf13/afero"
)
@@ -18,9 +18,7 @@ func ezIdent(t *testing.T, rootStr string, epoch primitives.Epoch, index uint64)
}
func setupTestBlobFile(t *testing.T, ident blobIdent, offset primitives.Slot, fs afero.Fs, l fsLayout) {
slot, err := slots.EpochStart(ident.epoch)
require.NoError(t, err)
slot += offset
slot := util.SlotAtEpoch(t, ident.epoch) + offset
_, sc := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
scb, err := sc[0].MarshalSSZ()
require.NoError(t, err)
@@ -53,6 +51,7 @@ func testAssertFsMigrated(t *testing.T, fs afero.Fs, ident blobIdent, before, af
}
func TestMigrations(t *testing.T) {
de := params.BeaconConfig().DenebForkEpoch
cases := []struct {
name string
forwardLayout string
@@ -65,18 +64,18 @@ func TestMigrations(t *testing.T) {
forwardLayout: LayoutNameByEpoch,
targets: []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", de+1234, 0),
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 0),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 1),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", de+16777216, 0),
slotOffset: 16,
},
},
@@ -87,33 +86,33 @@ func TestMigrations(t *testing.T) {
forwardLayout: LayoutNameByEpoch,
targets: []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", de+1234, 0),
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 0),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", de+5330, 1),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", de+16777216, 0),
slotOffset: 16,
migrated: true,
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 1),
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", de+16777216, 1),
slotOffset: 16,
migrated: true,
},
{
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", 16777217, 0),
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", de+16777217, 0),
slotOffset: 16,
migrated: true,
},
{
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", 11235, 1),
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", de+11235, 1),
migrated: true,
},
},

View File

@@ -88,11 +88,11 @@ func NewEphemeralBlobStorageWithMocker(t testing.TB) (*BlobMocker, *BlobStorage)
return &BlobMocker{fs: fs, bs: bs}, bs
}
func NewMockBlobStorageSummarizer(t *testing.T, set map[[32]byte][]int) BlobStorageSummarizer {
func NewMockBlobStorageSummarizer(t *testing.T, epoch primitives.Epoch, set map[[32]byte][]int) BlobStorageSummarizer {
c := newBlobStorageCache()
for k, v := range set {
for i := range v {
if err := c.ensure(blobIdent{root: k, epoch: 0, index: uint64(v[i])}); err != nil {
if err := c.ensure(blobIdent{root: k, epoch: epoch, index: uint64(v[i])}); err != nil {
t.Fatal(err)
}
}
@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
WithDataColumnFs(fs),
)

View File

@@ -142,6 +142,7 @@ func testRoots(n int) [][32]byte {
}
func TestLayoutPruneBefore(t *testing.T) {
electra := params.BeaconConfig().ElectraForkEpoch
roots := testRoots(10)
cases := []struct {
name string
@@ -153,27 +154,27 @@ func TestLayoutPruneBefore(t *testing.T) {
}{
{
name: "none pruned",
pruneBefore: 1,
pruneBefore: electra + 1,
pruned: []testIdent{},
remain: []testIdent{
{offset: 1, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 1, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 0}},
{offset: 1, blobIdent: blobIdent{root: roots[0], epoch: electra + 1, index: 0}},
{offset: 1, blobIdent: blobIdent{root: roots[1], epoch: electra + 1, index: 0}},
},
},
{
name: "expected pruned before epoch",
pruneBefore: 3,
pruneBefore: electra + 3,
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 5}},
{offset: 0, blobIdent: blobIdent{root: roots[2], epoch: 2, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[3], epoch: 2, index: 3}},
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: electra + 1, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[1], epoch: electra + 1, index: 5}},
{offset: 0, blobIdent: blobIdent{root: roots[2], epoch: electra + 2, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[3], epoch: electra + 2, index: 3}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[4], epoch: 3, index: 2}}, // boundary
{offset: 31, blobIdent: blobIdent{root: roots[5], epoch: 3, index: 0}}, // boundary
{offset: 0, blobIdent: blobIdent{root: roots[6], epoch: 4, index: 1}},
{offset: 31, blobIdent: blobIdent{root: roots[7], epoch: 4, index: 5}},
{offset: 0, blobIdent: blobIdent{root: roots[4], epoch: electra + 3, index: 2}}, // boundary
{offset: 31, blobIdent: blobIdent{root: roots[5], epoch: electra + 3, index: 0}}, // boundary
{offset: 0, blobIdent: blobIdent{root: roots[6], epoch: electra + 4, index: 1}},
{offset: 31, blobIdent: blobIdent{root: roots[7], epoch: electra + 4, index: 5}},
},
sum: pruneSummary{blobsPruned: 4},
},

View File

@@ -129,6 +129,7 @@ type NoHeadAccessDatabase interface {
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

View File

@@ -132,6 +132,6 @@ func recoverStateSummary(ctx context.Context, tx *bolt.Tx, root []byte) error {
if err != nil {
return err
}
summaryBucket := tx.Bucket(stateBucket)
summaryBucket := tx.Bucket(stateSummaryBucket)
return summaryBucket.Put(root, summaryEnc)
}

View File

@@ -137,3 +137,32 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {
require.ErrorContains(t, errMissingStateForCheckpoint.Error(), db.SaveFinalizedCheckpoint(ctx, cp))
}
// Regression test: verify that saving a checkpoint triggers recovery which writes
// the state summary into the correct stateSummaryBucket so that HasStateSummary/StateSummary see it.
func TestRecoverStateSummary_WritesToStateSummaryBucket(t *testing.T) {
db := setupDB(t)
ctx := t.Context()
// Create a block without saving a state or summary, so recovery is needed.
blk := util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{})
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wsb))
// Precondition: summary not present yet.
require.Equal(t, false, db.HasStateSummary(ctx, root))
// Saving justified checkpoint should trigger recovery path calling recoverStateSummary.
cp := &ethpb.Checkpoint{Epoch: 2, Root: root[:]}
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
// Postcondition: summary is visible via the public summary APIs (which read stateSummaryBucket).
require.Equal(t, true, db.HasStateSummary(ctx, root))
summary, err := db.StateSummary(ctx, root)
require.NoError(t, err)
require.NotNil(t, summary)
assert.DeepEqual(t, &ethpb.StateSummary{Slot: blk.Block.Slot, Root: root[:]}, summary)
}

View File

@@ -2,16 +2,19 @@ package kv
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
// UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) earliest available slot and custody group count.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
@@ -70,6 +73,79 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
return storedEarliestAvailableSlot, storedGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the stored earliest available slot. Decreases are always
// allowed (backfill); increases are rejected when they would move the earliest available slot
// past the MIN_EPOCHS_FOR_BLOCK_REQUESTS window the node is required to serve.
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
defer span.End()
storedEarliestAvailableSlot := primitives.Slot(0)
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
// Retrieve the stored earliest available slot.
storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
if len(storedEarliestAvailableSlotBytes) != 0 {
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
}
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot <= storedEarliestAvailableSlot {
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "calculate minimum required slot")
}
// Prevent any increase that would put earliest available slot beyond the minimum required slot
if earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
minRequiredSlot, minRequiredEpoch,
)
}
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}); err != nil {
return err
}
log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")
return nil
}
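To make the guard above concrete, with a hypothetical current epoch (33024 matches the mainnet spec value discussed earlier, 32 is the mainnet slots-per-epoch): a node at epoch 300000 may not raise its earliest available slot past the start of epoch 266976:

package main

import "fmt"

func main() {
	// Hypothetical current epoch; 33024 and 32 are the mainnet spec values.
	const currentEpoch, minEpochsForBlocks, slotsPerEpoch = 300000, 33024, 32
	minRequiredEpoch := currentEpoch - minEpochsForBlocks
	fmt.Println(minRequiredEpoch)                 // 266976
	fmt.Println(minRequiredEpoch * slotsPerEpoch) // 8543232: highest allowed earliestAvailableSlot
}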
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.

View File

@@ -3,10 +3,13 @@ package kv
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/time/slots"
bolt "go.etcd.io/bbolt"
)
@@ -132,6 +135,131 @@ func TestUpdateCustodyInfo(t *testing.T) {
})
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
ctx := t.Context()
t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
const (
initialSlot = primitives.Slot(300)
initialCount = uint64(10)
earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with a lower slot (should update for backfill)
err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, earliestSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set earliest slot well before minRequiredSlot
const groupCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
require.NoError(t, err)
// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
validSlot := minRequiredSlot - 100
err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
require.NoError(t, err)
// Verify the database was updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, validSlot, storedSlot)
require.Equal(t, groupCount, storedCount)
})
t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set a valid earliest slot (well before minRequiredSlot)
const initialCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Try to set earliest slot beyond the minimum required slot
invalidSlot := minRequiredSlot + 100
// This should fail
err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
require.ErrorContains(t, "cannot increase earliest available slot", err)
require.ErrorContains(t, "exceeds minimum required slot", err)
// Verify the database wasn't updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("no change when slot equals current slot", func(t *testing.T) {
const (
initialSlot = primitives.Slot(100)
initialCount = uint64(5)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with the same slot
err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
}
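Stepping back, the cases above pin down a single rule for earliest-slot updates. A minimal distillation follows; the helper and its signature are hypothetical, and only the error wording mirrors the assertions.

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
)

// validateEarliestSlotUpdate sketches the guard the tests exercise: decreases
// (backfill) and no-ops always pass; increases pass only while the proposed
// slot stays within the MIN_EPOCHS_FOR_BLOCK_REQUESTS window.
func validateEarliestSlotUpdate(stored, proposed, minRequired primitives.Slot) error {
	if proposed <= stored {
		return nil
	}
	if proposed > minRequired {
		return fmt.Errorf("cannot increase earliest available slot: %d exceeds minimum required slot %d", proposed, minRequired)
	}
	return nil
}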
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
ctx := context.Background()

View File

@@ -954,7 +954,9 @@ func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint pr
deletedRoots := make([][32]byte, 0)
oRoot, err := s.OriginCheckpointBlockRoot(ctx)
if err != nil {
if err != nil && !errors.Is(err, ErrNotFoundOriginBlockRoot) {
// If the node did not use checkpoint sync, there will be no origin block root.
// Use zero hash which will never match any actual state root
return err
}
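Reduced to a standalone sketch, the new guard tolerates exactly one expected failure. The sentinel name is taken from the hunk; everything else here is illustrative.

import "errors"

var ErrNotFoundOriginBlockRoot = errors.New("origin checkpoint block root not found") // sentinel, per the hunk

// originRootOrZero swallows the "no origin" case: a node that never
// checkpoint-synced has no origin root, so the zero hash flows through and
// matches no real state root downstream.
func originRootOrZero(lookup func() ([32]byte, error)) ([32]byte, error) {
	root, err := lookup()
	if err != nil && !errors.Is(err, ErrNotFoundOriginBlockRoot) {
		return [32]byte{}, err
	}
	return root, nil
}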

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -1283,3 +1284,50 @@ func BenchmarkState_CheckStateSaveTime_10(b *testing.B) { checkStateSaveTime(b,
func BenchmarkState_CheckStateReadTime_1(b *testing.B) { checkStateReadTime(b, 1) }
func BenchmarkState_CheckStateReadTime_10(b *testing.B) { checkStateReadTime(b, 10) }
func TestStore_CleanUpDirtyStates_NoOriginRoot(t *testing.T) {
// This test verifies that CleanUpDirtyStates does not fail when the origin block root is not set,
// which can happen when starting from genesis or in certain fork scenarios like Fulu.
db := setupDB(t)
genesisState, err := util.NewBeaconState()
require.NoError(t, err)
genesisRoot := [fieldparams.RootLength]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), genesisRoot))
require.NoError(t, db.SaveState(t.Context(), genesisState, genesisRoot))
// Note: We intentionally do NOT call SaveOriginCheckpointBlockRoot here
// to simulate the scenario where origin block root is not set
slotsPerArchivedPoint := primitives.Slot(128)
bRoots := make([][fieldparams.RootLength]byte, 0)
prevRoot := genesisRoot
for i := primitives.Slot(1); i <= slotsPerArchivedPoint; i++ { // skip slot 0
b := util.NewBeaconBlock()
b.Block.Slot = i
b.Block.ParentRoot = prevRoot[:]
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(t.Context(), wsb))
bRoots = append(bRoots, r)
prevRoot = r
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(i))
require.NoError(t, db.SaveState(t.Context(), st, r))
}
require.NoError(t, db.SaveFinalizedCheckpoint(t.Context(), &ethpb.Checkpoint{
Root: bRoots[len(bRoots)-1][:],
Epoch: primitives.Epoch(slotsPerArchivedPoint / params.BeaconConfig().SlotsPerEpoch),
}))
// This should not fail even though origin block root is not set
err = db.CleanUpDirtyStates(t.Context(), slotsPerArchivedPoint)
require.NoError(t, err)
// Verify that cleanup still works correctly
for i, root := range bRoots {
if primitives.Slot(i) >= slotsPerArchivedPoint.SubSlot(slotsPerArchivedPoint.Div(3)) {
require.Equal(t, true, db.HasState(t.Context(), root))
} else {
require.Equal(t, false, db.HasState(t.Context(), root))
}
}
}

View File

@@ -8,7 +8,6 @@ go_library(
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//config/params:go_default_library",
@@ -29,6 +28,7 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots/testing:go_default_library",

View File

@@ -4,7 +4,6 @@ import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -25,17 +24,24 @@ const (
defaultNumBatchesToPrune = 15
)
// custodyUpdater is a tiny interface that the p2p service implements; it is kept here to
// avoid importing the p2p package and creating a cycle.
type custodyUpdater interface {
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
}
type ServiceOption func(*Service)
// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
return func(s *Service) {
defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1
if retentionEpochs < defaultRetentionEpochs {
log.WithField("userEpochs", retentionEpochs).
WithField("minRequired", defaultRetentionEpochs).
Warn("Retention period too low, using minimum required value")
Warn("Retention period too low, ignoring and using minimum required value")
retentionEpochs = defaultRetentionEpochs
}
s.ps = pruneStartSlotFunc(retentionEpochs)
@@ -58,17 +64,23 @@ type Service struct {
slotTicker slots.Ticker
backfillWaiter func() error
initSyncWaiter func() error
custody custodyUpdater
}
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
if custody == nil {
return nil, errors.New("custody updater is required for pruner but was not provided")
}
p := &Service{
ctx: ctx,
db: db,
ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
ps: pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
done: make(chan struct{}),
slotTicker: slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
initSyncWaiter: initSyncWaiter,
backfillWaiter: backfillWaiter,
custody: custody,
}
for _, o := range opts {
@@ -157,17 +169,45 @@ func (p *Service) prune(slot primitives.Slot) error {
return errors.Wrap(err, "failed to prune batches")
}
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"duration": time.Since(tt),
"currentSlot": slot,
"batchSize": defaultPrunableBatchSize,
"numBatches": numBatches,
}).Debug("Successfully pruned chain data")
earliestAvailableSlot := pruneUpto + 1
// Update pruning checkpoint.
p.prunedUpto = pruneUpto
// Update the earliest available slot after pruning
if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrap(err, "update earliest available slot")
}
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"earliestAvailableSlot": earliestAvailableSlot,
"duration": time.Since(tt),
"currentSlot": slot,
"batchSize": defaultPrunableBatchSize,
"numBatches": numBatches,
}).Debug("Successfully pruned chain data")
return nil
}
// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
// and also persists it to the database.
func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
if !params.FuluEnabled() {
return nil
}
// Update the p2p in-memory state
if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
}
// Persist to database to ensure it survives restarts
if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
}
return nil
}
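From a caller's perspective, the new constructor contract reads roughly as below. This is a wiring sketch under assumptions: identifiers other than New and WithRetentionPeriod stand in for whatever the caller has, and the p2p service is accepted only through the custodyUpdater interface, which is what keeps the pruner from importing p2p.

svc, err := pruner.New(
	ctx,
	beaconDB,
	genesisTime,
	initSyncWaiter,
	backfillWaiter,
	p2pService, // a non-nil custodyUpdater is now mandatory; New fails fast otherwise
	pruner.WithRetentionPeriod(40000), // clamped up to MIN_EPOCHS_FOR_BLOCK_REQUESTS+1, with a warning, if too low
)
if err != nil {
	return err
}
go svc.Start()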

View File

@@ -2,6 +2,7 @@ package pruner
import (
"context"
"errors"
"testing"
"time"
@@ -15,6 +16,7 @@ import (
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -62,7 +64,9 @@ func TestPruner_PruningConditions(t *testing.T) {
if !tt.backfillCompleted {
backfillWaiter = waiter
}
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
mockCustody := &mockCustodyUpdater{}
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
require.NoError(t, err)
go p.Start()
@@ -97,12 +101,14 @@ func TestPruner_PruneSuccess(t *testing.T) {
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
mockCustody := &mockCustodyUpdater{}
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
@@ -133,3 +139,242 @@ func TestPruner_PruneSuccess(t *testing.T) {
require.NoError(t, p.Stop())
}
// Mock custody updater for testing
type mockCustodyUpdater struct {
custodyGroupCount uint64
earliestAvailableSlot primitives.Slot
updateCallCount int
}
func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
m.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater
mockCustody := &mockCustodyUpdater{
custodyGroupCount: 4,
earliestAvailableSlot: 0,
}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80 (middle of 3rd epoch)
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Check that UpdateEarliestAvailableSlot was called
assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")
// The earliest available slot should be pruneUpto + 1
// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16
// So earliest available slot should be 16 + 1 = 17
expectedEarliestSlot := primitives.Slot(17)
require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")
// Verify that no error was logged
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel {
t.Errorf("Unexpected error log: %s", entry.Message)
}
}
require.NoError(t, p.Stop())
}
// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
type mockCustodyUpdaterWithUpdateError struct {
updateCallCount int
}
func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
return errors.New("failed to update earliest available slot")
}
func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
// Use minimal config for testing
params.SetupTestConfigCleanup(t)
config := params.MinimalSpecConfig()
params.OverrideBeaconConfig(config)
ctx := t.Context()
beaconDB := dbtest.SetupDB(t)
// Get the minimum required epochs (272 + 1 = 273 for minimal)
minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests + 1)
// Use a slot that's guaranteed to be after the minimum retention period
currentSlot := primitives.Slot(minRequiredEpochs+100) * (params.BeaconConfig().SlotsPerEpoch)
tests := []struct {
name string
userRetentionEpochs primitives.Epoch
expectedPruneSlot primitives.Slot
description string
}{
{
name: "User value below minimum - should use minimum",
userRetentionEpochs: 2, // Way below minimum
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use minimum when user value is too low",
},
{
name: "User value at minimum",
userRetentionEpochs: minRequiredEpochs,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when at minimum",
},
{
name: "User value above minimum",
userRetentionEpochs: minRequiredEpochs + 10,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs+10)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when above minimum",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.WarnLevel)
mockCustody := &mockCustodyUpdater{}
// Create pruner with retention period
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithRetentionPeriod(tt.userRetentionEpochs),
)
require.NoError(t, err)
// Test the pruning calculation
pruneUptoSlot := p.ps(currentSlot)
// Verify the pruning slot
assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)
// Check if warning was logged when value was too low
if tt.userRetentionEpochs < minRequiredEpochs {
assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
}
})
}
}
func TestPruner_UpdateEarliestSlotError(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater that returns an error for UpdateEarliestAvailableSlot
mockCustody := &mockCustodyUpdaterWithUpdateError{}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Should have called UpdateEarliestAvailableSlot
assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")
// Check that error was logged by the prune function
found := false
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
found = true
break
}
}
assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")
require.NoError(t, p.Stop())
}

View File

@@ -4,17 +4,26 @@ import (
"bytes"
"context"
"encoding/binary"
"time"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
)
var errTimeOut = errors.New("operation timed out")
// PruneAttestationsAtEpoch deletes all attestations from the slasher DB with target epoch
// less than or equal to the specified epoch.
func (s *Store) PruneAttestationsAtEpoch(
_ context.Context, maxEpoch primitives.Epoch,
ctx context.Context, maxEpoch primitives.Epoch,
) (numPruned uint, err error) {
// In some cases, pruning may take a very long time and consume significant memory in the
// open Update transaction. Therefore, we impose a 1 minute timeout on this operation.
ctx, cancel := context.WithTimeout(ctx, 1*time.Minute)
defer cancel()
// We can prune everything less than the current epoch - history length.
encodedEndPruneEpoch := make([]byte, 8)
binary.BigEndian.PutUint64(encodedEndPruneEpoch, uint64(maxEpoch))
@@ -48,13 +57,18 @@ func (s *Store) PruneAttestationsAtEpoch(
return
}
if err = s.db.Update(func(tx *bolt.Tx) error {
err = s.db.Update(func(tx *bolt.Tx) error {
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
c := signingRootsBkt.Cursor()
// We begin a pruning iteration starting from the first item in the bucket.
for k, v := c.First(); k != nil; k, v = c.Next() {
if ctx.Err() != nil {
// Exit the routine if the context has expired.
return errTimeOut
}
// We check the epoch from the current key in the database.
// If we have hit an epoch that is greater than the end epoch of the pruning process,
// we then completely exit the process as we are done.
@@ -67,18 +81,27 @@ func (s *Store) PruneAttestationsAtEpoch(
// so it is possible we have a few adjacent objects that have the same slot, such as
// (target_epoch = 3 ++ _) => encode(attestation)
if err := signingRootsBkt.Delete(k); err != nil {
return err
return errors.Wrap(err, "delete attestation signing root")
}
if err := attRecordsBkt.Delete(v); err != nil {
return err
return errors.Wrap(err, "delete attestation record")
}
slasherAttestationsPrunedTotal.Inc()
numPruned++
}
return nil
}); err != nil {
})
if errors.Is(err, errTimeOut) {
log.Warning("Aborting pruning routine")
return
}
if err != nil {
log.WithError(err).Error("Failed to prune attestations")
return
}
return
}
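The pattern here generalizes: bound a bolt Update with a deadline, check the context once per cursor step, and map the deadline to a sentinel the caller downgrades to a soft stop. A minimal self-contained sketch, with a hypothetical bucket:

import (
	"context"
	"errors"
	"time"

	bolt "go.etcd.io/bbolt"
)

var errPruneTimeout = errors.New("operation timed out")

func pruneBucketWithDeadline(ctx context.Context, db *bolt.DB, bucket []byte) (uint, error) {
	ctx, cancel := context.WithTimeout(ctx, 1*time.Minute)
	defer cancel()
	var pruned uint
	err := db.Update(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(bucket)
		if bkt == nil {
			return nil // nothing to prune
		}
		c := bkt.Cursor()
		for k, _ := c.First(); k != nil; k, _ = c.Next() {
			if ctx.Err() != nil {
				return errPruneTimeout // exit promptly once the deadline passes
			}
			if err := c.Delete(); err != nil {
				return err
			}
			pruned++
		}
		return nil
	})
	if errors.Is(err, errPruneTimeout) {
		// Returning an error rolls the transaction back, so a timed-out pass
		// discards its deletions; report a soft stop rather than a failure.
		return 0, nil
	}
	return pruned, err
}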

View File

@@ -142,10 +142,9 @@ var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
defer span.End()
start := time.Now()
defer func() {
defer func(start time.Time) {
newPayloadLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
}(time.Now())
d := time.Now().Add(time.Duration(params.BeaconConfig().ExecutionEngineTimeoutValue) * time.Second)
ctx, cancel := context.WithDeadline(ctx, d)
@@ -183,7 +182,10 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
return nil, errors.New("unknown execution data type")
}
if result.ValidationError != "" {
log.WithError(errors.New(result.ValidationError)).Error("Got a validation error in newPayload")
log.WithField("status", result.Status.String()).
WithField("parentRoot", fmt.Sprintf("%#x", parentBlockRoot)).
WithError(errors.New(result.ValidationError)).
Error("Got a validation error in newPayload")
}
switch result.Status {
case pb.PayloadStatus_INVALID_BLOCK_HASH:
@@ -195,7 +197,7 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
case pb.PayloadStatus_VALID:
return result.LatestValidHash, nil
default:
return nil, ErrUnknownPayloadStatus
return nil, errors.Wrapf(ErrUnknownPayloadStatus, "unknown payload status: %s", result.Status.String())
}
}
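The latency-metric rewrite leans on a Go detail worth spelling out: arguments to a deferred call are evaluated when the defer statement executes, not at function return, so the one-liner captures the start time exactly as the removed start := time.Now() pair did. A runnable illustration:

package main

import (
	"fmt"
	"time"
)

func work() {
	// time.Now() is evaluated here, up front; the deferred body runs at return.
	defer func(start time.Time) {
		fmt.Println("work took", time.Since(start))
	}(time.Now())
	time.Sleep(50 * time.Millisecond)
}

func main() { work() }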

View File

@@ -928,7 +928,7 @@ func TestClient_HTTP(t *testing.T) {
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
require.ErrorIs(t, ErrUnknownPayloadStatus, err)
require.ErrorIs(t, err, ErrUnknownPayloadStatus)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(BlockByNumberMethod, func(t *testing.T) {

View File

@@ -254,6 +254,7 @@ func (s *Store) getCacheUpdatesByPeriod(headBlock interfaces.ReadOnlySignedBeaco
return updatesByPeriod, nil
}
// SetLastFinalityUpdate should be used only for testing.
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
@@ -263,9 +264,11 @@ func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdat
func (s *Store) setLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
go func() {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}()
}
s.lastFinalityUpdate = update
@@ -283,6 +286,7 @@ func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
return s.lastFinalityUpdate
}
// SetLastOptimisticUpdate should be used only for testing.
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
@@ -292,9 +296,11 @@ func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticU
func (s *Store) setLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
if broadcast {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
go func() {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
}()
}
s.lastOptimisticUpdate = update
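The design choice behind the goroutines: these setters run under the store's mutex, and a libp2p broadcast can block on the network, so the I/O moves off the critical section; this is also why the tests below now sleep briefly before asserting BroadcastCalled. A generic sketch of the shape, with all type names illustrative:

import (
	"context"
	"log"
	"sync"
)

// Illustrative stand-ins, not Prysm types.
type update struct{ slot uint64 }

type broadcaster interface {
	Broadcast(ctx context.Context, u update) error
}

type store struct {
	mu   sync.Mutex
	last update
	net  broadcaster
}

// setLast holds the lock only for the in-memory write; the network call runs
// in a goroutine over a by-value snapshot of the update.
func (s *store) setLast(u update) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.last = u
	go func(snapshot update) {
		if err := s.net.Broadcast(context.Background(), snapshot); err != nil {
			log.Printf("broadcast failed: %v", err)
		}
	}(u)
}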

View File

@@ -3,6 +3,7 @@ package light_client
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -74,6 +75,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))
timeForGoroutinesToFinish := 20 * time.Microsecond
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
@@ -85,6 +87,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -99,6 +102,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -113,6 +117,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -127,6 +132,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
@@ -140,6 +146,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -154,6 +161,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -167,6 +175,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}

View File

@@ -58,7 +58,6 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/prometheus:go_default_library",
"//monitoring/tracing:go_default_library",

View File

@@ -2,11 +2,13 @@ package node
import (
"context"
"os"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
@@ -36,6 +38,22 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearGenesis(dir string) error {
if !c.shouldProceed() {
return nil
}
gfile, err := genesis.FindStateFile(dir)
if err != nil {
// No genesis state file found; nothing to clear.
return nil
}
if err := os.Remove(gfile.FilePath()); err != nil {
return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
}
return nil
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil

Some files were not shown because too many files have changed in this diff.