Compare commits

...

41 Commits

Author SHA1 Message Date
james-prysm
253f91930a changelog v6.1.3 (#15901)
* updating changelog

* adding changelog

* kasey's comment
2025-10-21 16:46:44 +00:00
terence
7c3e45637f Fix proposer to use advanced state for sync committee position calculation (#15905)
* Sync committee use correct state to calculate position

* Unit test
2025-10-21 15:29:46 +00:00
Manu NALEPA
96429c5089 updateCustodyInfoInDB: Use NumberOfCustodyGroups instead of NumberOfColumns. (#15908)
* `updateCustodyInfoInDB`: Add tests.

* `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`.

* Add changelog.

* Fix Potuz's comment.
2025-10-21 14:37:04 +00:00
satushh
d613f3a262 Update Earliest available slot when pruning (#15694)
* Update Earliest available slot when pruning

* bazel run //:gazelle -- fix

* custodyUpdater interface to avoid import cycle

* bazel run //:gazelle -- fix

* simplify test

* separation of concerns

* debug log for updating eas

* UpdateEarliestAvailableSlot function in CustodyManager

* fix test

* UpdateEarliestAvailableSlot function for FakeP2P

* lint

* UpdateEarliestAvailableSlot instead of UpdateCustodyInfo + check for Fulu

* fix test and lint

* bugfix: enforce minimum retention period in pruner

* remove MinEpochsForBlockRequests function and use from config

* remove modifying earliest_available_slot after data column pruning

* correct earliestAvailableSlot validation: allow backfill decrease but prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS

* lint

* bazel run //:gazelle -- fix

* lint and remove unwanted debug logs

* Return a wrapped error, and let the caller decide what to do

* fix tests because updateEarliestSlot returns error now

* avoid re-doing computation in the test function

* lint and correct changelog

* custody updater should be a mandatory part of the pruner service

* ensure never increase eas if we are in the block requests window

* slot level granularity edge case

* update the value stored in the DB

* log tidy up

* use errNoCustodyInfo

* allow earliestAvailableSlot edit when custodyGroupCount doesn't change

* undo the minimal config change

* add context to CustodyGroupCount after merging from develop

* cosmetic change

* shift responsibility from caller to callee, protection for updateEarliestSlot. UpdateEarliestAvailableSlot returns cgc

* allow increase in earliestAvailableSlot only when custodyGroupCount also increases

* remove CustodyGroupCount as it is no longer needed as UpdateEarliestAvailableSlot returns cgc now

* proper place for log and name refactor

* test for Nil custody info

* allow decreasing earliest slot in DB (just like in memory)

* invert if statement to make more readable

* UpdateEarliestAvailableSlot for DB (equivalent of p2p's UpdateEarliestAvailableSlot) & undo changes made to UpdateCustodyInfo

* in UpdateEarliestAvailableSlot, no need to return unused values

* no need to log stored group count

* log.WithField instead of log.WithFields
2025-10-21 13:54:52 +00:00
MozirDmitriy
5751dbf134 kv: write recovered state summaries to stateSummaryBucket (#15896)
* kv: write recovered state summaries to stateSummaryBucket

* Create MozirDmitriy_fix_kv-recover-state-summurt-bucket.md

* add a test
2025-10-21 11:21:10 +00:00
Potuz
426fbcc3b0 Add state diff serialization (#15250)
* Add serialization code for state diffs

Adds serialization code for state diffs.
Adds code to create and apply state diffs
Adds fuzz tests and benchmarks for serialization/deserialization

Co-authored-by: Claude <noreply@anthropic.com>

* Add Fulu support

* Review #1

* gazelle

* Fix some fuzzers

* Failing cases from the fuzzers in consensus-types/hdiff

* Fix more fuzz tests

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* add comparison tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Use ConvertToElectra in UpgradeToElectra

* Add comments on constants

* Fix readEth1Data

* remove colons from error messages

* Add design doc

* Apply suggestions from code review

Bast

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-10-20 21:52:32 +00:00
Manu NALEPA
a3baf98b05 VerifyDataColumnsSidecarKZGProofs: Check if sizes match. (#15892) 2025-10-20 17:06:13 +00:00
Jun Song
5a897dfa6b SSZ-QL: Add endpoints (BeaconState/BeaconBlock) (#15888)
* Move ssz_query objects into testing folder (ensuring test objects only used in test environment)

* Add containers for response

* Export sszInfo

* Add QueryBeaconState/Block

* Add comments and few refactor

* Fix merge conflict issues

* Return 500 when calculate offset fails

* Add test for QueryBeaconState

* Add test for QueryBeaconBlock

* Changelog :)

* Rename `QuerySSZRequest` to `SSZQueryRequest`

* Fix middleware hooks for RPC to accept JSON from client and return SSZ

* Convert to `SSZObject` directly from proto

* Move marshalling/calculating hash tree root part after `CalculateOffsetAndLength`

* Make nogo happy

* Add informing comment for using proto unsafe conversion

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-20 16:24:06 +00:00
Muzry
90190883bc Fixed metadata extraction on Windows by correctly splitting file paths (#15899)
* Fixed metadata extraction on Windows by correctly splitting file paths

* `TestExtractFileMetadata`: Refactor a bit.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-10-20 14:17:32 +00:00
terence
64ec665890 Fix sync committee subscription to use subnet indices instead of committee indices (#15885)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-17 19:03:53 +00:00
kasey
fdb06ea461 clear genesis state file when --(force-)clear-db is specified (#15883)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-17 14:03:15 +00:00
Manu NALEPA
0486631d73 Improve error message when the byte count read from disk when reading a data column sidecar is lower than expected. (Mostly, because the file is truncated.) (#15881)
* `VerifiedRODataColumnError`: Don't reuse Blob error.

* `VerifiedRODataColumnFromDisk`: Use a specific error when the count of read bytes is lower than expected.

* Add changelog.
2025-10-16 21:49:11 +00:00
Manu NALEPA
47764696ce randomPeer: Return if the context is cancelled when waiting for peers. (#15876)
* `randomPeer`: Return if the context is cancelled when waiting for peers.

* `randomPeer`: Refactor to reduce indentation.
2025-10-16 21:13:11 +00:00
Manu NALEPA
b2d350b988 Correctly advertise (in ENR and metadata) attestation subnets when using --subscribe-all-subnets. (#15880) 2025-10-16 21:12:00 +00:00
kasey
41e7607092 Decrease att batch deadline to 5ms for faster net prop (#15882)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-16 17:30:59 +00:00
Jun Song
cd429dc253 SSZ-QL: Access n-th element in List/Vector. (#15767)
* Add basic parsing feature for accessing by index

* Add more tests for 2d byte vector

* Add List case for access indexing

* Handle 2D bytes List example

* Fix misleading cases for CalculateOffsetAndLength

* Use elementSizes[index] if it is the last path element

* Add variable_container_list field for mocking attester_slashings in BeaconBlockBody

* Remove redundant protobuf message

* Better documentation

* Changelog

* Fix `expectedSize` of `VariableTestContainer`: as we added `variable_container_list` here

* Apply reviews from Radek
2025-10-15 16:11:12 +00:00
phrwlk
5ced1125f2 fix: reject out-of-range attestation committee index (#15855)
* reject committee index >= committees_per_slot in unaggregated attestation validation

* Create phrwlk_fix-attestation-committee-index-bound.md

* add a unit test

* fix test

* fixing test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-15 16:02:08 +00:00
Potuz
f67ca6ae5e Fix epoch transition on head event (#15871)
h/t to the NuConstruct team for reporting this. The event feed
incorrectly sends epoch transition flag on head events when the first
slot of the epoch is missing (or reorgs across epoch transition).

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-15 15:13:49 +00:00
Manu NALEPA
9742333f68 WithDataColumnRetentionEpochs: Use dataColumnRetentionEpoch instead of blobColumnRetentionEpoch. (#15872) 2025-10-15 14:44:49 +00:00
Manu NALEPA
c811fadf33 VerifyDataColumnSidecar: Check that there are not too many commitments. (#15859)
* `VerifyDataColumnSidecar`: Check that there are not too many commitments.

* `TestVerifyDataColumnSidecar`: Refactor using test cases.

* Add changelog.
2025-10-15 12:18:04 +00:00
Manu NALEPA
55b9448d41 dataColumnSidecarsByRangeRPCHandler: Gracefully close the stream if no data to return. (#15866)
* `TestDataColumnSidecarsByRangeRPCHandler`: Remove commented code.

* Remove double import

* `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return.

* Tests: Change `require` to `assert` in goroutines in tests.

https://pkg.go.dev/github.com/stretchr/testify/require#hdr-Assertions

* Add changelog.
2025-10-15 12:16:05 +00:00
Manu NALEPA
10f8d8c26e Fix /eth/v1/beacon/blob_sidecars/ beacon API if the fulu fork epoch is set to the far future epoch. (#15867)
* Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the fulu fork epoch is set to the far future epoch.

* Fix Terence's comment.

* adding a test

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-14 21:38:12 +00:00
Jun Song
4eab41ea4c SSZ-QL: use fastssz-generated SizeSSZ method & clarify Size method (#15864)
* Add SizeSSZ as a member of SSZObject

* Temporarily rename dereferencePointer function

* Fix analyzeType: use reflect.Value for analyzing

* Fix PopulateVariableLengthInfo: change function signature & reset pointer

* Remove Container arm for Size function as it'll be handled in the previous branch

* Remove OffsetBytes function in listInfo

* Refactor and document codes

* Remove misleading "fixedSize" concept & Add Uint8...64 SSZTypes

* Add size testing

* Move TestSSZObject_Batch and rename it as TestHashTreeRoot

* Changelog :)

* Rename endOffset to fixedOffset

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-14 17:33:52 +00:00
Radosław Kapka
683608e34a Improve returning individual message errors from Beacon API (#15835)
* Improve returning individual message errors from Beacon API

* changelog <3

* fix test

* add debug logs

* batch broadcast errors

* use logrus fields

* capitalize log messages
2025-10-14 15:22:00 +00:00
Manu NALEPA
fbbf2a1404 HasAtLeastOneIndex: Check the index is not too high. (#15865) 2025-10-14 14:39:38 +00:00
Potuz
82f556c50f Remove redundant check (#15844)
* Remove redundant check

* changelog

* fix gazelle
2025-10-14 12:39:19 +00:00
Radosław Kapka
c88aa77ac1 Display non-JSON error messages (#15860)
* Display non-JSON error messages

* changelog <3
2025-10-14 12:08:21 +00:00
fernantho
0568bec935 SSZ-QL: use FastSSZ-generated HashTreeRoot through SSZObject in sszInfo (#15805)
* stored CL object to enable the usage of Fastssz's HashTreeRoot(). added basic test

* refactorization - using interfaces instead of storing original object

* added tests covering ssz custom types

* renamed hash_tree_root to ssz_interface as it contains MarshalSSZ and UnmarshalSSZ functions

* run gazelle

* renamed test and improved comments

* refactored test and extend to marshalSSZ and UnmarshalSSZ

* added changelog

* updated comment

* Changed SSZIface name to SSZObject. Removed MarshalSSZ and UnmarshalSSZ function signatures from interface as they are not used still. Refactored tests.

* renamed file ssz_interface.go to ssz_object.go. merge test from ssz_interface_test.go into query_test.go.
reordered source SSZObject field from sszInfo struct

* limited the SSZObject interface to the HashTreeRoot() function, the only one needed so far

* run gazelle :)

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-13 21:39:15 +00:00
Potuz
e463bcd1e1 Mark block as invalid in gossip if it fails signature check (#15847)
* Mark block as invalid in gossip if it fails signature check

* Add tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-13 20:29:27 +00:00
terence
5f8eb69201 Add proper handling for submit blind block 502 error (#15848)
* Add proper handling for builder relay 502 BadGateway errors

* James feedback

* Change wording
2025-10-13 18:36:06 +00:00
Marco Munizaga
4b98451649 fix allocation size of proofs in ComputeCellsAndProofsFromStructured (#15809)
* fix allocation size of proofs in ComputeCellsAndProofsFromStructured

the preallocated slice for KZG Proofs was 48x bigger than it needed to
be.

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-13 17:16:02 +00:00
satushh
0aa248e663 Fix ignored gossip attestation validation for early arriving attestations (#15840)
* debug log

* undo debug logs

* fix: return if block not available

* changelog
2025-10-12 19:19:45 +00:00
Preston Van Loon
6973cd2c5f Changelog v6.1.2 (#15843)
* Changelog for v6.0.5

* Changelog for v6.1.0

* Changelog for v6.1.2

* Removing old fragments
2025-10-10 21:04:38 +00:00
Potuz
4e47905884 Do not mark blocks as invalid unnecessarily (#15846) 2025-10-10 20:55:29 +00:00
Manu NALEPA
a94ea1e5f5 Add grandine in known agents (#15829)
* `knownAgentVersions`: Sort.

* `knownAgentVersions`: Add Grandine.

* Add changelog.
2025-10-09 19:30:26 +00:00
james-prysm
c0ad87df4b fixing web3signer for e2e (#15832)
* fixing web3signer for e2e

* fixing tests

* gaz

* reverting fix

* extra space
2025-10-09 19:21:56 +00:00
james-prysm
515590e7fe making block event only send on certain success (#15814)
* making block event only send on certain success

* potuz's comment

* potuz comment

* test
2025-10-09 16:42:48 +00:00
terence
83a171b439 Process pending atts after pending blocks clear (#15824) 2025-10-09 14:27:03 +00:00
Manu NALEPA
4946b007ab Data column sidecars fetch: Adjust log levels. (#15820) 2025-10-09 10:34:04 +00:00
Radosław Kapka
3f10439de1 Do not verify block data when calculating rewards (#15819)
* Do not verify block data when calculating rewards

* remove `Get` from function names

* changelog <3

* do not verify sync committee sig in handler

* Revert "remove `Get` from function names"

This reverts commit 770a89d990.

* typo fix

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-08 21:40:48 +00:00
james-prysm
5b20352ac6 cleaning up processAggregate (#15823)
* cleaning up some code

* kasey feedback

* further simplifying

* kasey's suggestion
2025-10-08 18:34:32 +00:00
428 changed files with 12060 additions and 2324 deletions


@@ -4,6 +4,345 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20
This release has several important beacon API and p2p fixes.
### Added
- Add Grandine to P2P known agents. (Useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling roots calculation for generated types while avoiding duplicate logic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)
### Changed
- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- Updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle submit blind block returning 502 errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve returning individual message errors from Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify `Size` method with more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
### Fixed
- Use service context and continue on slasher attestation errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event is no longer sent on certain block processing failures; it now fires only when the block is non-canonical, when the block is canonical but `getFCUArgs` fails, or on full success. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e issues caused by a regression in old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid from ErrNotDescendantOfFinalized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejecting attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion (a sketch of the new behavior follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the Fulu fork epoch is set to the far future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if there is no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- Reject committee index >= committees_per_slot in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)
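
To make the `ValidationIgnore` fix above concrete, here is a minimal sketch of the new behavior. `Attestation`, `blockStore`, and the function name are illustrative assumptions, not Prysm's actual types; only the `pubsub` validation results come from the real go-libp2p-pubsub API.

```go
package sketch

import (
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// Attestation and blockStore are illustrative stand-ins, not Prysm's types.
type Attestation struct{ BeaconBlockRoot [32]byte }

type blockStore interface {
	HasBlock(root [32]byte) bool
	QueuePending(att *Attestation)
}

// validateEarly mirrors the fix: an attestation whose block is not yet known
// is queued and ignored rather than rejected, so the sender is not downscored.
func validateEarly(s blockStore, att *Attestation) pubsub.ValidationResult {
	if !s.HasBlock(att.BeaconBlockRoot) {
		s.QueuePending(att) // replayed once the block arrives
		return pubsub.ValidationIgnore
	}
	// full forkchoice and signature validation continues here for known blocks
	return pubsub.ValidationAccept
}
```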
## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10
This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.
### Added
- Added a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15746)
- Added expected delay before broadcasting light client p2p messages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15776)
### Changed
- Replaced reflect.TypeOf with reflect.TypeFor. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15627)
- Bazel builds with `--config=release` now properly apply `--strip=always` to strip debug symbols from the release assets. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15774)
- Add sources for compute_fork_digest to specrefs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15699)
- Aggregate logs when broadcasting data column sidecars (one per root instead of one per sidecar). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15748)
- `c-kzg-4844`: Update from `v2.1.1` to `v2.1.5`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15708)
- Process pending attestations as soon as the block arrives. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15791)
- Compare received LC messages over gossipsub with locally computed ones before forwarding. Also no longer save updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15783)
- Optimize pending attestation processing by adding batching. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15801)
### Removed
- Removed unused configs and hid Prysm-specific configs from the `/eth/v1/config/spec` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15797)
### Fixed
- SSZ-QL: Support nested `List` type (e.g., `ExecutionPayload.Transactions`). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15725)
- Fixed "Unsupported config field kind; value forwarded verbatim" errors for type string. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15773)
- Fix the `/eth/v1/config/spec` endpoint to properly skip omitted values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15777)
- Fix ProduceSyncCommitteeContribution not returning error when committee index is out of range. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15770)
- Improved getDuties v2 by replacing the expensive `helpers.PrecomputeCommittees()` with `CommitteeAssignments`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15784)
- Avoid unnecessary calls to `ExitInformation()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15764)
- `inclusionProofKey`: Include the commitments in the key. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15795)
- Do not reject peers if they have a mismatched version|digest when the next fork epoch is FAR_FUTURE_EPOCH. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15798)
- Don't include entries in the fork schedule if their epoch is set to far future epoch. Avoids reporting next_fork_version == <unscheduled fork>. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15799)
- Wait for custody info to be initialized before querying it. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15804)
- Fixes the `level=error msg="Could not clean up dirty states" error="OriginBlockRoot: not found in db" prefix=state-gen` error when starting in Kurtosis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15808)
- Correctly clear disconnected peers from `connected_libp2p_peers` and `connected_libp2p_peers_average_scores`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15807)
- `buildStatusFromStream`: Respond `statusV2` only if Fulu is enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15818)
- Send our real earliest available slot when sending a Status request post Fulu instead of `0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15818)
- Switch to built-in min/max. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15817)
- `findPeersWithSubnets`: If the filter function returns an error for a given peer, log the error and skip the peer instead of aborting the whole function (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15815)
- `computeIndicesByRootByPeer`: If the loop returns an error for a given peer, log the error and skip the peer instead of aborting the whole function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15815)
- Fixed issue #15738, where separate goroutines assumed sole responsibility for topic registration. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15779)
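
The two PR 15815 fixes above share one pattern: a failure for one peer no longer aborts the entire operation. A hedged sketch under assumed names; `filterPeers` and its signature are not Prysm's actual API.

```go
package sketch

import log "github.com/sirupsen/logrus"

// filterPeers applies filter to each peer; an error for one peer is logged
// and that peer skipped, instead of aborting the whole search.
func filterPeers(peers []string, filter func(peer string) (bool, error)) []string {
	selected := make([]string, 0, len(peers))
	for _, p := range peers {
		ok, err := filter(p)
		if err != nil {
			log.WithError(err).WithField("peer", p).Error("Could not filter peer")
			continue // skip only the failing peer
		}
		if ok {
			selected = append(selected, p)
		}
	}
	return selected
}
```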
## [v6.1.0](https://github.com/prysmaticlabs/prysm/compare/v6.0.5...v6.1.0) and [v6.1.1](https://github.com/prysmaticlabs/prysm/compare/v6.1.0...v6.1.1) - 2025-09-26
This release has support for Fusaka testnets as well as many mainnet improvements. Testnet operators are required to update prior to the testnet fork date. See [PR #15721](https://github.com/OffchainLabs/prysm/pull/15721).
Mainnet operators are encouraged to update per their regular update cadence.
Note: This release was re-issued as v6.1.1 to distribute release assets without debug symbols. See issue [#15760](https://github.com/OffchainLabs/prysm/issues/15760).
#### Noteworthy improvements, changes and bugfixes:
- The `--disable-experimental-state` beacon-node flag has been removed, marking the full graduation of the [Copy-on-write design](https://hackmd.io/zlTJ6Qe_RiueT3y2R77BvA) for BeaconState fields, which reduces the memory overhead of keeping multiple BeaconStates in RAM for block processing. Congrats @rkapka!
- The behavior set by the `--attest_timely` flag is now on by default, with the flag itself deprecated.
- GetDutiesV2 introduced, lowering duty request latency and beacon-node load. Multiple other improvements and bugfixes have been made to harden the validator run loop.
- New validator flag `--max-health-checks` configures a validator to switch to a fallback beacon node after the given number of health check failures.
- Improvements to rest-mode validator, defaulting to SSZ where available and adding SSZ support to more Beacon API endpoints.
- Beacon API now honors the gzip content-encoding header.
- Log timestamps now include milliseconds.
- Full fusaka support for testnets!
**Special thanks to external contributors!**: @Alleysira, @KaloyanTanev, @rose2221
[1] To override this limit, use the validator flag `--suggested-gas-limit` or set the `builder.gas_limit` setting in your [proposer settings file](https://prysm.offchainlabs.com/docs/configure-prysm/fee-recipient/#advanced-configure-mev-builder-and-gas-limit).
### Added
- PeerDAS: Add `CustodyInfo` in `BeaconNode`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15378)
- GetDutiesV2 gRPC function, removes committee list from duties, replaced with committee length, validator committee index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15273)
- Add SSZ support for two attestation APIs: `/eth/v1/validator/attestation_data` and. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15377)
- Added feature flag for validator client to use get duties v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15380)
- PeerDAS: Implement DAS. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15367)
- `verifyBlobCommitmentCount`: Print max allowed blob count in error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15386)
- Data column support for the beacon API event endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15387)
- Implement EIP-7917: Stable proposer lookahead. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15129)
- Implement `dataColumnSidecarByRootRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15405)
- New ssz-only flag for validator client to enable calling rest apis in SSZ, starting with get block endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15390)
- Implement `dataColumnSidecarsByRangeRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15421)
- Add SSZ support for `submitPoolAttestationsV2` beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15422)
- New `StatusV2` proto message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Implement `SendDataColumnSidecarsByRangeRequest`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15430)
- Implement `SendDataColumnSidecarsByRootRequest`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15430)
- Implement beacon API blob sidecar endpoint for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15436)
- PeerDAS: Implement the new Fulu Metadata. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15440)
- PeerDAS: Implement reconstruction. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15454)
- Implement engine method `GetBlobsV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15469)
- Implement execution `ReconstructDataColumnSidecars`, which reconstructs data column sidecars from data fetched from the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15469)
- new `--batch-verifier-limit` flag to configure max number of signatures to batch verify on gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15467)
- `disable-attest-timely` flag to disable attest timely. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15410)
- Added `max-health-checks` flag that sets the maximum number of times the validator tries to check the health of the beacon node before timing out; 0 or a negative number means indefinite (the default is 0). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Add method `VersionToForkEpochMap()` to the `BeaconChainConfig` in the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15482)
- Add log capitalization analyzer and apply changes across codebase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15452)
- Slot aware cache for seen data column gossip p2p to reduce memory usages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15477)
- **Gzip compression for the Beacon API**. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14982)
- Implement data column sidecars reconstruction with data retrieved from the execution client when receiving a block via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15483)
- Add support for parsing and handling `ExecutionPayloadAndBlobsBundleV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15503)
- Added new PRYSM_API_OVERRIDE_ACCEPT environment variable to override the ssz accept header as a replacement for the flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15433)
- Implements the `/eth/v1/beacon/states/{state_id}/proposer_lookahead` beacon api endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15525)
- Added new metadata fields (attnets,syncnets,custody_group_count) to `/eth/v1/node/identity`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15506)
- Add BLOB_SCHEDULE field to `/eth/v1/config/spec` endpoint response to expose blob scheduling configuration for networks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15485)
- Add timing metric `publish_block_v2_duration_milliseconds` to measure processing duration of the `PublishBlockV2` beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15539)
- Add Fulu case for `saveStatesEfficientInternal`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15553)
- Support for fusaka `nfd` enr field, and changes to the semantics of the eth2 field. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15501)
- Implement post-Fulu MEV-boost protocol changes where relays only return status codes for blinded block submissions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15486)
- Added fulu block support to StreamBlocksAltair. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15583)
- All outbound HTTP requests from the validator client now include a custom `User-Agent` header in the format `Prysm/<name>/<version>`. This enhances observability and enables upstream systems to correctly identify Prysm validator clients by their name and version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15574)
- Fixes [#15435](https://github.com/OffchainLabs/prysm/issues/15435). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15574)
- Data columns syncing for Fusaka. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15564)
- Added specification references which map spec to implementation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15592)
- Warm data columns storage cache at start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15629)
- Add `--data-column-path` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15629)
- Initialize package for SSZ Query Language. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15588)
- In `FetchDataColumnSidecars`, after retrieving sidecars from peers, if some sidecars are still missing for a given root and reconstruction is possible (by combining sidecars already retrieved from peers with sidecars in storage), reconstruct the missing sidecars instead of fetching them from peers (a sketch follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15593)
- Fulu block proposal changes for beacon api and gRPC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15628)
- Retry to fetch origin data column sidecars when starting from a checkpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15634)
- Aggregate and pack sync committee messages into blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15608)
- Support `List` type for SSZ-QL. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15637)
- Configured the beacon node to seek peers when we have validator custody requirements. If one or more validators are connected to the beacon node, then the beacon node should seek a diverse set of peers such that broadcasting to all data column subnets for a block proposal is more efficient. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15654)
- SSZ-QL: Add element information for `Vector` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15668)
- SSZ-QL: Support multi-dimensional tag parsing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15668)
- Added more metadata for debug logs when initial sync requests fail for "invalid data returned from peer" errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15674)
- Adding Fulu types for web3signer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15498)
- Added erigon/caplin to known p2p agent strings. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15678)
- Add Fulu fork transition tests for mainnet and minimal configurations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15666)
- Fulu proposer lookahead epoch processing tests for mainnet and minimal configurations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15667)
- Populate sszInfo of variable-length fields in AnalyzeObjects. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15676)
- KZG proof batch verification for data column gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15617)
- Added flag `--p2p-colocation-whitelist` to accept CIDRs which will bypass the p2p colocation restrictions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15685)
- Fulu spec tests coverage for covering the general package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15682)
- Implemented syncing in a disjoint network with respect to data column sidecars subscribed by peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Add retry logic when GetBlobsV2 is called. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15520)
- Call GetBlobsV2 as soon as we receive the first data column sidecar or block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15520)
- Added the new post-Fulu `/eth/v1/beacon/blobs/{block_id}` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15610)
- SSZ-QL: Handle `Bitlist` and `Bitvector` types. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15704)
- Adding `/eth/v1/debug/beacon/data_column_sidecars/{block_id}` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15701)
- Support Fulu genesis block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15652)
- Update spectests to 1.6.0-beta.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15741)
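
The `FetchDataColumnSidecars` entry above describes a retrieve-then-reconstruct flow; the sketch below restates it under assumed names. The half-of-columns threshold reflects the PeerDAS erasure-coding property that any 50% of the columns is enough to rebuild the rest; nothing else here is taken from Prysm's code.

```go
package sketch

// Sidecar stands in for a data column sidecar; only the flow matters here.
type Sidecar struct{ ColumnIndex uint64 }

// fetchOrReconstruct asks peers for missing columns of one block root; if some
// are still missing but at least half of all columns are available overall
// (peer responses plus storage), it reconstructs the rest instead of
// re-requesting them. Deduplication of overlapping columns is omitted.
func fetchOrReconstruct(
	root [32]byte,
	missing []uint64,
	totalColumns int,
	fromPeers func(root [32]byte, missing []uint64) []Sidecar,
	fromStorage func(root [32]byte) []Sidecar,
	reconstruct func(have []Sidecar) ([]Sidecar, error),
) ([]Sidecar, error) {
	fetched := fromPeers(root, missing)
	if len(fetched) == len(missing) {
		return fetched, nil // peers supplied everything that was missing
	}
	available := append(fetched, fromStorage(root)...)
	if 2*len(available) >= totalColumns { // >= 50% of columns: reconstruction is possible
		return reconstruct(available)
	}
	return fetched, nil // not enough columns yet; the caller may retry later
}
```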
### Changed
- `parseIndices`: Return `[]int` instead of `[]uint64`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15386)
- Reclaim memory manually in some tests that fuzz the beacon state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15395)
- When the REST API is enabled, the get block API defaults to requesting and receiving SSZ instead of JSON, with JSON as the fallback. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15390)
- Remove "invalid" from logs for incoming blob sidecar that is missing parent or out of range slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15428)
- In `TopicFromMessage`: Do not assume any more that all Fulu-specific topics are V3 only. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Put the initiation of LC Store behind the `enable-light-client` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15464)
- Default batch signature verification limit increased from 50 to 1000. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15467)
- Increase mainnet DefaultBuilderGasLimit from 36M to 45M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15455)
- Attest timely is now default. `attest-timely` flag is now deprecated. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15410)
- Move the data column reconstruction log to a more accurate place in the code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15475)
- Makes the multivalue slice permanent in the state and removes old paths. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15414)
- Previously, we optimistically believed the beacon node was healthy and tried to get chain start, but now we do a health check at the start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Optimize proposer inclusion proof calculation by pre-caching subtries. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15473)
- Move setter/getter functions for LC Bootstrap into LcStore for a unified interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15476)
- Changed `enable-duties-v2` to `disable-duties-v2` to default to using duties v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15445)
- Changed `uint64` genesis time to use `time.Time`. Also did some refactoring and cleanup that was enabled by these changes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15419)
- Add milliseconds to log timestamps. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15496)
- Move setter/getter functions for LC Updates into LcStore for a unified interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15488)
- Change LC Bootstrap logic to only save bootstraps on finalized checkpoints instead of every block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15497)
- Update links to consensus-specs to point to `master` branch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15523)
- Changed the discv5 database from in-memory to persistent storage so that local node information (including the key) is retained and the ENR sequence number remains deterministic across restarts. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15519)
- Fix some nits associated with data column sidecar verification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15521)
- Include state root in StateNotFoundError for better debugging of consensus validation failures. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15533)
- When shutting down the sync service, we now send p2p goodbye messages in parallel to maximize the chance of propagating goodbyes to all peers before an unsafe shutdown. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15542)
- Do not compare liveness response with LH in e2e Beacon API evaluator. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15556)
- Moved the broadcast and event notifier logic for saving LC updates to the store function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Fixed the issue with broadcasting more than twice per LC Finality update, and the if-case bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Separated the finality update validation rules for saving and broadcasting. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Update validator custody to the latest specification, including the new status message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15532)
- Beacon API: optimize validator lookup for large batch request sizes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15558)
- Check pending block is in forkchoice before importing pending attestation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15547)
- Redesign the pending attestation queue. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15024)
- Replaced hardcoded `grpc-gateway-port` with `flags.HTTPServerPort.Name` in `testing/endtoend/components/validator.go`, resolving an inline TODO for improved flag consistency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15236)
- Refactor `htrutil.go` by removing redundant code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15453)
- Moved sync unaggregated attestation cache key generation outside of the lock path. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15572)
- Move aggregated attestation cache key generation outside of critical locks to improve performance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15579)
- Renamed various variables/functions to be more clear. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15529)
- Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15590)
- Reject incoming connections when the fork schedule of the connecting peer (parsed from their ENR) has a matching next_fork_epoch, but mismatched next_fork_version or nfd (next fork digest). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15604)
- Update gohashtree to v0.0.5-beta. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15619)
- Updated consensus spec from v1.6.0-alpha.4 to v1.6.0-alpha.5 with adjusted minimal config parameters. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15621)
- Changed old atomic functions to new atomic.Int for safer and clearer code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15625)
- Start from justified checkpoint by default. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15636)
- Updated consensus spec from v1.6.0-alpha.5 to v1.6.0-alpha.6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15658)
- Updated outdated documentation links for Web3Signer and Why Bazel. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15631)
- Changed validatorpb.SignRequest_AggregateAttestationAndProof signing type to use AggregateAttestationAndProofV2 on web3signer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15498)
- Pre-calculate exit epoch, churn and active balance before processing slashings to reduce CPU load. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14990)
- Switched the default validator client REST call for submit block from JSON to SSZ; JSON fallback will be attempted. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15645)
- Deprecated the /prysm/v1/beacon/blobs endpoint and added an error for the post-Fulu fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15643)
- Upgraded gossipsub to v0.14.2 and libp2p to v0.39.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15677)
- Prysm will now downscore peers that return invalid block_by_range responses. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15686)
- Filtering peers for data column subnets: Added a one-epoch slack to the peers head slot view. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fetching data column sidecars: If not all requested sidecars are available for a given root, return the successfully retrieved ones along with a map indicating which could not be fetched. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fetching origin data column sidecars: If only some sidecars are fetched, save the retrieved ones and retry fetching the missing ones on the next attempt. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Renamed the `--enable-experimental-backfill` flag to `--enable-backfill` to signal that it is more mature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15690)
- Restrict best LC update collection to canonical blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15585)
- PeerDAS: Wait for a random delay, then reconstruct data column sidecars and immediately reseed instead of immediately reconstructing, waiting and then reseeding. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15705)
- Clarified misleading log messages in beacon-chain/rpc/service gRPC module. [[PR]](https://github.com/prysmaticlabs/prysm/pull/13063)
- Broadcast the block and then the sidecars, instead of broadcasting the block and sidecars concurrently (a sketch follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15720)
- Broadcast and receive sidecars concurrently instead of sequentially. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15720)
- Changed blst dependency from `http_archive` to `go_repository` so that gazelle can keep it in sync with go.mod. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15709)
- Updated go to v1.25.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Updated rules_go to v0.57.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Updated protobuf to 28.3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15721)
- Improve logging of data column sidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15728)
- Updated go.mod to v1.25.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15740)
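
For the two broadcast entries above, here is a minimal illustration of the ordering under assumed names: the block goes out first (so peers can validate sidecars against it), then all sidecars are fanned out concurrently via `errgroup`. This is a sketch, not Prysm's actual broadcast code.

```go
package sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// broadcaster is an assumed interface; Prysm's real API differs.
type broadcaster interface {
	BroadcastBlock(ctx context.Context, blk any) error
	BroadcastSidecar(ctx context.Context, sc any) error
}

// broadcastBlockThenSidecars sends the block first, then the sidecars
// concurrently. Per-iteration loop variables (Go 1.22+) make the closure safe.
func broadcastBlockThenSidecars(ctx context.Context, b broadcaster, blk any, sidecars []any) error {
	if err := b.BroadcastBlock(ctx, blk); err != nil {
		return err
	}
	g, ctx := errgroup.WithContext(ctx)
	for _, sc := range sidecars {
		g.Go(func() error { return b.BroadcastSidecar(ctx, sc) })
	}
	return g.Wait()
}
```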
### Deprecated
- Deprecated `p2p-metadata` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15554)
### Removed
- Removed //tools/eth1voting tool. This is no longer needed as the beacon chain no longer uses eth1data voting since Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15415)
- Remove deposit count from sync new block log. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15420)
- Unused `DataColumnIdentifier` proto message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Validator client will no longer need to call the canonical head api. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15480)
- Partially reverted PR #15390 by removing the `ssz-only` debug flag until there is a real use case for it. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15433)
### Fixed
- Added regression test for [PR 15369](https://github.com/OffchainLabs/prysm/pull/15369). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15379)
- Added missing `meta` field to the response of the endpoint `/eth/v1/node/peers` to align with the Beacon API spec (#15370). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15371)
- Fix blob metric name for peer count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15412)
- Non-deterministic output order of `dataColumnSidecarByRootRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15441)
- Fixed the versioning bug for light client data types in the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15400)
- `--chain-config-file`: No longer use mainnet boot nodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15460)
- Fix panic on dutiesv2 when there is no committee assignment on the epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15466)
- Allow SSZ requests for pending deposits, partial withdrawals and consolidations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15474)
- Validator client shuts down cleanly on error instead of fatal error. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Fixed an edge case where starting the validator client with new validator keys started the slot ticker too early, resulting in replayed slots in the main runner loop. Also fixed an edge case of replayed slots when waiting for account activations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15479)
- Fixed DV aggregations failing on the first slot of the epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15156)
- Skip genesis block retrieval when EIP-6110 deposit requests have started to prevent "pruned history unavailable" errors with execution clients that have pruned pre-merge data. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15494)
- Fixed lookahead initialization at the fulu fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15450)
- Write `Content-Encoding` header in the response properly when gzip encoding is requested. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15499)
- Subnets subscription: Avoid dynamic subscription blocking when not enough peers per subnet are found. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15471)
- Do not apply the gzip middleware to the event stream API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15517)
- Fixed various reasons why a node is banned by its peers when it stops. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15505)
- Use `MinEpochsForDataColumnSidecarsRequest` in `WithinDAPeriod` when in Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15522)
- Return zero value for `Eth-Consensus-Block-Value` on error to avoid missed block proposals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15526)
- Moved reconstruction lock to prevent unnecessary work. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15528)
- Fixed variable names, links, and typos in das core code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15524)
- Fix builder bid version compatibility to support Electra bids with Fulu blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15536)
- Aligned the submitPoolSyncCommitteeSignatures response with the Beacon API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15516)
- Trigger payload attribute event as soon as an early block is processed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15541)
- Fixed beacon API proposer duty computation for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15534)
- Fixed the max proofs in `BlobsBundleV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15530)
- Prevent a race on double `ReceiveBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15565)
- Fixed [#15544](https://github.com/OffchainLabs/prysm/issues/15544): Persist metadata sequence number if it is needed (e.g., use static peer ID option or Fulu enabled). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15554)
- Fix the validateConsensus endpoint handler. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15548)
- The builder version check was using the head block version instead of the current fork's version based on slot; fixes e2e from https://github.com/OffchainLabs/prysm/commit/57e27199bdb9b3ef1af14c3374999aba5e0788a3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15568)
- Don't submit duplicate `SignedContributionAndProof` messages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15571)
- Genesis state, timestamp and validators root now ubiquitously available at node startup, supporting tech debt cleanup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15470)
- Fixed a condition where the blob cache could panic when there were fewer sidecars than expected, or none, in the cache entry. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15581)
- Fixed endpoint response to return 404 or 400 after isOptimistic check. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15559)
- Safeguard against accidental out of bounds array access in dataColumnSidecars method. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15586)
- Fixed NewSignedBeaconBlock calls to use Block field for proper equivocation handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15595)
- Fixed regression in find peer functions introduced in PR#15471, where nodes with equal sequence numbers were incorrectly skipped and the peer count was incorrectly reduced when replacing nodes with higher sequence numbers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15578)
- Fix bug where a stale computed value in a closure excludes newly required (e.g. attestation) subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15603)
- Fix bug where arguments of fillInForkChoiceMissingBlocks were incorrectly placed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15639)
- Fix next epoch proposer duties in Fulu by advancing the state to the beginning of the current epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15642)
- Fix getBlockAttestationsV2 to return [] instead of null when data is empty. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15651)
- Fixed the issue of empty dirs not being deleted when using blob-storage-layout=by-epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15573)
- Start topic-based peer discovery before initial sync completes so that we have coverage of needed columns when range syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15660)
- Fixed an off-by-one in forkchoice startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15684)
- Mitigate potential supernode clustering due to libp2p ConnManager pruning of non-supernodes; see https://github.com/OffchainLabs/prysm/issues/15607. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15681)
- Initial sync: Do not request data column sidecars for blocks before the retention period. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fixed incorrect attestation data request where the assigned committee index was used after Electra, instead of 0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15696)
- Use v2 endpoint for blinded block submission post-Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15716)
- Fixed 'justified' block support missing on blocker.Block and optimized logic between blocker.Block and blocker.Blob. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15715)
- Fix prysmctl panic when baseFee is not set in genesis.json. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15687)
- Fix getStateRandao not returning historic RANDAO mix values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15653)
- Fix race in `PriorityQueue.Pop` by checking emptiness under the write lock. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15726)
- In P2P service start, wait for the custody info to be correctly initialized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15732)
- `createLocalNode`: Wait before retrying to retrieve the custody group count if not present. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15735)
- Replace fmt.Printf with proper test error handling in web3signer keymanager tests, using require.NoError(t, err) instead of t.Fatalf for better error handling and debugging. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15723)
- Fixed a regression introduced in PR #15715: the blocker now returns an error for not found, and error handling correctly returns 404 instead of 500. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15742)
- The DA metric was not recording correctly because the if statement on err was accidentally inverted. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15743)
### Security
- Updated go to version 1.24.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15561)
- Updated distroless/cc-debian11 to latest to resolve CVE-2024-2961. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15562)
- Updated go to version 1.24.6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15566)
- Updated quic-go to latest version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15749)
## [v6.0.5](https://github.com/prysmaticlabs/prysm/compare/v6.0.4...v6.0.5) - 2025-09-26
We are releasing a patch update on top of v6.0.4 to address a stability issue with quic-go.
All operators should update as soon as possible to v6.0.5 or later.
### Security
- Updated quic-go to latest version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15749)
## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05
This release has more work on PeerDAS, and light client support. Additionally, we have a few bug fixes:
@@ -3458,4 +3797,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases


@@ -284,7 +284,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*stru
 	if resp.StatusCode != http.StatusOK {
 		decoder := json.NewDecoder(resp.Body)
 		decoder.DisallowUnknownFields()
-		errorJson := &server.IndexedVerificationFailureError{}
+		errorJson := &server.IndexedErrorContainer{}
 		if err := decoder.Decode(errorJson); err != nil {
 			return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
 		}

View File

@@ -726,6 +726,12 @@ func unexpectedStatusErr(response *http.Response, expected int) error {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotOK, errMessage.Message)
case http.StatusBadGateway:
log.WithError(ErrBadGateway).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrBadGateway, errMessage.Message)
default:
log.WithError(ErrNotOK).Debug(msg)
return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))

View File

@@ -12,7 +12,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -22,6 +21,7 @@ import (
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/prysmaticlabs/go-bitfield"
log "github.com/sirupsen/logrus"
)

View File

@@ -21,3 +21,4 @@ var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content
// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")
var ErrBadGateway = errors.Wrap(ErrNotOK, "recv 502 BadGateway response from API")
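Since ErrBadGateway wraps ErrNotOK, callers can branch on the more specific sentinel first. A minimal, self-contained sketch of that pattern — the mirrored sentinels and the classify helper are illustrative, not part of the client package (the real sentinels are built with pkg/errors.Wrap, which also supports stdlib errors.Is via Unwrap):

```go
package main

import (
	"errors"
	"fmt"
)

// Standalone mirrors of the sentinels above.
var (
	errNotOK      = errors.New("did not receive 2xx response from API")
	errBadGateway = fmt.Errorf("recv 502 BadGateway response from API: %w", errNotOK)
)

func classify(err error) string {
	switch {
	case errors.Is(err, errBadGateway):
		return "502 from an upstream proxy: likely transient, consider retrying"
	case errors.Is(err, errNotOK):
		return "some other non-2xx response"
	default:
		return "transport or decoding failure"
	}
}

func main() {
	err := fmt.Errorf("submit block: %w", errBadGateway)
	fmt.Println(classify(err)) // 502 from an upstream proxy: likely transient, consider retrying
}
```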

View File

@@ -6,6 +6,11 @@ import (
"strings"
)
var (
ErrIndexedValidationFail = "One or more messages failed validation"
ErrIndexedBroadcastFail = "One or more messages failed broadcast"
)
// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
@@ -29,19 +34,38 @@ func (e *DecodeError) Error() string {
return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}
// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedVerificationFailure `json:"failures"`
// IndexedErrorContainer wraps a collection of indexed errors.
type IndexedErrorContainer struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedError `json:"failures"`
}
func (e *IndexedVerificationFailureError) StatusCode() int {
func (e *IndexedErrorContainer) StatusCode() int {
return e.Code
}
// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
// IndexedError represents an issue when processing a single indexed object e.g. an item in an array.
type IndexedError struct {
Index int `json:"index"`
Message string `json:"message"`
}
// BroadcastFailedError represents an error scenario where broadcasting a published message failed.
type BroadcastFailedError struct {
msg string
err error
}
// NewBroadcastFailedError creates a new instance of BroadcastFailedError.
func NewBroadcastFailedError(msg string, err error) *BroadcastFailedError {
return &BroadcastFailedError{
msg: msg,
err: err,
}
}
// Error returns the underlying error message.
func (e *BroadcastFailedError) Error() string {
return fmt.Sprintf("could not broadcast %s: %s", e.msg, e.err.Error())
}
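A small runnable sketch of how a broadcast failure might be wrapped with the new type; the publish stand-in and the topic name are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// Mirror of the BroadcastFailedError type defined above.
type BroadcastFailedError struct {
	msg string
	err error
}

func NewBroadcastFailedError(msg string, err error) *BroadcastFailedError {
	return &BroadcastFailedError{msg: msg, err: err}
}

func (e *BroadcastFailedError) Error() string {
	return fmt.Sprintf("could not broadcast %s: %s", e.msg, e.err.Error())
}

// publish is a stand-in for a real pubsub broadcast call.
func publish(topic string) error { return errors.New("no connected peers") }

func main() {
	if err := publish("sync_committee_message"); err != nil {
		fmt.Println(NewBroadcastFailedError("sync committee message", err))
		// Output: could not broadcast sync committee message: no connected peers
	}
}
```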

View File

@@ -296,3 +296,8 @@ type GetBlobsResponse struct {
Finalized bool `json:"finalized"`
Data []string `json:"data"` //blobs
}
type SSZQueryRequest struct {
Query string `json:"query"`
IncludeProof bool `json:"include_proof,omitempty"`
}
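The struct tags above imply the following wire shape; note that omitempty drops include_proof when it is false. A quick sketch (the query value is made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type SSZQueryRequest struct {
	Query        string `json:"query"`
	IncludeProof bool   `json:"include_proof,omitempty"`
}

func main() {
	withProof, _ := json.Marshal(SSZQueryRequest{Query: "state.validators[0]", IncludeProof: true})
	withoutProof, _ := json.Marshal(SSZQueryRequest{Query: "state.validators[0]"})
	fmt.Println(string(withProof))    // {"query":"state.validators[0]","include_proof":true}
	fmt.Println(string(withoutProof)) // {"query":"state.validators[0]"}
}
```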

View File

@@ -173,6 +173,7 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -30,7 +30,7 @@ var (
// errWSBlockNotFoundInEpoch is returned when a block is not found in the WS cache or DB within epoch.
errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
// ErrNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
ErrNotDescendantOfFinalized = invalidBlock{error: errors.New("not descendant of finalized checkpoint")}
ErrNotDescendantOfFinalized = errors.New("not descendant of finalized checkpoint")
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")

View File

@@ -346,13 +346,24 @@ func (s *Service) notifyNewHeadEvent(
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
if err != nil {
return errors.Wrap(err, "could not obtain parent root in forkchoice")
}
parentSlot, err := s.RecentBlockSlot(parentRoot)
if err != nil {
return errors.Wrap(err, "could not obtain parent slot in forkchoice")
}
epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.NewHead,
Data: &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot,
State: newHeadStateRoot,
EpochTransition: slots.IsEpochStart(newHeadSlot),
EpochTransition: epochTransition,
PreviousDutyDependentRoot: previousDutyDependentRoot[:],
CurrentDutyDependentRoot: currentDutyDependentRoot[:],
ExecutionOptimistic: isOptimistic,

View File

@@ -162,6 +162,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -196,6 +199,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
@@ -213,6 +219,37 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("epoch transition", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
require.Equal(t, true, ok)
wanted := &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
}
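The rule the change above switches to can be restated in isolation: a head event flags an epoch transition when the head crosses an epoch boundary relative to its parent, rather than only when the head slot is itself an epoch start (which misses transitions across skipped slots). A minimal sketch, assuming the mainnet SLOTS_PER_EPOCH of 32:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet SLOTS_PER_EPOCH

// epochTransition mirrors slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot).
func epochTransition(parentSlot, newHeadSlot uint64) bool {
	return newHeadSlot/slotsPerEpoch > parentSlot/slotsPerEpoch
}

func main() {
	fmt.Println(epochTransition(31, 32)) // true: parent in epoch 0, head in epoch 1
	fmt.Println(epochTransition(30, 33)) // true: boundary crossed even though slot 33 is not an epoch start
	fmt.Println(epochTransition(32, 33)) // false: both slots are in epoch 1
}
```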
func TestRetrieveHead_ReadOnly(t *testing.T) {

View File

@@ -72,7 +72,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
}
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.roblock.Block())
@@ -93,6 +93,8 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
defer s.sendStateFeedOnBlock(cfg) // only send event after successful insertion
start := time.Now()
cfg.headRoot, err = s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {

View File

@@ -9,8 +9,10 @@ import (
"testing"
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
@@ -3147,6 +3149,159 @@ func TestIsDataAvailable(t *testing.T) {
})
}
// Test_postBlockProcess_EventSending tests that block processed events are only sent
// when block processing succeeds according to the decision tree:
//
// Block Processing Flow:
// ├─ InsertNode FAILS (fork choice timeout)
// │ └─ blockProcessed = false ❌ NO EVENT
// │
// ├─ InsertNode succeeds
// │ ├─ handleBlockAttestations FAILS
// │ │ └─ blockProcessed = false ❌ NO EVENT
// │ │
// │ ├─ Block is NON-CANONICAL (not head)
// │ │ └─ blockProcessed = true ✅ SEND EVENT (Line 111)
// │ │
// │ ├─ Block IS CANONICAL (new head)
// │ │ ├─ getFCUArgs FAILS
// │ │ │ └─ blockProcessed = true ✅ SEND EVENT (Line 117)
// │ │ │
// │ │ ├─ sendFCU FAILS
// │ │ │ └─ blockProcessed = false ❌ NO EVENT
// │ │ │
// │ │ └─ Full success
// │ │ └─ blockProcessed = true ✅ SEND EVENT (Line 125)
func Test_postBlockProcess_EventSending(t *testing.T) {
ctx := context.Background()
// Helper to create a minimal valid block and state
createTestBlockAndState := func(t *testing.T, slot primitives.Slot, parentRoot [32]byte) (consensusblocks.ROBlock, state.BeaconState) {
st, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, st.SetSlot(slot))
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
blk := util.NewBeaconBlock()
blk.Block.Slot = slot
blk.Block.ProposerIndex = 0
blk.Block.ParentRoot = parentRoot[:]
blk.Block.StateRoot = stateRoot[:]
signed := util.HydrateSignedBeaconBlock(blk)
roBlock, err := consensusblocks.NewSignedBeaconBlock(signed)
require.NoError(t, err)
roBlk, err := consensusblocks.NewROBlock(roBlock)
require.NoError(t, err)
return roBlk, st
}
tests := []struct {
name string
setupService func(*Service, [32]byte)
expectEvent bool
expectError bool
errorContains string
}{
{
name: "Block successfully processed - sends event",
setupService: func(s *Service, blockRoot [32]byte) {
// Default setup should work
},
expectEvent: true,
expectError: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create service with required options
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
// Initialize fork choice with genesis block
st, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, st.SetSlot(0))
genesisBlock := util.NewBeaconBlock()
genesisBlock.Block.StateRoot = bytesutil.PadTo([]byte("genesisState"), 32)
signedGenesis := util.HydrateSignedBeaconBlock(genesisBlock)
block, err := consensusblocks.NewSignedBeaconBlock(signedGenesis)
require.NoError(t, err)
genesisRoot, err := block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, block))
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, genesisRoot))
genesisROBlock, err := consensusblocks.NewROBlock(block)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, genesisROBlock))
// Create test block and state with genesis as parent
roBlock, postSt := createTestBlockAndState(t, 100, genesisRoot)
// Apply additional service setup if provided
if tt.setupService != nil {
tt.setupService(service, roBlock.Root())
}
// Create post block process config
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roBlock,
postState: postSt,
isValidPayload: true,
}
// Execute postBlockProcess
err = service.postBlockProcess(cfg)
// Check error expectation
if tt.expectError {
require.NotNil(t, err)
if tt.errorContains != "" {
require.ErrorContains(t, tt.errorContains, err)
}
} else {
require.NoError(t, err)
}
// Give a moment for deferred functions to execute
time.Sleep(10 * time.Millisecond)
// Check event expectation
notifier := service.cfg.StateNotifier.(*mock.MockStateNotifier)
events := notifier.ReceivedEvents()
if tt.expectEvent {
require.NotEqual(t, 0, len(events), "Expected event to be sent but none were received")
// Verify it's a BlockProcessed event
foundBlockProcessed := false
for _, evt := range events {
if evt.Type == statefeed.BlockProcessed {
foundBlockProcessed = true
data, ok := evt.Data.(*statefeed.BlockProcessedData)
require.Equal(t, true, ok, "Event data should be BlockProcessedData")
require.Equal(t, roBlock.Root(), data.BlockRoot, "Event should contain correct block root")
break
}
}
require.Equal(t, true, foundBlockProcessed, "Expected BlockProcessed event type")
} else {
// For no-event cases, verify no BlockProcessed events were sent
for _, evt := range events {
require.NotEqual(t, statefeed.BlockProcessed, evt.Type,
"Expected no BlockProcessed event but one was sent")
}
}
})
}
}
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
var l *util.TestLightClient
switch v {

View File

@@ -219,6 +219,9 @@ func (s *Service) validateExecutionAndConsensus(
eg.Go(func() error {
var err error
postState, err = s.validateStateTransition(ctx, preState, block)
if errors.Is(err, ErrNotDescendantOfFinalized) {
return invalidBlock{error: err, root: block.Root()}
}
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}

View File

@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = beaconConfig.NumberOfColumns
custodyGroupCount = beaconConfig.NumberOfCustodyGroups
}
// Safely compute the fulu fork slot.

View File

@@ -23,9 +23,11 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -596,3 +598,103 @@ func TestNotifyIndex(t *testing.T) {
t.Errorf("Notifier channel did not receive the index")
}
}
func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
t.Run("CGC increases before fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("CGC increases after fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
}

View File

@@ -130,6 +130,14 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
return earliestAvailableSlot, custodyGroupCount, nil
}
func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
dch.mut.Lock()
defer dch.mut.Unlock()
dch.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}

View File

@@ -49,13 +49,22 @@ func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.
if err != nil {
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
}
if err := VerifySyncCommitteeSig(s, votedKeys, sync.SyncCommitteeSignature); err != nil {
return nil, 0, errors.Wrap(err, "could not verify sync committee signature")
}
return s, reward, nil
}
// ProcessSyncAggregateNoVerifySig processes the sync aggregate without verifying the sync committee signature.
// This is useful in scenarios such as block reward calculation, where we can assume the data in the block is valid.
func ProcessSyncAggregateNoVerifySig(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, uint64, error) {
s, _, reward, err := processSyncAggregate(ctx, s, sync)
if err != nil {
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
}
return s, reward, nil
}
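A sketch of the intended call site for the no-verify variant, e.g. computing the proposer reward for a block that has already passed full validation. The import paths and the wrapper itself are assumptions based on the surrounding code, not an existing Prysm API:

```go
package rewards

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)

// syncAggregateReward returns the proposer reward for including the given
// sync aggregate, skipping BLS verification of the committee signature.
func syncAggregateReward(ctx context.Context, st state.BeaconState, agg *ethpb.SyncAggregate) (uint64, error) {
	_, reward, err := altair.ProcessSyncAggregateNoVerifySig(ctx, st, agg)
	if err != nil {
		return 0, err
	}
	return reward, nil
}
```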
// processSyncAggregate applies all the logic in the spec function `process_sync_aggregate` except
// verifying the BLS signatures. It returns the modified beacons state, the list of validators'
// public keys that voted (for future signature verification) and the proposer reward for including

View File

@@ -53,9 +53,19 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
// Verify that ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate have the same outcome.
beaconStateNoVerifySig := beaconState.Copy()
beaconStateNoVerifySig, rewardNoVerifySig, err := altair.ProcessSyncAggregateNoVerifySig(t.Context(), beaconStateNoVerifySig, syncAggregate)
require.NoError(t, err)
sszNoVerifySig, err := beaconStateNoVerifySig.MarshalSSZ()
require.NoError(t, err)
var reward uint64
beaconState, reward, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
ssz, err := beaconState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszNoVerifySig, ssz, "States resulting from ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate are not equal")
assert.Equal(t, rewardNoVerifySig, reward, "Rewards resulting from ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate are not equal")
assert.Equal(t, uint64(72192), reward)
// Use a non-sync committee index to compare profitability.

View File

@@ -12,6 +12,46 @@ import (
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
)
// ConvertToAltair converts a Phase 0 beacon state to an Altair beacon state.
func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
if err != nil {
return nil, err
}
return newState, nil
}
// UpgradeToAltair updates the input state to return the Altair version of the state.
//
// Spec code:
@@ -64,39 +104,7 @@ import (
// post.next_sync_committee = get_next_sync_committee(post)
// return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
newState, err := ConvertToAltair(state)
if err != nil {
return nil, err
}

View File

@@ -55,6 +55,28 @@ func ProcessAttesterSlashings(
return beaconState, nil
}
// ProcessAttesterSlashingsNoVerify processes attester slashings without verifying them.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessAttesterSlashingsNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process attester slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessAttesterSlashingNoVerify(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
}
return beaconState, nil
}
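A hypothetical call site for the batch no-verify path; the guard above requires exit info whenever slashings are present, so it is always supplied here. The package paths and the ExitInformation helper are assumed from the tests further down:

```go
package rewards

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
	v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)

// applyAttesterSlashingsNoVerify applies already-validated slashings, e.g.
// when recomputing rewards for an imported block.
func applyAttesterSlashingsNoVerify(ctx context.Context, st state.BeaconState, slashings []ethpb.AttSlashing) (state.BeaconState, error) {
	return blocks.ProcessAttesterSlashingsNoVerify(ctx, st, slashings, v.ExitInformation(st))
}
```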
// ProcessAttesterSlashing processes individual attester slashing.
func ProcessAttesterSlashing(
ctx context.Context,
@@ -68,6 +90,30 @@ func ProcessAttesterSlashing(
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
}
return processAttesterSlashing(ctx, beaconState, slashing, exitInfo)
}
// ProcessAttesterSlashingNoVerify processes individual attester slashing without verifying it.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessAttesterSlashingNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil {
return nil, errors.New("exit info is required to process attester slashing")
}
return processAttesterSlashing(ctx, beaconState, slashing, exitInfo)
}
func processAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
slashableIndices := SlashableAttesterIndices(slashing)
sort.SliceStable(slashableIndices, func(i, j int) bool {
return slashableIndices[i] < slashableIndices[j]

View File

@@ -242,8 +242,18 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
// Verify that ProcessAttesterSlashingsNoVerify and ProcessAttesterSlashings have the same outcome.
stNoVerify := tc.st.Copy()
newStateNoVerify, err := blocks.ProcessAttesterSlashingsNoVerify(t.Context(), stNoVerify, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(stNoVerify))
require.NoError(t, err)
sszNoVerify, err := newStateNoVerify.MarshalSSZ()
require.NoError(t, err)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
require.NoError(t, err)
ssz, err := newState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszNoVerify, ssz, "States resulting from ProcessAttesterSlashingsNoVerify and ProcessAttesterSlashings are not equal")
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator

View File

@@ -6,3 +6,4 @@ var errNilSignedWithdrawalMessage = errors.New("nil SignedBLSToExecutionChange m
var errNilWithdrawalMessage = errors.New("nil BLSToExecutionChange message")
var errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
var errInvalidWithdrawalCredentials = errors.New("withdrawal credentials do not match")
var ErrInvalidSignature = errors.New("invalid signature")

View File

@@ -64,6 +64,28 @@ func ProcessProposerSlashings(
return beaconState, nil
}
// ProcessProposerSlashingsNoVerify processes proposer slashings without verifying them.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessProposerSlashingsNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashings []*ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil && len(slashings) > 0 {
return nil, errors.New("exit info required to process proposer slashings")
}
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashingNoVerify(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
}
return beaconState, nil
}
// ProcessProposerSlashing processes individual proposer slashing.
func ProcessProposerSlashing(
ctx context.Context,
@@ -71,16 +93,40 @@ func ProcessProposerSlashing(
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
if slashing == nil {
return nil, errors.New("nil proposer slashings in block body")
}
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
if err := VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
return processProposerSlashing(ctx, beaconState, slashing, exitInfo)
}
// ProcessProposerSlashingNoVerify processes individual proposer slashing without verifying it.
// This is useful in scenarios such as block reward calculation, where we can assume the data
// in the block is valid.
func ProcessProposerSlashingNoVerify(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if slashing == nil {
return nil, errors.New("nil proposer slashings in block body")
}
return processProposerSlashing(ctx, beaconState, slashing, exitInfo)
}
func processProposerSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if exitInfo == nil {
return nil, errors.New("exit info is required to process proposer slashing")
}
var err error
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)

View File

@@ -172,8 +172,17 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
// Verify that ProcessProposerSlashingsNoVerify and ProcessProposerSlashings have the same outcome.
beaconStateNoVerify := beaconState.Copy()
newStateNoVerify, err := blocks.ProcessProposerSlashingsNoVerify(t.Context(), beaconStateNoVerify, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconStateNoVerify))
require.NoError(t, err)
sszNoVerify, err := newStateNoVerify.MarshalSSZ()
require.NoError(t, err)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
ssz, err := newState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszNoVerify, ssz, "States resulting from ProcessProposerSlashingsNoVerify and ProcessProposerSlashings are not equal")
newStateVals := newState.Validators()
if newStateVals[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {

View File

@@ -114,9 +114,12 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
}
proposerPubKey := proposer.PublicKey
sig := blk.Signature()
return signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
if err := signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
return blkRoot, nil
})
}); err != nil {
return ErrInvalidSignature
}
return nil
}
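With a typed sentinel, callers can branch on signature failure specifically instead of matching error strings. A sketch of a hypothetical gossip-validation hook — the parameter types and the use of libp2p pubsub verdicts are assumptions, not code from this change:

```go
package verify

import (
	"errors"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// validateBlockSig rejects a cryptographically invalid signature outright and
// ignores other failures, which may be transient state issues.
func validateBlockSig(st state.ReadOnlyBeaconState, blk interfaces.ReadOnlySignedBeaconBlock, root [32]byte) pubsub.ValidationResult {
	if err := blocks.VerifyBlockSignatureUsingCurrentFork(st, blk, root); err != nil {
		if errors.Is(err, blocks.ErrInvalidSignature) {
			return pubsub.ValidationReject
		}
		return pubsub.ValidationIgnore
	}
	return pubsub.ValidationAccept
}
```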
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.

View File

@@ -89,3 +89,36 @@ func TestVerifyBlockSignatureUsingCurrentFork(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot))
}
func TestVerifyBlockSignatureUsingCurrentFork_InvalidSignature(t *testing.T) {
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig()
bCfg.AltairForkEpoch = 100
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = 100
params.OverrideBeaconConfig(bCfg)
bState, keys := util.DeterministicGenesisState(t, 100)
altairBlk := util.NewBeaconBlockAltair()
altairBlk.Block.ProposerIndex = 0
altairBlk.Block.Slot = params.BeaconConfig().SlotsPerEpoch * 100
blkRoot, err := altairBlk.Block.HashTreeRoot()
assert.NoError(t, err)
// Sign with wrong key (proposer index 0, but using key 1)
fData := &ethpb.Fork{
Epoch: 100,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
}
domain, err := signing.Domain(fData, 100, params.BeaconConfig().DomainBeaconProposer, bState.GenesisValidatorsRoot())
assert.NoError(t, err)
rt, err := signing.ComputeSigningRoot(altairBlk.Block, domain)
assert.NoError(t, err)
wrongSig := keys[1].Sign(rt[:]).Marshal()
altairBlk.Signature = wrongSig
wsb, err := consensusblocks.NewSignedBeaconBlock(altairBlk)
require.NoError(t, err)
err = blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot)
require.ErrorIs(t, err, blocks.ErrInvalidSignature, "Expected ErrInvalidSignature for invalid signature")
}

View File

@@ -15,6 +15,129 @@ import (
"github.com/pkg/errors"
)
// ConvertToElectra converts a Deneb beacon state to an Electra beacon state. It does not perform any fork logic.
func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
return post, nil
}
// UpgradeToElectra updates the input generic state to return the Electra version of the state.
//
// nolint:dupword
@@ -126,55 +249,7 @@ import (
//
// return post
func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
s, err := ConvertToElectra(beaconState)
if err != nil {
return nil, err
}
@@ -206,97 +281,38 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
if err != nil {
return nil, errors.Wrap(err, "failed to get total active balance")
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
if err := s.SetExitBalanceToConsume(helpers.ActivationExitChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set exit balance to consume")
}
if err := s.SetEarliestExitEpoch(earliestExitEpoch); err != nil {
return nil, errors.Wrap(err, "failed to set earliest exit epoch")
}
if err := s.SetConsolidationBalanceToConsume(helpers.ConsolidationChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set consolidation balance to consume")
}
// Sorting preActivationIndices based on a custom criteria
vals := s.Validators()
sort.Slice(preActivationIndices, func(i, j int) bool {
// Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
if vals[preActivationIndices[i]].ActivationEligibilityEpoch == vals[preActivationIndices[j]].ActivationEligibilityEpoch {
return preActivationIndices[i] < preActivationIndices[j]
}
return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
return vals[preActivationIndices[i]].ActivationEligibilityEpoch < vals[preActivationIndices[j]].ActivationEligibilityEpoch
})
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
for _, index := range preActivationIndices {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
if err := QueueEntireBalanceAndResetValidator(s, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := QueueExcessActiveBalance(post, index); err != nil {
if err := QueueExcessActiveBalance(s, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}
return post, nil
return s, nil
}

View File

@@ -7,6 +7,7 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl/testnet:__pkg__",
"//consensus-types/hdiff:__subpackages__",
"//testing/spectest:__subpackages__",
"//validator/client:__pkg__",
],

View File

@@ -15,6 +15,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -17,6 +18,25 @@ import (
// UpgradeToFulu updates the input generic state to return the Fulu version of the state.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
s, err := ConvertToFulu(beaconState)
if err != nil {
return nil, errors.Wrap(err, "could not convert to fulu")
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
pl := make([]primitives.ValidatorIndex, len(proposerLookahead))
for i, v := range proposerLookahead {
pl[i] = primitives.ValidatorIndex(v)
}
if err := s.SetProposerLookahead(pl); err != nil {
return nil, errors.Wrap(err, "failed to set proposer lookahead")
}
return s, nil
}
func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -105,11 +125,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateFulu{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
@@ -171,14 +186,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
// Need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeFulu(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
}
return post, nil
return state_native.InitializeFromProtoUnsafeFulu(s)
}

View File

@@ -201,14 +201,3 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
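The removed helper's value can be sanity-checked against the spec constants it combined (mainnet values: MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs, CHURN_LIMIT_QUOTIENT = 65536):

```go
package main

import "fmt"

func main() {
	const (
		minValidatorWithdrawabilityDelay = 256   // mainnet, in epochs
		churnLimitQuotient               = 65536 // mainnet
	)
	// 256 + 65536/2 = 33024 epochs, roughly five months.
	fmt.Println(minValidatorWithdrawabilityDelay + churnLimitQuotient/2) // 33024
}
```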

View File

@@ -286,20 +286,3 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}

View File

@@ -43,6 +43,13 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
return ErrMismatchLength
@@ -72,10 +79,30 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
for _, sidecar := range sidecars {
for i := range sidecar.Column {
commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
var (
commitment kzg.Bytes48
cell kzg.Cell
proof kzg.Bytes48
)
commitmentBytes := sidecar.KzgCommitments[i]
cellBytes := sidecar.Column[i]
proofBytes := sidecar.KzgProofs[i]
if len(commitmentBytes) != len(commitment) ||
len(cellBytes) != len(cell) ||
len(proofBytes) != len(proof) {
return ErrMismatchLength
}
copy(commitment[:], commitmentBytes)
copy(cell[:], cellBytes)
copy(proof[:], proofBytes)
commitments = append(commitments, commitment)
indices = append(indices, sidecar.Index)
cells = append(cells, kzg.Cell(sidecar.Column[i]))
proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
cells = append(cells, cell)
proofs = append(proofs, proof)
}
}

View File

@@ -18,38 +18,46 @@ import (
)
func TestVerifyDataColumnSidecar(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
testCases := []struct {
name string
index uint64
blobCount int
commitmentCount int
proofCount int
maxBlobsPerBlock uint64
expectedError error
}{
{name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
{name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
{name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
{name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
}
t.Run("no commitments", func(t *testing.T) {
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
})
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)
t.Run("KZG commitments size mismatch", func(t *testing.T) {
kzgCommitments := make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
column := make([][]byte, tc.blobCount)
kzgCommitments := make([][]byte, tc.commitmentCount)
kzgProof := make([][]byte, tc.proofCount)
t.Run("KZG proofs size mismatch", func(t *testing.T) {
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
t.Run("nominal", func(t *testing.T) {
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.NoError(t, err)
})
if tc.expectedError != nil {
require.ErrorIs(t, err, tc.expectedError)
return
}
require.NoError(t, err)
})
}
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
@@ -60,6 +68,14 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("size mismatch", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow

View File

@@ -257,7 +257,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
return nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, errors.New("wrong KZG proof size - should never happen")

View File

@@ -441,6 +441,7 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), cap(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {

View File

@@ -200,6 +200,7 @@ func (dcs *DataColumnStorage) WarmCache() {
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
@@ -988,8 +989,8 @@ func filePath(root [fieldparams.RootLength]byte, epoch primitives.Epoch) string
// extractFileMetadata extracts the metadata from a file path.
// If the path is not a leaf, it returns nil.
func extractFileMetadata(path string) (*fileMetadata, error) {
// Is this Windows friendly?
parts := strings.Split(path, "/")
// Use filepath.Separator to handle both Windows (\) and Unix (/) path separators
parts := strings.Split(path, string(filepath.Separator))
if len(parts) != 3 {
return nil, errors.Errorf("unexpected file %s", path)
}
@@ -1032,5 +1033,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {
// period computes the period of a given epoch.
func period(epoch primitives.Epoch) uint64 {
- return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
+ return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
}
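
The separator fix matters because strings.Split(path, "/") never matches on Windows, where the filesystem returns backslash-separated paths, so every warm-cache file looked "unexpected". A small sketch of the parsing approach, using filepath.Join so the same code works on either platform; parsePeriodEpoch is a hypothetical simplification of extractFileMetadata:

    package main

    import (
        "fmt"
        "path/filepath"
        "strconv"
        "strings"
    )

    // parsePeriodEpoch expects a relative path of the form <period>/<epoch>/<file>.sszs,
    // split on the platform's separator rather than a hard-coded "/".
    func parsePeriodEpoch(path string) (period, epoch uint64, err error) {
        parts := strings.Split(path, string(filepath.Separator))
        if len(parts) != 3 {
            return 0, 0, fmt.Errorf("unexpected file %s", path)
        }
        if period, err = strconv.ParseUint(parts[0], 10, 64); err != nil {
            return 0, 0, err
        }
        if epoch, err = strconv.ParseUint(parts[1], 10, 64); err != nil {
            return 0, 0, err
        }
        return period, epoch, nil
    }

    func main() {
        // filepath.Join yields "12/1234/..." on Unix and `12\1234\...` on Windows,
        // so the same parser works on both.
        p := filepath.Join("12", "1234", "0x8bb2.sszs")
        fmt.Println(parsePeriodEpoch(p))
    }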

View File

@@ -35,8 +35,9 @@ func (s DataColumnStorageSummary) HasIndex(index uint64) bool {
// HasAtLeastOneIndex returns true if at least one of the DataColumnSidecars at the given indices is available in the filesystem.
func (s DataColumnStorageSummary) HasAtLeastOneIndex(indices []uint64) bool {
+ size := uint64(len(s.mask))
for _, index := range indices {
- if s.mask[index] {
+ if index < size && s.mask[index] {
return true
}
}
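
The added bounds check turns an out-of-range index, for example fieldparams.NumberOfColumns itself, from a panic into a plain "not stored" answer. A minimal sketch of the guarded lookup, with a slice standing in for the fixed-size mask array:

    package main

    import "fmt"

    // hasAtLeastOne mirrors the guarded lookup above: an index at or beyond
    // len(mask) is treated as "not stored" instead of panicking on the access.
    func hasAtLeastOne(mask []bool, indices []uint64) bool {
        size := uint64(len(mask))
        for _, index := range indices {
            if index < size && mask[index] {
                return true
            }
        }
        return false
    }

    func main() {
        mask := []bool{false, true}
        fmt.Println(hasAtLeastOne(mask, []uint64{3, 1, 128, 2})) // true: index 1 is set
        fmt.Println(hasAtLeastOne(mask, []uint64{3, 4, 128, 2})) // false: missing or out of range
    }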

View File

@@ -25,11 +25,11 @@ func TestHasIndex(t *testing.T) {
func TestHasAtLeastOneIndex(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})
- hasAtLeastOneIndex := summary.HasAtLeastOneIndex([]uint64{3, 1, 2})
- require.Equal(t, true, hasAtLeastOneIndex)
+ actual := summary.HasAtLeastOneIndex([]uint64{3, 1, fieldparams.NumberOfColumns, 2})
+ require.Equal(t, true, actual)
- hasAtLeastOneIndex = summary.HasAtLeastOneIndex([]uint64{3, 4, 2})
- require.Equal(t, false, hasAtLeastOneIndex)
+ actual = summary.HasAtLeastOneIndex([]uint64{3, 4, fieldparams.NumberOfColumns, 2})
+ require.Equal(t, false, actual)
}
func TestCount(t *testing.T) {

View File

@@ -3,6 +3,7 @@ package filesystem
import (
"encoding/binary"
"os"
"path/filepath"
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -725,3 +726,37 @@ func TestPrune(t *testing.T) {
require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
})
}
func TestExtractFileMetadata(t *testing.T) {
t.Run("Unix", func(t *testing.T) {
// Test with Unix-style path separators (/)
path := "12/1234/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '/' {
// On Unix systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Windows systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
t.Run("Windows", func(t *testing.T) {
// Test with Windows-style path separators (\)
path := "12\\1234\\0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '\\' {
// On Windows systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Unix systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
}

View File

@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
- WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
+ WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
WithDataColumnFs(fs),
)

View File

@@ -129,6 +129,7 @@ type NoHeadAccessDatabase interface {
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
+ UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

View File

@@ -132,6 +132,6 @@ func recoverStateSummary(ctx context.Context, tx *bolt.Tx, root []byte) error {
if err != nil {
return err
}
- summaryBucket := tx.Bucket(stateBucket)
+ summaryBucket := tx.Bucket(stateSummaryBucket)
return summaryBucket.Put(root, summaryEnc)
}
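
The regression fixed here is the classic bbolt mismatch: recovered summaries were written to one bucket while HasStateSummary and StateSummary read from another, so they were invisible. A minimal sketch of the invariant, with a hypothetical bucket name standing in for stateSummaryBucket:

    package main

    import (
        "fmt"
        "log"

        bolt "go.etcd.io/bbolt"
    )

    func main() {
        db, err := bolt.Open("demo.db", 0o600, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        summaries := []byte("state-summary") // analogue of stateSummaryBucket
        key, val := []byte("root"), []byte("summary-bytes")

        // Writes must target the bucket the readers use; putting the value in a
        // different bucket (the bug fixed above) makes every read come back nil.
        if err := db.Update(func(tx *bolt.Tx) error {
            b, err := tx.CreateBucketIfNotExists(summaries)
            if err != nil {
                return err
            }
            return b.Put(key, val)
        }); err != nil {
            log.Fatal(err)
        }

        _ = db.View(func(tx *bolt.Tx) error {
            fmt.Printf("%s\n", tx.Bucket(summaries).Get(key)) // summary-bytes
            return nil
        })
    }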

View File

@@ -137,3 +137,32 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {
require.ErrorContains(t, errMissingStateForCheckpoint.Error(), db.SaveFinalizedCheckpoint(ctx, cp))
}
// Regression test: verify that saving a checkpoint triggers recovery which writes
// the state summary into the correct stateSummaryBucket so that HasStateSummary/StateSummary see it.
func TestRecoverStateSummary_WritesToStateSummaryBucket(t *testing.T) {
db := setupDB(t)
ctx := t.Context()
// Create a block without saving a state or summary, so recovery is needed.
blk := util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{})
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wsb))
// Precondition: summary not present yet.
require.Equal(t, false, db.HasStateSummary(ctx, root))
// Saving justified checkpoint should trigger recovery path calling recoverStateSummary.
cp := &ethpb.Checkpoint{Epoch: 2, Root: root[:]}
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
// Postcondition: summary is visible via the public summary APIs (which read stateSummaryBucket).
require.Equal(t, true, db.HasStateSummary(ctx, root))
summary, err := db.StateSummary(ctx, root)
require.NoError(t, err)
require.NotNil(t, summary)
assert.DeepEqual(t, &ethpb.StateSummary{Slot: blk.Block.Slot, Root: root[:]}, summary)
}

View File

@@ -2,16 +2,19 @@ package kv
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
- // UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
+ // UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) custody group count and earliest available slot.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
@@ -70,6 +73,79 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
return storedEarliestAvailableSlot, storedGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the earliest available slot.
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
defer span.End()
storedEarliestAvailableSlot := primitives.Slot(0)
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
// Retrieve the stored earliest available slot.
storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
if len(storedEarliestAvailableSlotBytes) != 0 {
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
}
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot <= storedEarliestAvailableSlot {
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "calculate minimum required slot")
}
// Prevent any increase that would put earliest available slot beyond the minimum required slot
if earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
minRequiredSlot, minRequiredEpoch,
)
}
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}); err != nil {
return err
}
log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")
return nil
}
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.
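
Putting numbers to the clamp rule implemented above: a decrease is always accepted (backfill made earlier data available), while an increase is accepted only up to the first slot of currentEpoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS, so the node never voluntarily refuses to serve block data it is required to keep. A self-contained sketch, assuming 32 slots per epoch and the mainnet value of 33024 for MIN_EPOCHS_FOR_BLOCK_REQUESTS:

    package main

    import "fmt"

    const (
        slotsPerEpoch             = 32
        minEpochsForBlockRequests = 33024 // assumed mainnet MIN_EPOCHS_FOR_BLOCK_REQUESTS
    )

    // allowedUpdate mirrors the rule above: any decrease is accepted (backfill),
    // and an increase is accepted only while it stays at or below the first slot
    // of currentEpoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS.
    func allowedUpdate(stored, proposed, currentSlot uint64) bool {
        if proposed <= stored {
            return true
        }
        currentEpoch := currentSlot / slotsPerEpoch
        minRequiredEpoch := uint64(0)
        if currentEpoch > minEpochsForBlockRequests {
            minRequiredEpoch = currentEpoch - minEpochsForBlockRequests
        }
        minRequiredSlot := minRequiredEpoch * slotsPerEpoch
        return proposed <= minRequiredSlot
    }

    func main() {
        currentSlot := uint64(40000 * slotsPerEpoch) // epoch 40000
        // minRequiredEpoch = 40000 - 33024 = 6976, minRequiredSlot = 223232.
        fmt.Println(allowedUpdate(300, 200, currentSlot))     // true: decrease
        fmt.Println(allowedUpdate(1000, 223232, currentSlot)) // true: at the boundary
        fmt.Println(allowedUpdate(1000, 223233, currentSlot)) // false: inside the window
    }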

View File

@@ -3,10 +3,13 @@ package kv
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/time/slots"
bolt "go.etcd.io/bbolt"
)
@@ -132,6 +135,131 @@ func TestUpdateCustodyInfo(t *testing.T) {
})
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
ctx := t.Context()
t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
const (
initialSlot = primitives.Slot(300)
initialCount = uint64(10)
earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with a lower slot (should update for backfill)
err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, earliestSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set earliest slot well before minRequiredSlot
const groupCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
require.NoError(t, err)
// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
validSlot := minRequiredSlot - 100
err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
require.NoError(t, err)
// Verify the database was updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, validSlot, storedSlot)
require.Equal(t, groupCount, storedCount)
})
t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set a valid earliest slot (well before minRequiredSlot)
const initialCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Try to set earliest slot beyond the minimum required slot
invalidSlot := minRequiredSlot + 100
// This should fail
err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
require.ErrorContains(t, "cannot increase earliest available slot", err)
require.ErrorContains(t, "exceeds minimum required slot", err)
// Verify the database wasn't updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("no change when slot equals current slot", func(t *testing.T) {
const (
initialSlot = primitives.Slot(100)
initialCount = uint64(5)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with the same slot
err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
}
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
ctx := context.Background()

View File

@@ -8,7 +8,6 @@ go_library(
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//config/params:go_default_library",
@@ -29,6 +28,7 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots/testing:go_default_library",

View File

@@ -4,7 +4,6 @@ import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -25,17 +24,24 @@ const (
defaultNumBatchesToPrune = 15
)
// custodyUpdater is a tiny interface that p2p service implements; kept here to avoid
// importing the p2p package and creating a cycle.
type custodyUpdater interface {
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
}
type ServiceOption func(*Service)
// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
return func(s *Service) {
- defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
+ defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1
if retentionEpochs < defaultRetentionEpochs {
log.WithField("userEpochs", retentionEpochs).
WithField("minRequired", defaultRetentionEpochs).
Warn("Retention period too low, using minimum required value")
Warn("Retention period too low, ignoring and using minimum required value")
retentionEpochs = defaultRetentionEpochs
}
s.ps = pruneStartSlotFunc(retentionEpochs)
@@ -58,17 +64,23 @@ type Service struct {
slotTicker slots.Ticker
backfillWaiter func() error
initSyncWaiter func() error
+ custody custodyUpdater
}
- func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
+ func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
+ if custody == nil {
+ return nil, errors.New("custody updater is required for pruner but was not provided")
+ }
p := &Service{
ctx: ctx,
db: db,
- ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
+ ps: pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
done: make(chan struct{}),
slotTicker: slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
initSyncWaiter: initSyncWaiter,
backfillWaiter: backfillWaiter,
+ custody: custody,
}
for _, o := range opts {
@@ -157,17 +169,45 @@ func (p *Service) prune(slot primitives.Slot) error {
return errors.Wrap(err, "failed to prune batches")
}
- log.WithFields(logrus.Fields{
- "prunedUpto": pruneUpto,
- "duration": time.Since(tt),
- "currentSlot": slot,
- "batchSize": defaultPrunableBatchSize,
- "numBatches": numBatches,
- }).Debug("Successfully pruned chain data")
+ earliestAvailableSlot := pruneUpto + 1
// Update pruning checkpoint.
p.prunedUpto = pruneUpto
+ // Update the earliest available slot after pruning
+ if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
+ return errors.Wrap(err, "update earliest available slot")
+ }
+ log.WithFields(logrus.Fields{
+ "prunedUpto": pruneUpto,
+ "earliestAvailableSlot": earliestAvailableSlot,
+ "duration": time.Since(tt),
+ "currentSlot": slot,
+ "batchSize": defaultPrunableBatchSize,
+ "numBatches": numBatches,
+ }).Debug("Successfully pruned chain data")
return nil
}
// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
// and also persists it to the database.
func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
if !params.FuluEnabled() {
return nil
}
// Update the p2p in-memory state
if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
}
// Persist to database to ensure it survives restarts
if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
}
return nil
}
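
The custodyUpdater interface above is the standard Go way to break an import cycle: the consumer declares the one method it needs, and the concrete p2p service satisfies it implicitly, so neither package has to import the other. A minimal sketch of the pattern; the names here are illustrative, not the real types:

    package main

    import "fmt"

    type Slot uint64 // stand-in for primitives.Slot

    // custodyUpdater declares, on the consumer side, the single method the
    // pruner needs. Because the pruner never imports the p2p package, p2p stays
    // free to import the pruner's dependencies without creating a cycle.
    type custodyUpdater interface {
        UpdateEarliestAvailableSlot(earliestAvailableSlot Slot) error
    }

    // fakeP2P satisfies custodyUpdater implicitly; the real p2p Service and the
    // test doubles in the diff below do the same.
    type fakeP2P struct{ earliest Slot }

    func (f *fakeP2P) UpdateEarliestAvailableSlot(s Slot) error {
        f.earliest = s
        return nil
    }

    func main() {
        var c custodyUpdater = &fakeP2P{}
        _ = c.UpdateEarliestAvailableSlot(17)
        fmt.Println(c.(*fakeP2P).earliest) // 17
    }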

View File

@@ -2,6 +2,7 @@ package pruner
import (
"context"
"errors"
"testing"
"time"
@@ -15,6 +16,7 @@ import (
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -62,7 +64,9 @@ func TestPruner_PruningConditions(t *testing.T) {
if !tt.backfillCompleted {
backfillWaiter = waiter
}
- p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
+ mockCustody := &mockCustodyUpdater{}
+ p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
require.NoError(t, err)
go p.Start()
@@ -97,12 +101,14 @@ func TestPruner_PruneSuccess(t *testing.T) {
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
mockCustody := &mockCustodyUpdater{}
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
+ mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
@@ -133,3 +139,242 @@ func TestPruner_PruneSuccess(t *testing.T) {
require.NoError(t, p.Stop())
}
// Mock custody updater for testing
type mockCustodyUpdater struct {
custodyGroupCount uint64
earliestAvailableSlot primitives.Slot
updateCallCount int
}
func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
m.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater
mockCustody := &mockCustodyUpdater{
custodyGroupCount: 4,
earliestAvailableSlot: 0,
}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80 (middle of 3rd epoch)
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Check that UpdateEarliestAvailableSlot was called
assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")
// The earliest available slot should be pruneUpto + 1
// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16
// So earliest available slot should be 16 + 1 = 17
expectedEarliestSlot := primitives.Slot(17)
require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")
// Verify that no error was logged
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel {
t.Errorf("Unexpected error log: %s", entry.Message)
}
}
require.NoError(t, p.Stop())
}
// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
type mockCustodyUpdaterWithUpdateError struct {
updateCallCount int
}
func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
return errors.New("failed to update earliest available slot")
}
func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
// Use minimal config for testing
params.SetupTestConfigCleanup(t)
config := params.MinimalSpecConfig()
params.OverrideBeaconConfig(config)
ctx := t.Context()
beaconDB := dbtest.SetupDB(t)
// Get the minimum required epochs (272 + 1 = 273 for minimal)
minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests + 1)
// Use a slot that's guaranteed to be after the minimum retention period
currentSlot := primitives.Slot(minRequiredEpochs+100) * (params.BeaconConfig().SlotsPerEpoch)
tests := []struct {
name string
userRetentionEpochs primitives.Epoch
expectedPruneSlot primitives.Slot
description string
}{
{
name: "User value below minimum - should use minimum",
userRetentionEpochs: 2, // Way below minimum
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use minimum when user value is too low",
},
{
name: "User value at minimum",
userRetentionEpochs: minRequiredEpochs,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when at minimum",
},
{
name: "User value above minimum",
userRetentionEpochs: minRequiredEpochs + 10,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs+10)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when above minimum",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.WarnLevel)
mockCustody := &mockCustodyUpdater{}
// Create pruner with retention period
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithRetentionPeriod(tt.userRetentionEpochs),
)
require.NoError(t, err)
// Test the pruning calculation
pruneUptoSlot := p.ps(currentSlot)
// Verify the pruning slot
assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)
// Check if warning was logged when value was too low
if tt.userRetentionEpochs < minRequiredEpochs {
assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
}
})
}
}
func TestPruner_UpdateEarliestSlotError(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater that returns an error for UpdateEarliestAvailableSlot
mockCustody := &mockCustodyUpdaterWithUpdateError{}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Should have called UpdateEarliestAvailableSlot
assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")
// Check that error was logged by the prune function
found := false
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
found = true
break
}
}
assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")
require.NoError(t, p.Stop())
}

View File

@@ -58,7 +58,6 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/prometheus:go_default_library",
"//monitoring/tracing:go_default_library",

View File

@@ -2,11 +2,13 @@ package node
import (
"context"
"os"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
@@ -36,6 +38,22 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearGenesis(dir string) error {
if !c.shouldProceed() {
return nil
}
gfile, err := genesis.FindStateFile(dir)
if err != nil {
return nil
}
if err := os.Remove(gfile.FilePath()); err != nil {
return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
}
return nil
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil

View File

@@ -60,7 +60,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/prometheus"
"github.com/OffchainLabs/prysm/v6/runtime"
@@ -178,6 +177,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.db = kvdb
+ if err := dbClearer.clearGenesis(dataDir); err != nil {
+ return nil, errors.Wrap(err, "could not clear genesis state")
+ }
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
return nil, errors.Wrap(err, "could not initialize genesis state")
@@ -598,22 +600,7 @@ func (b *BeaconNode) startStateGen(ctx context.Context, bfs coverage.AvailableBl
return err
}
- r := bytesutil.ToBytes32(cp.Root)
- // Consider edge case where finalized root are zeros instead of genesis root hash.
- if r == params.BeaconConfig().ZeroHash {
- genesisBlock, err := b.db.GenesisBlock(ctx)
- if err != nil {
- return err
- }
- if genesisBlock != nil && !genesisBlock.IsNil() {
- r, err = genesisBlock.Block().HashTreeRoot()
- if err != nil {
- return err
- }
- }
- }
- b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, r)
+ b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, [32]byte(cp.Root))
if err != nil {
return err
}
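
The replacement of bytesutil.ToBytes32 with [32]byte(cp.Root) leans on Go 1.20 slice-to-array conversions. One behavioral difference worth noting: the conversion panics if the slice is shorter than 32 bytes, whereas a copy-based helper silently zero-pads, so the swap assumes checkpoint roots are always full-length. A small sketch:

    package main

    import "fmt"

    func main() {
        root := make([]byte, 32)
        root[0] = 0xaa

        // Go 1.20+: converting a slice to an array copies len(array) bytes and
        // panics if the slice is shorter than the array.
        arr := [32]byte(root)
        fmt.Printf("%x\n", arr[0]) // aa

        // Pre-1.20 equivalent, roughly what a ToBytes32-style helper does
        // (though such helpers typically pad short inputs instead of panicking,
        // an assumption worth checking before swapping one for the other).
        var arr2 [32]byte
        copy(arr2[:], root)
        fmt.Println(arr == arr2) // true
    }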
@@ -1121,6 +1108,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
genesis,
initSyncWaiter(cliCtx.Context, b.initialSyncComplete),
backfillService.WaitForCompletion,
+ b.fetchP2P(),
opts...,
)
if err != nil {

View File

@@ -115,6 +115,57 @@ func (s *Service) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return earliestAvailableSlot, custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the earliest available slot.
//
// IMPORTANT: This function should only be called when Fulu is enabled. The caller is responsible
// for checking params.FuluEnabled() before calling this function.
func (s *Service) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
s.custodyInfoLock.Lock()
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
return errors.New("no custody info available")
}
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot < s.custodyInfo.earliestAvailableSlot {
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
// This check applies regardless of whether we're early or late in the chain
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity, not epoch-level
// This prevents allowing increases to slots within minRequiredEpoch that are after its first slot
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "epoch start")
}
// Prevent any increase that would put earliest slot beyond the minimum required slot
if earliestAvailableSlot > s.custodyInfo.earliestAvailableSlot && earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot), minRequiredSlot, minRequiredEpoch,
)
}
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
return nil
}
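
The reason the guard converts minRequiredEpoch to a slot before comparing: an epoch-granularity check would let through slots that sit inside the boundary epoch but after its first slot. A worked example with illustrative numbers (32 slots per epoch, boundary epoch 976, slot 8 into it, matching the test below):

    package main

    import "fmt"

    const slotsPerEpoch = 32 // illustrative

    func epochStart(epoch uint64) uint64 { return epoch * slotsPerEpoch }

    func main() {
        minRequiredEpoch := uint64(976)
        attempted := epochStart(minRequiredEpoch) + 8 // slot 8 of the boundary epoch

        // Epoch-granularity comparison: 976 > 976 is false, so the bad increase
        // would slip through.
        fmt.Println(attempted/slotsPerEpoch > minRequiredEpoch) // false

        // Slot-granularity comparison: 31240 > 31232, so it is correctly rejected.
        fmt.Println(attempted > epochStart(minRequiredEpoch)) // true
    }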
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then

View File

@@ -4,6 +4,7 @@ import (
"context"
"strings"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
@@ -167,6 +168,148 @@ func TestUpdateCustodyInfo(t *testing.T) {
}
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
t.Run("Valid update", func(t *testing.T) {
const (
initialSlot primitives.Slot = 50
newSlot primitives.Slot = 100
groupCount uint64 = 5
)
// Set up a scenario where we're far enough in the chain that increasing to newSlot is valid
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 100 // Well beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
service := &Service{
// Set genesis time in the past so currentSlot is the "current" slot
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: initialSlot,
groupCount: groupCount,
},
}
err := service.UpdateEarliestAvailableSlot(newSlot)
require.NoError(t, err)
require.Equal(t, newSlot, service.custodyInfo.earliestAvailableSlot)
require.Equal(t, groupCount, service.custodyInfo.groupCount) // Should preserve group count
})
t.Run("Earlier slot - allowed for backfill", func(t *testing.T) {
const initialSlot primitives.Slot = 100
const earlierSlot primitives.Slot = 50
service := &Service{
genesisTime: time.Now(),
custodyInfo: &custodyInfo{
earliestAvailableSlot: initialSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(earlierSlot)
require.NoError(t, err)
require.Equal(t, earlierSlot, service.custodyInfo.earliestAvailableSlot) // Should decrease for backfill
})
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - late in chain", func(t *testing.T) {
// Set current time far enough in the future to have a meaningful MIN_EPOCHS_FOR_BLOCK_REQUESTS period
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 100 // Well beyond the minimum
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Calculate the minimum allowed epoch
minRequiredEpoch := currentEpoch - minEpochsForBlocks
minRequiredSlot := primitives.Slot(minRequiredEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Try to set earliest slot to a value within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period (should fail)
attemptedSlot := minRequiredSlot + 1000 // Within the mandatory retention period
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: minRequiredSlot - 100, // Current value is before the min required
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Prevent increase at epoch boundary - slot precision matters", func(t *testing.T) {
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 976 // Current epoch
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
minRequiredEpoch := currentEpoch - minEpochsForBlocks // = 976
storedEarliestSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) - 232 // Before minRequired
// Try to set earliest to slot 8 of the minRequiredEpoch (should fail with slot comparison)
attemptedSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) + 8
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: storedEarliestSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err, "Should prevent increasing earliest slot beyond the minimum required SLOT (not just epoch)")
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - early in chain", func(t *testing.T) {
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks - 10 // Early in chain, BEFORE we have MIN_EPOCHS_FOR_BLOCK_REQUESTS of history
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Current earliest slot is at slot 100
currentEarliestSlot := primitives.Slot(100)
// Try to increase earliest slot to slot 1000 (which would be within the mandatory window from currentSlot)
attemptedSlot := primitives.Slot(1000)
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: currentEarliestSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err, "Should prevent increasing earliest slot within the mandatory retention window, even early in chain")
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Nil custody info - should return error", func(t *testing.T) {
service := &Service{
genesisTime: time.Now(),
custodyInfo: nil, // No custody info set
}
err := service.UpdateEarliestAvailableSlot(100)
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "no custody info available"))
})
}
func TestCustodyGroupCountFromPeer(t *testing.T) {
const (
expectedENR uint64 = 7

View File

@@ -126,6 +126,7 @@ type (
EarliestAvailableSlot(ctx context.Context) (primitives.Slot, error)
CustodyGroupCount(ctx context.Context) (uint64, error)
UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
+ UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
CustodyGroupCountFromPeer(peer.ID) uint64
}
)

View File

@@ -11,14 +11,15 @@ import (
var (
knownAgentVersions = []string{
"erigon/caplin",
"grandine",
"js-libp2p",
"lighthouse",
"lodestar",
"nimbus",
"prysm",
"teku",
"lodestar",
"js-libp2p",
"rust-libp2p",
"erigon/caplin",
}
p2pPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_peer_count",

View File

@@ -514,17 +514,26 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
//
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
- subnetsPerNode := params.BeaconConfig().SubnetsPerNode
- subs := make([]uint64, 0, subnetsPerNode)
+ beaconConfig := params.BeaconConfig()
- for i := uint64(0); i < subnetsPerNode; i++ {
+ if flags.Get().SubscribeToAllSubnets {
+ subnets := make([]uint64, 0, beaconConfig.AttestationSubnetCount)
+ for i := range beaconConfig.AttestationSubnetCount {
+ subnets = append(subnets, i)
+ }
+ return subnets, nil
+ }
+ subnets := make([]uint64, 0, beaconConfig.SubnetsPerNode)
+ for i := range beaconConfig.SubnetsPerNode {
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
if err != nil {
- return nil, err
+ return nil, errors.Wrap(err, "compute subscribed subnet")
}
- subs = append(subs, sub)
+ subnets = append(subnets, sub)
}
- return subs, nil
+ return subnets, nil
}
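
The rewrite above adds an early exit for the subscribe-all-subnets flag: instead of deriving SUBNETS_PER_NODE deterministic subnets from the node ID, it returns every attestation subnet ID. A sketch of the two output shapes; computeSubnet below is a placeholder, not the spec's hash-based computeSubscribedSubnet:

    package main

    import "fmt"

    const (
        attestationSubnetCount = 64 // ATTESTATION_SUBNET_COUNT
        subnetsPerNode         = 2  // SUBNETS_PER_NODE
    )

    // computeSubnet is a stand-in; the real function hashes the node ID and
    // epoch per the spec pseudocode.
    func computeSubnet(nodeID, epoch, index uint64) uint64 {
        return (nodeID + epoch + index) % attestationSubnetCount
    }

    func subscribedSubnets(nodeID, epoch uint64, subscribeAll bool) []uint64 {
        if subscribeAll {
            // Every subnet ID, 0..63.
            subnets := make([]uint64, 0, attestationSubnetCount)
            for i := uint64(0); i < attestationSubnetCount; i++ {
                subnets = append(subnets, i)
            }
            return subnets
        }
        // Otherwise, SUBNETS_PER_NODE deterministic subnets for this node.
        subnets := make([]uint64, 0, subnetsPerNode)
        for i := uint64(0); i < subnetsPerNode; i++ {
            subnets = append(subnets, computeSubnet(nodeID, epoch, i))
        }
        return subnets
    }

    func main() {
        fmt.Println(len(subscribedSubnets(42, 1000, true))) // 64
        fmt.Println(subscribedSubnets(42, 1000, false))     // two deterministic IDs
    }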
// Spec pseudocode definition:

View File

@@ -514,17 +514,39 @@ func TestDataColumnSubnets(t *testing.T) {
func TestSubnetComputation(t *testing.T) {
db, err := enode.OpenDB("")
- assert.NoError(t, err)
+ require.NoError(t, err)
defer db.Close()
- priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
- assert.NoError(t, err)
- convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
- assert.NoError(t, err)
- localNode := enode.NewLocalNode(db, convertedKey)
- retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
- assert.NoError(t, err)
- assert.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
+ priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
+ require.NoError(t, err)
+ convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
+ require.NoError(t, err)
+ localNode := enode.NewLocalNode(db, convertedKey)
+ beaconConfig := params.BeaconConfig()
+ t.Run("standard", func(t *testing.T) {
+ retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
+ require.NoError(t, err)
+ require.Equal(t, beaconConfig.SubnetsPerNode, uint64(len(retrievedSubnets)))
+ require.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
+ })
+ t.Run("subscribed to all", func(t *testing.T) {
+ gFlags := new(flags.GlobalFlags)
+ gFlags.SubscribeToAllSubnets = true
+ flags.Init(gFlags)
+ defer flags.Init(new(flags.GlobalFlags))
+ retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
+ require.NoError(t, err)
+ require.Equal(t, beaconConfig.AttestationSubnetCount, uint64(len(retrievedSubnets)))
+ for i := range beaconConfig.AttestationSubnetCount {
+ require.Equal(t, i, retrievedSubnets[i])
+ }
+ })
}
func TestInitializePersistentSubnets(t *testing.T) {

View File

@@ -213,6 +213,11 @@ func (s *FakeP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return earliestAvailableSlot, custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot -- fake.
func (*FakeP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
return nil
}
// CustodyGroupCountFromPeer -- fake.
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0

View File

@@ -499,6 +499,15 @@ func (s *TestP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return s.earliestAvailableSlot, s.custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot .
func (s *TestP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
s.custodyInfoMut.Lock()
defer s.custodyInfoMut.Unlock()
s.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then

View File

@@ -12,6 +12,7 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core",
visibility = ["//visibility:public"],
deps = [
"//api/server:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"sort"
"time"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
@@ -36,24 +37,6 @@ import (
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
- // AggregateBroadcastFailedError represents an error scenario where
- // broadcasting an aggregate selection proof failed.
- type AggregateBroadcastFailedError struct {
- err error
- }
- // NewAggregateBroadcastFailedError creates a new error instance.
- func NewAggregateBroadcastFailedError(err error) AggregateBroadcastFailedError {
- return AggregateBroadcastFailedError{
- err: err,
- }
- }
- // Error returns the underlying error message.
- func (e *AggregateBroadcastFailedError) Error() string {
- return fmt.Sprintf("could not broadcast signed aggregated attestation: %s", e.err.Error())
- }
// ComputeValidatorPerformance reports the validator's latest balance along with other important metrics on
// rewards and penalties throughout its lifecycle in the beacon chain.
func (s *Service) ComputeValidatorPerformance(
@@ -360,7 +343,8 @@ func (s *Service) SubmitSignedContributionAndProof(
// Wait for p2p broadcast to complete and return the first error (if any)
err := errs.Wait()
if err != nil {
return &RpcError{Err: err, Reason: Internal}
log.WithError(err).Debug("Could not broadcast signed contribution and proof")
return &RpcError{Err: server.NewBroadcastFailedError("SignedContributionAndProof", err), Reason: Internal}
}
s.OperationNotifier.OperationFeed().Send(&feed.Event{
@@ -411,7 +395,8 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
}
if err := s.Broadcaster.Broadcast(ctx, agg); err != nil {
return &RpcError{Err: &AggregateBroadcastFailedError{err: err}, Reason: Internal}
log.WithError(err).Debug("Could not broadcast signed aggregate att and proof")
return &RpcError{Err: server.NewBroadcastFailedError("SignedAggregateAttAndProof", err), Reason: Internal}
}
if logrus.GetLevel() >= logrus.DebugLevel {

View File

@@ -97,7 +97,7 @@ func (s *Service) endpoints(
endpoints = append(endpoints, s.beaconEndpoints(ch, stater, blocker, validatorServer, coreService)...)
endpoints = append(endpoints, s.configEndpoints()...)
endpoints = append(endpoints, s.eventsEndpoints()...)
- endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
+ endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, blocker, coreService)...)
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
@@ -1184,6 +1184,7 @@ func (s *Service) eventsEndpoints() []endpoint {
func (s *Service) prysmBeaconEndpoints(
ch *stategen.CanonicalHistory,
stater lookup.Stater,
+ blocker lookup.Blocker,
coreService *core.Service,
) []endpoint {
server := &beaconprysm.Server{
@@ -1194,6 +1195,7 @@ func (s *Service) prysmBeaconEndpoints(
CanonicalHistory: ch,
BeaconDB: s.cfg.BeaconDB,
Stater: stater,
+ Blocker: blocker,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
CoreService: coreService,
@@ -1266,6 +1268,28 @@ func (s *Service) prysmBeaconEndpoints(
handler: server.PublishBlobs,
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/beacon/states/{state_id}/query",
name: namespace + ".QueryBeaconState",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.QueryBeaconState,
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/beacon/blocks/{block_id}/query",
name: namespace + ".QueryBeaconBlock",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.QueryBeaconBlock,
methods: []string{http.MethodPost},
},
}
}

View File

@@ -127,6 +127,8 @@ func Test_endpoints(t *testing.T) {
"/prysm/v1/beacon/states/{state_id}/validator_count": {http.MethodGet},
"/prysm/v1/beacon/chain_head": {http.MethodGet},
"/prysm/v1/beacon/blobs": {http.MethodPost},
"/prysm/v1/beacon/states/{state_id}/query": {http.MethodPost},
"/prysm/v1/beacon/blocks/{block_id}/query": {http.MethodPost},
}
prysmNodeRoutes := map[string][]string{

View File

@@ -6,8 +6,6 @@ import (
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/api"
@@ -31,6 +29,7 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const broadcastBLSChangesRateLimit = 128
@@ -200,22 +199,23 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
return
}
- if len(failedBroadcasts) > 0 {
- httputil.HandleError(
- w,
- fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
- http.StatusInternalServerError,
- )
- return
- }
if len(attFailures) > 0 {
- failuresErr := &server.IndexedVerificationFailureError{
+ failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
- Message: "One or more attestations failed validation",
+ Message: server.ErrIndexedValidationFail,
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
return
}
+ if len(failedBroadcasts) > 0 {
+ failuresErr := &server.IndexedErrorContainer{
+ Code: http.StatusInternalServerError,
+ Message: server.ErrIndexedBroadcastFail,
+ Failures: failedBroadcasts,
+ }
+ httputil.WriteError(w, failuresErr)
+ return
+ }
}
@@ -247,8 +247,8 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
- var attFailures []*server.IndexedVerificationFailure
- var failedBroadcasts []string
+ var attFailures []*server.IndexedError
+ var failedBroadcasts []*server.IndexedError
if v >= version.Electra {
attFailures, failedBroadcasts, err = s.handleAttestationsElectra(ctx, req.Data)
@@ -260,29 +260,30 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
- if len(failedBroadcasts) > 0 {
- httputil.HandleError(
- w,
- fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
- http.StatusInternalServerError,
- )
- return
- }
if len(attFailures) > 0 {
- failuresErr := &server.IndexedVerificationFailureError{
+ failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
- Message: "One or more attestations failed validation",
+ Message: server.ErrIndexedValidationFail,
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
return
}
+ if len(failedBroadcasts) > 0 {
+ failuresErr := &server.IndexedErrorContainer{
+ Code: http.StatusInternalServerError,
+ Message: server.ErrIndexedBroadcastFail,
+ Failures: failedBroadcasts,
+ }
+ httputil.WriteError(w, failuresErr)
+ return
+ }
}
func (s *Server) handleAttestationsElectra(
ctx context.Context,
data json.RawMessage,
- ) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
+ ) (attFailures []*server.IndexedError, failedBroadcasts []*server.IndexedError, err error) {
var sourceAttestations []*structs.SingleAttestation
currentEpoch := slots.ToEpoch(s.TimeFetcher.CurrentSlot())
if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
@@ -301,14 +302,14 @@ func (s *Server) handleAttestationsElectra(
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
@@ -317,6 +318,13 @@ func (s *Server) handleAttestationsElectra(
validAttestations = append(validAttestations, att)
}
+ // We store the error for the first failed broadcast and use it in the log message in case
+ // there are broadcast issues. Having a single log at the end instead of logging
+ // for every failed broadcast prevents log noise in case there are many failures.
+ // Even though we only retain the first error, there is a very good chance that all
+ // broadcasts fail for the same reason, so this should be sufficient in most cases.
+ var broadcastErr error
for i, singleAtt := range validAttestations {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.SingleAttReceived,
@@ -338,31 +346,45 @@ func (s *Server) handleAttestationsElectra(
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
- failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
- continue
+ return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
Message: server.NewBroadcastFailedError("SingleAttestation", err).Error(),
})
if broadcastErr == nil {
broadcastErr = err
}
continue
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("Could not save attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("Could not save attestation")
}
}
}
+ if len(failedBroadcasts) > 0 {
+ log.WithFields(logrus.Fields{
+ "failedCount": len(failedBroadcasts),
+ "totalCount": len(validAttestations),
+ }).WithError(broadcastErr).Error("Some attestations failed to be broadcast")
+ }
return attFailures, failedBroadcasts, nil
}
- func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
+ func (s *Server) handleAttestations(
+ ctx context.Context,
+ data json.RawMessage,
+ ) (attFailures []*server.IndexedError, failedBroadcasts []*server.IndexedError, err error) {
var sourceAttestations []*structs.Attestation
if slots.ToEpoch(s.TimeFetcher.CurrentSlot()) >= params.BeaconConfig().ElectraForkEpoch {
@@ -381,14 +403,14 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
@@ -397,6 +419,13 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
validAttestations = append(validAttestations, att)
}
+ // We store the error for the first failed broadcast and use it in the log message in case
+ // there are broadcast issues. Having a single log at the end instead of logging
+ // for every failed broadcast prevents log noise in case there are many failures.
+ // Even though we only retain the first error, there is a very good chance that all
+ // broadcasts fail for the same reason, so this should be sufficient in most cases.
+ var broadcastErr error
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
@@ -413,32 +442,43 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
- failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
- continue
+ return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
Message: server.NewBroadcastFailedError("Attestation", err).Error(),
})
if broadcastErr == nil {
broadcastErr = err
}
continue
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("Could not save attestation")
}
} else if att.IsAggregated() {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
log.WithError(err).Error("Could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
log.WithError(err).Error("Could not save unaggregated attestation")
}
}
}
+ if len(failedBroadcasts) > 0 {
+ log.WithFields(logrus.Fields{
+ "failedCount": len(failedBroadcasts),
+ "totalCount": len(validAttestations),
+ }).WithError(broadcastErr).Error("Some attestations failed to be broadcast")
+ }
return attFailures, failedBroadcasts, nil
}
@@ -541,11 +581,11 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
}
var validMessages []*eth.SyncCommitteeMessage
- var msgFailures []*server.IndexedVerificationFailure
+ var msgFailures []*server.IndexedError
for i, sourceMsg := range req.Data {
msg, err := sourceMsg.ToConsensus()
if err != nil {
- msgFailures = append(msgFailures, &server.IndexedVerificationFailure{
+ msgFailures = append(msgFailures, &server.IndexedError{
Index: i,
Message: "Could not convert request message to consensus message: " + err.Error(),
})
@@ -562,7 +602,7 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
}
if len(msgFailures) > 0 {
- failuresErr := &server.IndexedVerificationFailureError{
+ failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: "One or more messages failed validation",
Failures: msgFailures,
@@ -581,7 +621,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v", err), http.StatusInternalServerError)
return
}
- var failures []*server.IndexedVerificationFailure
+ var failures []*server.IndexedError
var toBroadcast []*eth.SignedBLSToExecutionChange
var req []*structs.SignedBLSToExecutionChange
@@ -602,7 +642,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
for i, change := range req {
sbls, err := change.ToConsensus()
if err != nil {
- failures = append(failures, &server.IndexedVerificationFailure{
+ failures = append(failures, &server.IndexedError{
Index: i,
Message: "Unable to decode SignedBLSToExecutionChange: " + err.Error(),
})
@@ -610,14 +650,14 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
}
_, err = blocks.ValidateBLSToExecutionChange(st, sbls)
if err != nil {
- failures = append(failures, &server.IndexedVerificationFailure{
+ failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not validate SignedBLSToExecutionChange: " + err.Error(),
})
continue
}
if err := blocks.VerifyBLSChangeSignature(st, sbls); err != nil {
- failures = append(failures, &server.IndexedVerificationFailure{
+ failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not validate signature: " + err.Error(),
})
@@ -636,9 +676,9 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
}
go s.broadcastBLSChanges(context.Background(), toBroadcast)
if len(failures) > 0 {
- failuresErr := &server.IndexedVerificationFailureError{
+ failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
- Message: "One or more BLSToExecutionChange failed validation",
+ Message: server.ErrIndexedValidationFail,
Failures: failures,
}
httputil.WriteError(w, failuresErr)
@@ -655,18 +695,18 @@ func (s *Server) broadcastBLSBatch(ctx context.Context, ptr *[]*eth.SignedBLSToE
}
st, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
log.WithError(err).Error("could not get head state")
log.WithError(err).Error("Could not get head state")
return
}
for _, ch := range (*ptr)[:limit] {
if ch != nil {
_, err := blocks.ValidateBLSToExecutionChange(st, ch)
if err != nil {
log.WithError(err).Error("could not validate BLS to execution change")
log.WithError(err).Error("Could not validate BLS to execution change")
continue
}
if err := s.Broadcaster.Broadcast(ctx, ch); err != nil {
log.WithError(err).Error("could not broadcast BLS to execution changes.")
log.WithError(err).Error("Could not broadcast BLS to execution changes.")
}
}
}

View File

@@ -638,7 +638,7 @@ func TestSubmitAttestations(t *testing.T) {
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
- e := &server.IndexedVerificationFailureError{}
+ e := &server.IndexedErrorContainer{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
@@ -772,7 +772,7 @@ func TestSubmitAttestations(t *testing.T) {
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
- e := &server.IndexedVerificationFailureError{}
+ e := &server.IndexedErrorContainer{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
@@ -873,7 +873,7 @@ func TestSubmitAttestations(t *testing.T) {
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
- e := &server.IndexedVerificationFailureError{}
+ e := &server.IndexedErrorContainer{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
@@ -1538,7 +1538,7 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
s.SubmitBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
require.StringContains(t, "One or more BLSToExecutionChange failed validation", writer.Body.String())
require.StringContains(t, "One or more messages failed validation", writer.Body.String())
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)

View File

@@ -151,7 +151,7 @@ func (s *Server) SyncCommitteeRewards(w http.ResponseWriter, r *http.Request) {
}
}
- _, proposerReward, err := altair.ProcessSyncAggregate(r.Context(), st, sa)
+ _, proposerReward, err := altair.ProcessSyncAggregateNoVerifySig(r.Context(), st, sa)
if err != nil {
httputil.HandleError(w, "Could not get sync aggregate rewards: "+err.Error(), http.StatusInternalServerError)
return

View File

@@ -73,7 +73,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
// ExitInformation is expensive to compute, only do it if we need it.
exitInfo = validators.ExitInformation(st)
}
- st, err = coreblocks.ProcessAttesterSlashings(ctx, st, blk.Body().AttesterSlashings(), exitInfo)
+ st, err = coreblocks.ProcessAttesterSlashingsNoVerify(ctx, st, blk.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get attester slashing rewards: " + err.Error(),
@@ -87,7 +87,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
Code: http.StatusInternalServerError,
}
}
- st, err = coreblocks.ProcessProposerSlashings(ctx, st, blk.Body().ProposerSlashings(), exitInfo)
+ st, err = coreblocks.ProcessProposerSlashingsNoVerify(ctx, st, blk.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get proposer slashing rewards: " + err.Error(),
@@ -109,7 +109,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
}
}
var syncCommitteeReward uint64
- _, syncCommitteeReward, err = altair.ProcessSyncAggregate(ctx, st, sa)
+ _, syncCommitteeReward, err = altair.ProcessSyncAggregateNoVerifySig(ctx, st, sa)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get sync aggregate rewards: " + err.Error(),

View File

@@ -12,6 +12,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",

View File

@@ -14,6 +14,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
@@ -268,22 +269,61 @@ func (s *Server) SubmitContributionAndProofs(w http.ResponseWriter, r *http.Requ
return
}
- for _, item := range reqData {
+ var failures []*server.IndexedError
+ var failedBroadcasts []*server.IndexedError
+ for i, item := range reqData {
var contribution structs.SignedContributionAndProof
if err := json.Unmarshal(item, &contribution); err != nil {
httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not unmarshal message: " + err.Error(),
})
continue
}
consensusItem, err := contribution.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert contribution to consensus format: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not convert request contribution to consensus contribution: " + err.Error(),
})
continue
}
- if rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem); rpcError != nil {
- httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
- return
+ rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem)
+ if rpcError != nil {
+ var broadcastFailedErr *server.BroadcastFailedError
+ if errors.As(rpcError.Err, &broadcastFailedErr) {
+ failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
+ Index: i,
+ Message: rpcError.Err.Error(),
+ })
+ continue
+ } else {
+ httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
+ return
+ }
}
}
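// Per-item failures are collected and reported in bulk below: validation problems
// surface as a 400 with indexed errors, broadcast problems as a 500, so clients can
// tell exactly which submitted items failed and why.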
+ if len(failures) > 0 {
+ failuresErr := &server.IndexedErrorContainer{
+ Code: http.StatusBadRequest,
+ Message: server.ErrIndexedValidationFail,
+ Failures: failures,
+ }
+ httputil.WriteError(w, failuresErr)
+ return
+ }
+ if len(failedBroadcasts) > 0 {
+ failuresErr := &server.IndexedErrorContainer{
+ Code: http.StatusInternalServerError,
+ Message: server.ErrIndexedBroadcastFail,
+ Failures: failedBroadcasts,
+ }
+ httputil.WriteError(w, failuresErr)
+ return
+ }
}
// Deprecated: use SubmitAggregateAndProofsV2 instead
@@ -322,8 +362,8 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request
}
rpcError := s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
if rpcError != nil {
- var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
- ok := errors.As(rpcError.Err, &aggregateBroadcastFailedError)
+ var broadcastFailedErr *server.BroadcastFailedError
+ ok := errors.As(rpcError.Err, &broadcastFailedErr)
if ok {
broadcastFailed = true
} else {
@@ -368,49 +408,83 @@ func (s *Server) SubmitAggregateAndProofsV2(w http.ResponseWriter, r *http.Reque
return
}
- broadcastFailed := false
+ var failures []*server.IndexedError
+ var failedBroadcasts []*server.IndexedError
var rpcError *core.RpcError
- for _, raw := range reqData {
+ for i, raw := range reqData {
if v >= version.Electra {
var signedAggregate structs.SignedAggregateAttestationAndProofElectra
err = json.Unmarshal(raw, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not parse message: " + err.Error(),
})
continue
}
consensusItem, err := signedAggregate.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not convert request aggregate to consensus aggregate: " + err.Error(),
})
continue
}
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
} else {
var signedAggregate structs.SignedAggregateAttestationAndProof
err = json.Unmarshal(raw, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not parse message: " + err.Error(),
})
continue
}
consensusItem, err := signedAggregate.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not convert request aggregate to consensus aggregate: " + err.Error(),
})
continue
}
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
}
if rpcError != nil {
- var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
- if errors.As(rpcError.Err, &aggregateBroadcastFailedError) {
- broadcastFailed = true
+ var broadcastFailedErr *server.BroadcastFailedError
+ if errors.As(rpcError.Err, &broadcastFailedErr) {
+ failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
+ Index: i,
+ Message: rpcError.Err.Error(),
+ })
+ continue
} else {
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
}
}
- if broadcastFailed {
- httputil.HandleError(w, "Could not broadcast one or more signed aggregated attestations", http.StatusInternalServerError)
+ if len(failures) > 0 {
+ failuresErr := &server.IndexedErrorContainer{
+ Code: http.StatusBadRequest,
+ Message: server.ErrIndexedValidationFail,
+ Failures: failures,
+ }
+ httputil.WriteError(w, failuresErr)
+ return
+ }
+ if len(failedBroadcasts) > 0 {
+ failuresErr := &server.IndexedErrorContainer{
+ Code: http.StatusInternalServerError,
+ Message: server.ErrIndexedBroadcastFail,
+ Failures: failedBroadcasts,
+ }
+ httputil.WriteError(w, failuresErr)
+ return
+ }
}
@@ -523,7 +597,18 @@ func (s *Server) SubmitSyncCommitteeSubscription(w http.ResponseWriter, r *http.
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
totalDuration := epochDuration * time.Duration(epochsToWatch)
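// A sync committee has SYNC_COMMITTEE_SIZE positions split evenly across
// SYNC_COMMITTEE_SUBNET_COUNT subnets, so committee position idx lives on subnet
// idx / subcommitteeSize; the seen map deduplicates so each subnet is subscribed at most once.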
- cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
+ subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
+ seen := make(map[uint64]bool)
+ var subnetIndices []uint64
+ for _, idx := range sub.SyncCommitteeIndices {
+ subnetIdx := idx / subcommitteeSize
+ if !seen[subnetIdx] {
+ seen[subnetIdx] = true
+ subnetIndices = append(subnetIndices, subnetIdx)
+ }
+ }
+ cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, subnetIndices, totalDuration)
}
}

View File

@@ -1049,9 +1049,8 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
s.SubmitSyncCommitteeSubscription(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
subnets, _, _, _ := cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
- require.Equal(t, 2, len(subnets))
+ require.Equal(t, 1, len(subnets))
assert.Equal(t, uint64(0), subnets[0])
- assert.Equal(t, uint64(2), subnets[1])
})
t.Run("multiple", func(t *testing.T) {
cache.SyncSubnetIDs.EmptyAllCaches()
@@ -1070,7 +1069,7 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
assert.Equal(t, uint64(0), subnets[0])
subnets, _, _, _ = cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
require.Equal(t, 1, len(subnets))
- assert.Equal(t, uint64(2), subnets[0])
+ assert.Equal(t, uint64(0), subnets[0])
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)

View File

@@ -3,6 +3,7 @@ package lookup
import (
"context"
"fmt"
"math"
"strconv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
@@ -283,9 +284,13 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
return make([]*blocks.VerifiedROBlob, 0), nil
}
- fuluForkSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
- if err != nil {
- return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
+ // Compute the first Fulu slot.
+ fuluForkSlot := primitives.Slot(math.MaxUint64)
+ if fuluForkEpoch := params.BeaconConfig().FuluForkEpoch; fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
+ fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
+ if err != nil {
+ return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
+ }
+ }
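// With the Fulu fork epoch left at its "unscheduled" sentinel (MaxUint64), fuluForkSlot
// stays MaxUint64, so every slot compares as pre-Fulu and blobs keep being served from
// blob storage; the MaxUint64 test below covers exactly this case.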
// Convert versioned hashes to indices if provided

View File

@@ -587,6 +587,51 @@ func TestGetBlob(t *testing.T) {
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "not supported before", rpcErr.Err.Error())
})
t.Run("fulu fork epoch not set (MaxUint64)", func(t *testing.T) {
// Setup with Deneb fork enabled but Fulu fork epoch set to MaxUint64 (not set/far future)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
cfg.FuluForkEpoch = primitives.Epoch(math.MaxUint64) // Not set / far future
params.OverrideBeaconConfig(cfg)
// Create and save Deneb block and blob sidecars
denebSlot := util.SlotAtEpoch(t, cfg.DenebForkEpoch)
_, tempBlobStorage := filesystem.NewEphemeralBlobStorageAndFs(t)
denebBlockWithBlobs, denebBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [fieldparams.RootLength]byte{}, denebSlot, 2, util.WithDenebSlot(denebSlot))
denebBlockRoot := denebBlockWithBlobs.Root()
verifiedDenebBlobs := verification.FakeVerifySliceForTest(t, denebBlobSidecars)
for i := range verifiedDenebBlobs {
err := tempBlobStorage.Save(verifiedDenebBlobs[i])
require.NoError(t, err)
}
err := db.SaveBlock(t.Context(), denebBlockWithBlobs)
require.NoError(t, err)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: tempBlobStorage,
}
// Should successfully retrieve blobs even when FuluForkEpoch is not set
retrievedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]))
require.IsNil(t, rpcErr)
require.Equal(t, 2, len(retrievedBlobs))
// Verify blob content matches
for i, retrievedBlob := range retrievedBlobs {
require.NotNil(t, retrievedBlob.BlobSidecar)
require.DeepEqual(t, denebBlobSidecars[i].Blob, retrievedBlob.Blob)
require.DeepEqual(t, denebBlobSidecars[i].KzgCommitment, retrievedBlob.KzgCommitment)
}
})
}
func TestBlobs_CommitmentOrdering(t *testing.T) {

View File

@@ -5,11 +5,13 @@ go_library(
srcs = [
"handlers.go",
"server.go",
"ssz_query.go",
"validator_count.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/beacon",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -27,10 +29,13 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/query:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -41,10 +46,12 @@ go_test(
name = "go_default_test",
srcs = [
"handlers_test.go",
"ssz_query_test.go",
"validator_count_test.go",
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -63,10 +70,13 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -18,6 +18,7 @@ type Server struct {
CanonicalHistory *stategen.CanonicalHistory
BeaconDB beacondb.ReadOnlyDatabase
Stater lookup.Stater
+ Blocker lookup.Blocker
ChainInfoFetcher blockchain.ChainInfoFetcher
FinalizationFetcher blockchain.FinalizationFetcher
CoreService *core.Service

View File

@@ -0,0 +1,202 @@
package beacon
import (
"encoding/json"
"errors"
"io"
"net/http"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
"github.com/OffchainLabs/prysm/v6/encoding/ssz/query"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/httputil"
sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
"github.com/OffchainLabs/prysm/v6/runtime/version"
)
// QueryBeaconState handles an SSZ query request for a BeaconState.
// It returns a serialized SSZQueryResponse as bytes.
func (s *Server) QueryBeaconState(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.QueryBeaconState")
defer span.End()
stateID := r.PathValue("state_id")
if stateID == "" {
httputil.HandleError(w, "state_id is required in URL params", http.StatusBadRequest)
return
}
// Validate the query path before the state lookup: the lookup might be expensive.
var req structs.SSZQueryRequest
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Query) == 0 {
httputil.HandleError(w, "Empty query submitted", http.StatusBadRequest)
return
}
path, err := query.ParsePath(req.Query)
if err != nil {
httputil.HandleError(w, "Could not parse path '"+req.Query+"': "+err.Error(), http.StatusBadRequest)
return
}
stateRoot, err := s.Stater.StateRoot(ctx, []byte(stateID))
if err != nil {
var rootNotFoundErr *lookup.StateRootNotFoundError
if errors.As(err, &rootNotFoundErr) {
httputil.HandleError(w, "State root not found: "+rootNotFoundErr.Error(), http.StatusNotFound)
return
}
httputil.HandleError(w, "Could not get state root: "+err.Error(), http.StatusInternalServerError)
return
}
st, err := s.Stater.State(ctx, []byte(stateID))
if err != nil {
shared.WriteStateFetchError(w, err)
return
}
// NOTE: Using unsafe conversion to proto is acceptable here,
// as we operate on a copy of the state returned by Stater.
sszObject, ok := st.ToProtoUnsafe().(query.SSZObject)
if !ok {
httputil.HandleError(w, "Unsupported state version for querying: "+version.String(st.Version()), http.StatusBadRequest)
return
}
info, err := query.AnalyzeObject(sszObject)
if err != nil {
httputil.HandleError(w, "Could not analyze state object: "+err.Error(), http.StatusInternalServerError)
return
}
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
if err != nil {
httputil.HandleError(w, "Could not calculate offset and length for path '"+req.Query+"': "+err.Error(), http.StatusInternalServerError)
return
}
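// The offset/length pair locates the queried field inside the flat SSZ encoding of the
// whole state, so the response below can simply slice the serialized bytes instead of
// re-encoding the field on its own.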
encodedState, err := st.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal state to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
response := &sszquerypb.SSZQueryResponse{
Root: stateRoot,
Result: encodedState[offset : offset+length],
}
responseSsz, err := response.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal response to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(st.Version()))
httputil.WriteSsz(w, responseSsz)
}
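// Example (hypothetical client usage, mirroring the route exercised in the tests):
// POST /prysm/v1/beacon/states/head/query
// {"query": ".validators[0].effective_balance"}
// The reply is an SSZ-encoded SSZQueryResponse whose Root is the state root and whose
// Result holds the raw little-endian bytes of the requested field.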
// QueryBeaconBlock handles an SSZ query request for a BeaconBlock.
// It returns a serialized SSZQueryResponse as bytes.
func (s *Server) QueryBeaconBlock(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.QueryBeaconBlock")
defer span.End()
blockId := r.PathValue("block_id")
if blockId == "" {
httputil.HandleError(w, "block_id is required in URL params", http.StatusBadRequest)
return
}
// Validate the query path before the block lookup: the lookup might be expensive.
var req structs.SSZQueryRequest
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Query) == 0 {
httputil.HandleError(w, "Empty query submitted", http.StatusBadRequest)
return
}
path, err := query.ParsePath(req.Query)
if err != nil {
httputil.HandleError(w, "Could not parse path '"+req.Query+"': "+err.Error(), http.StatusBadRequest)
return
}
signedBlock, err := s.Blocker.Block(ctx, []byte(blockId))
if !shared.WriteBlockFetchError(w, signedBlock, err) {
return
}
protoBlock, err := signedBlock.Block().Proto()
if err != nil {
httputil.HandleError(w, "Could not convert block to proto: "+err.Error(), http.StatusInternalServerError)
return
}
block, ok := protoBlock.(query.SSZObject)
if !ok {
httputil.HandleError(w, "Unsupported block version for querying: "+version.String(signedBlock.Version()), http.StatusBadRequest)
return
}
info, err := query.AnalyzeObject(block)
if err != nil {
httputil.HandleError(w, "Could not analyze block object: "+err.Error(), http.StatusInternalServerError)
return
}
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
if err != nil {
httputil.HandleError(w, "Could not calculate offset and length for path '"+req.Query+"': "+err.Error(), http.StatusInternalServerError)
return
}
encodedBlock, err := signedBlock.Block().MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal block to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
blockRoot, err := block.HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not compute block root: "+err.Error(), http.StatusInternalServerError)
return
}
response := &sszquerypb.SSZQueryResponse{
Root: blockRoot[:],
Result: encodedBlock[offset : offset+length],
}
responseSsz, err := response.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal response to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(signedBlock.Version()))
httputil.WriteSsz(w, responseSsz)
}

View File

@@ -0,0 +1,335 @@
package beacon
import (
"bytes"
"context"
"encoding/binary"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
)
func TestQueryBeaconState(t *testing.T) {
ctx := context.Background()
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
require.NoError(t, st.UpdateBalancesAtIndex(0, 42000000000))
tests := []struct {
path string
expectedValue []byte
}{
{
path: ".slot",
expectedValue: func() []byte {
slot := st.Slot()
result, _ := slot.MarshalSSZ()
return result
}(),
},
{
path: ".latest_block_header",
expectedValue: func() []byte {
header := st.LatestBlockHeader()
result, _ := header.MarshalSSZ()
return result
}(),
},
{
path: ".validators",
expectedValue: func() []byte {
b := make([]byte, 0)
validators := st.Validators()
for _, v := range validators {
vBytes, _ := v.MarshalSSZ()
b = append(b, vBytes...)
}
return b
}(),
},
{
path: ".validators[0]",
expectedValue: func() []byte {
v, _ := st.ValidatorAtIndex(0)
result, _ := v.MarshalSSZ()
return result
}(),
},
{
path: ".validators[0].withdrawal_credentials",
expectedValue: func() []byte {
v, _ := st.ValidatorAtIndex(0)
return v.WithdrawalCredentials
}(),
},
{
path: ".validators[0].effective_balance",
expectedValue: func() []byte {
v, _ := st.ValidatorAtIndex(0)
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, uint64(v.EffectiveBalance))
return b
}(),
},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
chainService := &chainMock.ChainService{Optimistic: false, FinalizedRoots: make(map[[32]byte]bool)}
s := &Server{
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
Stater: &testutil.MockStater{
BeaconStateRoot: stateRoot[:],
BeaconState: st,
},
}
requestBody := &structs.SSZQueryRequest{
Query: tt.path,
}
var buf bytes.Buffer
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/states/{state_id}/query", &buf)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.QueryBeaconState(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
expectedResponse := &sszquerypb.SSZQueryResponse{
Root: stateRoot[:],
Result: tt.expectedValue,
}
sszExpectedResponse, err := expectedResponse.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpectedResponse, writer.Body.Bytes())
})
}
}
func TestQueryBeaconStateInvalidRequest(t *testing.T) {
ctx := context.Background()
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
tests := []struct {
name string
stateId string
path string
code int
errorString string
}{
{
name: "empty query submitted",
stateId: "head",
path: "",
errorString: "Empty query submitted",
},
{
name: "invalid path",
stateId: "head",
path: ".invalid[]]",
errorString: "Could not parse path",
},
{
name: "non-existent field",
stateId: "head",
path: ".non_existent_field",
code: http.StatusInternalServerError,
errorString: "Could not calculate offset and length for path",
},
{
name: "empty state ID",
stateId: "",
path: "",
},
{
name: "far future slot",
stateId: "1000000000000",
path: "",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
chainService := &chainMock.ChainService{Optimistic: false, FinalizedRoots: make(map[[32]byte]bool)}
s := &Server{
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
Stater: &testutil.MockStater{
BeaconStateRoot: stateRoot[:],
BeaconState: st,
},
}
requestBody := &structs.SSZQueryRequest{
Query: tt.path,
}
var buf bytes.Buffer
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/states/{state_id}/query", &buf)
request.SetPathValue("state_id", tt.stateId)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.QueryBeaconState(writer, request)
if tt.code == 0 {
tt.code = http.StatusBadRequest
}
require.Equal(t, tt.code, writer.Code)
if tt.errorString != "" {
errorString := writer.Body.String()
require.Equal(t, true, strings.Contains(errorString, tt.errorString))
}
})
}
}
func TestQueryBeaconBlock(t *testing.T) {
randaoReveal, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.NoError(t, err)
root, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
signature, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.NoError(t, err)
att := &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root,
Source: &eth.Checkpoint{
Epoch: 1,
Root: root,
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: root,
},
},
Signature: signature,
}
tests := []struct {
name string
path string
block interfaces.ReadOnlySignedBeaconBlock
expectedValue []byte
}{
{
name: "slot",
path: ".slot",
block: func() interfaces.ReadOnlySignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Slot = 123
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return sb
}(),
expectedValue: func() []byte {
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, 123)
return b
}(),
},
{
name: "randao_reveal",
path: ".body.randao_reveal",
block: func() interfaces.ReadOnlySignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Body.RandaoReveal = randaoReveal
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return sb
}(),
expectedValue: randaoReveal,
},
{
name: "attestations",
path: ".body.attestations",
block: func() interfaces.ReadOnlySignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{
att,
}
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return sb
}(),
expectedValue: func() []byte {
b, err := att.MarshalSSZ()
require.NoError(t, err)
return b
}(),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: tt.block}
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
}
requestBody := &structs.SSZQueryRequest{
Query: tt.path,
}
var buf bytes.Buffer
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/blocks/{block_id}/query", &buf)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.QueryBeaconBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
blockRoot, err := tt.block.Block().HashTreeRoot()
require.NoError(t, err)
expectedResponse := &sszquerypb.SSZQueryResponse{
Root: blockRoot[:],
Result: tt.expectedValue,
}
sszExpectedResponse, err := expectedResponse.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpectedResponse, writer.Body.Bytes())
})
}
}

View File

@@ -229,7 +229,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
sBlk.SetVoluntaryExits(vs.getExits(head, sBlk.Block().Slot()))
// Set sync aggregate. New in Altair.
- vs.setSyncAggregate(ctx, sBlk)
+ vs.setSyncAggregate(ctx, sBlk, head)
// Set bls to execution change. New in Capella.
vs.setBlsToExecData(sBlk, head)
@@ -316,6 +316,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
}
if err != nil {
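// A bad-gateway response does not prove the block was dropped: the relay may still
// publish it, in which case it arrives over P2P, so the proposal is treated as
// optimistically successful rather than failed.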
+ if errors.Is(err, builderapi.ErrBadGateway) && block.IsBlinded() {
+ log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
+ return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
+ }
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
}

View File

@@ -5,6 +5,7 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -20,12 +21,12 @@ import (
"github.com/prysmaticlabs/go-bitfield"
)
- func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock) {
+ func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock, headState state.BeaconState) {
if blk.Version() < version.Altair {
return
}
- syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot())
+ syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot(), headState)
if err != nil {
log.WithError(err).Error("Could not get sync aggregate")
emptySig := [96]byte{0xC0}
@@ -47,7 +48,7 @@ func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBea
// getSyncAggregate retrieves the sync contributions from the pool to construct the sync aggregate object.
// The contributions are filtered based on matching of the input root and slot then profitability.
- func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte) (*ethpb.SyncAggregate, error) {
+ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte, headState state.BeaconState) (*ethpb.SyncAggregate, error) {
_, span := trace.StartSpan(ctx, "ProposerServer.getSyncAggregate")
defer span.End()
@@ -62,7 +63,7 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
// Contributions have to match the input root
proposerContributions := proposerSyncContributions(poolContributions).filterByBlockRoot(root)
- aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions)
+ aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions, headState)
if err != nil {
return nil, errors.Wrap(err, "could not get aggregated sync committee messages")
}
@@ -123,6 +124,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
slot primitives.Slot,
root [32]byte,
poolContributions []*ethpb.SyncCommitteeContribution,
+ st state.BeaconState,
) ([]*ethpb.SyncCommitteeContribution, error) {
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / subcommitteeCount
@@ -146,10 +148,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
messageSigs = append(messageSigs, msg.Signature)
}
}
- st, err := vs.HeadFetcher.HeadState(ctx)
- if err != nil {
- return nil, errors.Wrap(err, "could not get head state")
- }
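// The state is now supplied by the caller so that sync committee positions are computed
// against the state the block is actually built on; right at a sync committee period
// boundary the head state can still carry the previous committee (see
// TestGetSyncAggregate_CorrectStateAtSyncCommitteePeriodBoundary below).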
positions, err := helpers.CurrentPeriodPositions(st, messageIndices)
if err != nil {
return nil, errors.Wrap(err, "could not get sync committee positions")

View File

@@ -9,6 +9,7 @@ import (
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -51,15 +52,15 @@ func TestProposer_GetSyncAggregate_OK(t *testing.T) {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
- aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
+ aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot), st)
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
- aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
+ aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot), st)
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
- aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
+ aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot), st)
require.NoError(t, err)
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
}
@@ -68,7 +69,7 @@ func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
s := &Server{} // Server is not initialized with a sync committee pool.
- s.setSyncAggregate(t.Context(), b)
+ s.setSyncAggregate(t.Context(), b, nil)
agg, err := b.Block().Body().SyncAggregate()
require.NoError(t, err)
@@ -138,7 +139,7 @@ func TestProposer_GetSyncAggregate_IncludesSyncCommitteeMessages(t *testing.T) {
}
// The final sync aggregates must have indexes [0,1,2,3] set for both subcommittees
- sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r)
+ sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r, st)
require.NoError(t, err)
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(0))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(1))
@@ -194,8 +195,99 @@ func Test_aggregatedSyncCommitteeMessages_NoIntersectionWithPoolContributions(t
BlockRoot: r[:],
}
- aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont})
+ aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont}, st)
require.NoError(t, err)
require.Equal(t, 1, len(aggregated))
assert.Equal(t, false, aggregated[0].AggregationBits.BitAt(3))
}
func TestGetSyncAggregate_CorrectStateAtSyncCommitteePeriodBoundary(t *testing.T) {
helpers.ClearCache()
syncPeriodBoundaryEpoch := primitives.Epoch(274176) // Real epoch from the bug report
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
preEpochState, keys := util.DeterministicGenesisStateAltair(t, 100)
require.NoError(t, preEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch-1)) // Last slot of previous epoch
postEpochState := preEpochState.Copy()
require.NoError(t, postEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch+2)) // After 2 missed slots
oldCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
newCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < int(params.BeaconConfig().SyncCommitteeSize); i++ {
if i < len(keys) {
oldCommittee.Pubkeys[i] = keys[i%len(keys)].PublicKey().Marshal()
// Use different keys for new committee to simulate rotation
newCommittee.Pubkeys[i] = keys[(i+10)%len(keys)].PublicKey().Marshal()
}
}
require.NoError(t, preEpochState.SetCurrentSyncCommittee(oldCommittee))
require.NoError(t, postEpochState.SetCurrentSyncCommittee(newCommittee))
mockChainService := &chainmock.ChainService{
State: postEpochState,
}
proposerServer := &Server{
HeadFetcher: mockChainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
slot := primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch + 1 // First slot of new epoch
blockRoot := [32]byte{0x01, 0x02, 0x03}
msg1 := &ethpb.SyncCommitteeMessage{
Slot: slot,
BlockRoot: blockRoot[:],
ValidatorIndex: 0, // This validator is in position 0 of OLD committee
Signature: bls.NewAggregateSignature().Marshal(),
}
msg2 := &ethpb.SyncCommitteeMessage{
Slot: slot,
BlockRoot: blockRoot[:],
ValidatorIndex: 1, // This validator is in position 1 of OLD committee
Signature: bls.NewAggregateSignature().Marshal(),
}
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg1))
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg2))
aggregateWrongState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, postEpochState)
require.NoError(t, err)
aggregateCorrectState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, preEpochState)
require.NoError(t, err)
wrongStateBits := bitfield.Bitlist(aggregateWrongState.SyncCommitteeBits)
correctStateBits := bitfield.Bitlist(aggregateCorrectState.SyncCommitteeBits)
wrongStateHasValidators := false
correctStateHasValidators := false
for i := 0; i < len(wrongStateBits); i++ {
if wrongStateBits[i] != 0 {
wrongStateHasValidators = true
break
}
}
for i := 0; i < len(correctStateBits); i++ {
if correctStateBits[i] != 0 {
correctStateHasValidators = true
break
}
}
assert.Equal(t, true, correctStateHasValidators, "Correct state should include validators that sent messages")
assert.Equal(t, false, wrongStateHasValidators, "Wrong state should not find validators in incorrect sync committee")
t.Logf("Wrong state aggregate bits: %x (has validators: %v)", wrongStateBits, wrongStateHasValidators)
t.Logf("Correct state aggregate bits: %x (has validators: %v)", correctStateBits, correctStateHasValidators)
}

View File

@@ -6,6 +6,7 @@ import (
"testing"
"time"
builderapi "github.com/OffchainLabs/prysm/v6/api/client/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
@@ -3634,4 +3635,52 @@ func TestServer_ProposeBeaconBlock_PostFuluBlindedBlock(t *testing.T) {
require.NotNil(t, res)
require.NotEmpty(t, res.BlockRoot)
})
t.Run("blinded block - 502 error handling", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 10
params.OverrideBeaconConfig(cfg)
mockBuilder := &builderTest.MockBuilderService{
HasConfigured: true,
Cfg: &builderTest.Config{BeaconDB: db},
PayloadDeneb: &enginev1.ExecutionPayloadDeneb{},
ErrSubmitBlindedBlock: builderapi.ErrBadGateway,
}
c := &mock.ChainService{State: beaconState, Root: parentRoot[:]}
proposerServer := &Server{
ChainStartFetcher: &mockExecution.Chain{},
Eth1InfoFetcher: &mockExecution.Chain{},
Eth1BlockFetcher: &mockExecution.Chain{},
BlockReceiver: c,
BlobReceiver: c,
HeadFetcher: c,
BlockNotifier: c.BlockNotifier(),
OperationNotifier: c.OperationNotifier(),
StateGen: stategen.New(db, doublylinkedtree.New()),
TimeFetcher: c,
SyncChecker: &mockSync.Sync{IsSyncing: false},
BeaconDB: db,
BlockBuilder: mockBuilder,
P2P: &mockp2p.MockBroadcaster{},
}
blindedBlock := util.NewBlindedBeaconBlockDeneb()
blindedBlock.Message.Slot = 160 // This puts us at epoch 5 (160/32 = 5)
blindedBlock.Message.ProposerIndex = 0
blindedBlock.Message.ParentRoot = parentRoot[:]
blindedBlock.Message.StateRoot = make([]byte, 32)
req := &ethpb.GenericSignedBeaconBlock{
Block: &ethpb.GenericSignedBeaconBlock_BlindedDeneb{BlindedDeneb: blindedBlock},
}
// Should handle 502 error gracefully and continue with original blinded block
res, err := proposerServer.ProposeBeaconBlock(ctx, req)
require.NoError(t, err)
require.NotNil(t, res)
require.NotEmpty(t, res.BlockRoot)
})
}

View File

@@ -266,6 +266,8 @@ type WriteOnlyEth1Data interface {
SetEth1DepositIndex(val uint64) error
ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (primitives.Epoch, error)
ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error)
+ SetExitBalanceToConsume(val primitives.Gwei) error
+ SetEarliestExitEpoch(val primitives.Epoch) error
}
// WriteOnlyValidators defines a struct which only has write access to validators methods.
@@ -333,6 +335,7 @@ type WriteOnlyWithdrawals interface {
DequeuePendingPartialWithdrawals(num uint64) error
SetNextWithdrawalIndex(i uint64) error
SetNextWithdrawalValidatorIndex(i primitives.ValidatorIndex) error
+ SetPendingPartialWithdrawals(val []*ethpb.PendingPartialWithdrawal) error
}
type WriteOnlyConsolidations interface {

View File

@@ -91,3 +91,33 @@ func (b *BeaconState) exitEpochAndUpdateChurn(totalActiveBalance primitives.Gwei
return b.earliestExitEpoch, nil
}
// SetExitBalanceToConsume sets the exit balance to consume. This method mutates the state.
func (b *BeaconState) SetExitBalanceToConsume(exitBalanceToConsume primitives.Gwei) error {
if b.version < version.Electra {
return errNotSupported("SetExitBalanceToConsume", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
b.exitBalanceToConsume = exitBalanceToConsume
b.markFieldAsDirty(types.ExitBalanceToConsume)
return nil
}
// SetEarliestExitEpoch sets the earliest exit epoch. This method mutates the state.
func (b *BeaconState) SetEarliestExitEpoch(earliestExitEpoch primitives.Epoch) error {
if b.version < version.Electra {
return errNotSupported("SetEarliestExitEpoch", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
b.earliestExitEpoch = earliestExitEpoch
b.markFieldAsDirty(types.EarliestExitEpoch)
return nil
}

View File

@@ -100,3 +100,24 @@ func (b *BeaconState) DequeuePendingPartialWithdrawals(n uint64) error {
return nil
}
// SetPendingPartialWithdrawals sets the pending partial withdrawals. This method mutates the state.
func (b *BeaconState) SetPendingPartialWithdrawals(pendingPartialWithdrawals []*eth.PendingPartialWithdrawal) error {
if b.version < version.Electra {
return errNotSupported("SetPendingPartialWithdrawals", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
if pendingPartialWithdrawals == nil {
return errors.New("cannot set nil pending partial withdrawals")
}
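// Release this state's reference to the previously shared slice and take a fresh one,
// so other state copies still pointing at the old slice are left untouched.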
b.sharedFieldReferences[types.PendingPartialWithdrawals].MinusRef()
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.pendingPartialWithdrawals = pendingPartialWithdrawals
b.markFieldAsDirty(types.PendingPartialWithdrawals)
return nil
}

View File

@@ -650,6 +650,11 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateFulu) (state.BeaconState
for i, v := range st.ProposerLookahead {
proposerLookahead[i] = primitives.ValidatorIndex(v)
}
+ // Proposer lookahead must be exactly 2 * SLOTS_PER_EPOCH in length. We fill in with zeroes instead of erroring out here
+ for i := len(proposerLookahead); i < 2*fieldparams.SlotsPerEpoch; i++ {
+ proposerLookahead = append(proposerLookahead, 0)
+ }
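// For example, with 32 slots per epoch the lookahead must hold 64 validator indices;
// a shorter input is padded with zero indices (validator 0) rather than rejected.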
fieldCount := params.BeaconConfig().BeaconStateFuluFieldCount
b := &BeaconState{
version: version.Fulu,

View File

@@ -17,7 +17,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
@@ -61,7 +60,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",

View File

@@ -3,12 +3,12 @@ package backfill
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -348,7 +348,7 @@ func (*Service) Status() error {
// minimumBackfillSlot determines the lowest slot that backfill needs to download based on looking back
// MIN_EPOCHS_FOR_BLOCK_REQUESTS from the current slot.
func minimumBackfillSlot(current primitives.Slot) primitives.Slot {
- oe := helpers.MinEpochsForBlockRequests()
+ oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oe > slots.MaxSafeEpoch() {
oe = slots.MaxSafeEpoch()
}

View File

@@ -5,7 +5,6 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
@@ -84,7 +83,7 @@ func TestServiceInit(t *testing.T) {
}
func TestMinimumBackfillSlot(t *testing.T) {
- oe := helpers.MinEpochsForBlockRequests()
+ oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currSlot := (oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
minSlot := minimumBackfillSlot(primitives.Slot(currSlot))
@@ -109,7 +108,7 @@ func testReadN(ctx context.Context, t *testing.T, c chan batch, n int, into []ba
}
func TestBackfillMinSlotDefault(t *testing.T) {
- oe := helpers.MinEpochsForBlockRequests()
+ oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
current := primitives.Slot((oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
s := &Service{}
specMin := minimumBackfillSlot(current)

View File

@@ -14,7 +14,7 @@ import (
"github.com/pkg/errors"
)
- const signatureVerificationInterval = 50 * time.Millisecond
+ const signatureVerificationInterval = 5 * time.Millisecond
type signatureVerifier struct {
set *bls.SignatureBatch

View File

@@ -169,7 +169,7 @@ func FetchDataColumnSidecars(
result[root] = sidecars
}
log.WithField("finalMissingRootCount", len(incompleteRoots)).Debug("Failed to fetch data column sidecars from storage and peers using rescue mode")
log.WithField("finalMissingRootCount", len(incompleteRoots)).Warning("Failed to fetch data column sidecars")
return result, missingByRoot, nil
}
@@ -738,7 +738,7 @@ func fetchDataColumnSidecarsFromPeers(
roDataColumns, err := sendDataColumnSidecarsRequest(params, slotByRoot, slotsWithCommitments, peerID, indicesByRoot)
if err != nil {
log.WithError(err).Warning("Failed to send data column sidecars request")
log.WithError(err).Debug("Failed to send data column sidecars request")
return
}
@@ -1122,19 +1122,21 @@ func randomPeer(
}
}
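// Prefer a random peer that still has request bandwidth; when none is available,
// wait and retry, aborting promptly if the context is cancelled instead of sleeping
// through the whole wait period.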
- slices.Sort(nonRateLimitedPeers)
- if len(nonRateLimitedPeers) == 0 {
- log.WithFields(logrus.Fields{
- "peerCount": peerCount,
- "delay": waitPeriod,
- }).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
- time.Sleep(waitPeriod)
- continue
+ if len(nonRateLimitedPeers) > 0 {
+ slices.Sort(nonRateLimitedPeers)
+ randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
+ return nonRateLimitedPeers[randomIndex], nil
+ }
- randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
- return nonRateLimitedPeers[randomIndex], nil
+ log.WithFields(logrus.Fields{
+ "peerCount": peerCount,
+ "delay": waitPeriod,
+ }).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
+ select {
+ case <-time.After(waitPeriod):
+ case <-ctx.Done():
+ }
}
return "", ctx.Err()

View File

@@ -45,6 +45,7 @@ func TestFetchDataColumnSidecars(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
+ cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
// Start the trusted setup.
@@ -760,6 +761,12 @@ func TestVerifyDataColumnSidecarsByPeer(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
+ cfg := params.BeaconConfig().Copy()
+ cfg.FuluForkEpoch = 0
+ cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 2}}
+ params.OverrideBeaconConfig(cfg)
t.Run("nominal", func(t *testing.T) {
const (
start, stop = 0, 15

View File

@@ -683,6 +683,7 @@ func TestFetchOriginColumns(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
+ cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
const (

View File

@@ -241,6 +241,9 @@ func (s *Service) saveAttestation(att ethpb.Att) error {
if features.Get().EnableExperimentalAttestationPool {
return s.cfg.attestationCache.Add(att)
}
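// Aggregated attestations go to the aggregated pool so block proposal can pack them
// directly; single-bit attestations go to the unaggregated pool to be aggregated later.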
+ if att.IsAggregated() {
+ return s.cfg.attPool.SaveAggregatedAttestation(att)
+ }
return s.cfg.attPool.SaveUnaggregatedAttestation(att)
}
@@ -300,40 +303,26 @@ func (s *Service) processVerifiedAttestation(
}
func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAggregateAttAndProof) {
-att := aggregate.AggregateAttestationAndProof().AggregateVal()
-// Save the pending aggregated attestation to the pool if it passes the aggregated
-// validation steps.
-valRes, err := s.validateAggregatedAtt(ctx, aggregate)
+res, err := s.validateAggregatedAtt(ctx, aggregate)
if err != nil {
log.WithError(err).Debug("Pending aggregated attestation failed validation")
return
}
-aggValid := pubsub.ValidationAccept == valRes
-if aggValid && s.validateBlockInAttestation(ctx, aggregate) {
-if features.Get().EnableExperimentalAttestationPool {
-if err = s.cfg.attestationCache.Add(att); err != nil {
-log.WithError(err).Debug("Could not save aggregated attestation")
-return
-}
-} else {
-if att.IsAggregated() {
-if err = s.cfg.attPool.SaveAggregatedAttestation(att); err != nil {
-log.WithError(err).Debug("Could not save aggregated attestation")
-return
-}
-} else if err = s.cfg.attPool.SaveUnaggregatedAttestation(att); err != nil {
-log.WithError(err).Debug("Could not save unaggregated attestation")
-return
-}
-}
+if res != pubsub.ValidationAccept || !s.validateBlockInAttestation(ctx, aggregate) {
+log.Debug("Pending aggregated attestation failed validation")
+return
+}
-s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
+att := aggregate.AggregateAttestationAndProof().AggregateVal()
+if err := s.saveAttestation(att); err != nil {
+log.WithError(err).Debug("Could not save aggregated attestation")
+return
+}
-// Broadcasting the signed attestation again once a node is able to process it.
-if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
-log.WithError(err).Debug("Could not broadcast")
-}
+s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
+if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
+log.WithError(err).Debug("Could not broadcast aggregated attestation")
+}
}
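
The processAggregate rewrite flattens one wide success branch into guard clauses: validation failures return early, and the happy path (save, mark the aggregator seen, rebroadcast) reads top to bottom. The shape, with stand-in funcs for the service methods:

package main

import (
	"errors"
	"fmt"
)

func process(valid, blockKnown bool, save func() error, markSeen func(), broadcast func() error) error {
	if !valid || !blockKnown {
		return errors.New("failed validation") // early exit, no nesting
	}
	if err := save(); err != nil {
		return err
	}
	markSeen()
	return broadcast()
}

func main() {
	err := process(true, true,
		func() error { return nil },
		func() { fmt.Println("aggregator marked seen") },
		func() error { fmt.Println("rebroadcast"); return nil },
	)
	fmt.Println(err) // <nil>
}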

View File

@@ -147,6 +147,11 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
}
cancelFunction()
+// Process pending attestations for this block.
+if err := s.processPendingAttsForBlock(ctx, blkRoot); err != nil {
+log.WithError(err).Debug("Failed to process pending attestations for block")
+}
// Remove the processed block from the queue.
if err := s.removeBlockFromQueue(b, blkRoot); err != nil {
return err
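
This hook completes the queue-and-replay mechanism: attestations that arrive before their block are parked by block root, and once the block is processed they are drained. A stand-in sketch of that bookkeeping (Prysm's actual queue is more elaborate):

package main

import "fmt"

type root [32]byte

type pendingAtts map[root][]string

// park stores an attestation whose block has not arrived yet.
func (p pendingAtts) park(r root, att string) { p[r] = append(p[r], att) }

// drainFor plays the role of processPendingAttsForBlock: once a block is
// inserted, everything waiting on its root is processed and removed.
func (p pendingAtts) drainFor(r root) {
	for _, att := range p[r] {
		fmt.Println("processing parked attestation:", att)
	}
	delete(p, r)
}

func main() {
	q := pendingAtts{}
	var blkRoot root
	q.park(blkRoot, "att-1")
	q.drainFor(blkRoot) // processing parked attestation: att-1
	fmt.Println(len(q)) // 0
}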

View File

@@ -70,6 +70,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
log.Trace("Serving data column sidecars by range")
if rangeParameters == nil {
closeStream(stream, log)
+return nil
}
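
The one added line is a nil-guard fix: without the return, the handler closes the stream and then falls through to code that dereferences the nil range parameters. Minimal illustration, with stand-in types:

package main

import "fmt"

type rangeParams struct{ start, count uint64 }

func handle(p *rangeParams) error {
	if p == nil {
		fmt.Println("close stream")
		return nil // the missing return: without it the next line panics
	}
	fmt.Println("serving", p.count, "slots from", p.start)
	return nil
}

func main() {
	_ = handle(nil)
	_ = handle(&rangeParams{start: 400, count: 50})
}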

View File

@@ -23,16 +23,15 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
-consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
-//beaconConfig.FuluForkEpoch = beaconConfig.ElectraForkEpoch + 100
+beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
params.BeaconConfig().InitializeForkSchedule()
@@ -47,6 +46,7 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
t.Run("invalid request", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)
@@ -72,8 +72,8 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
code, _, err := readStatusCodeNoDeadline(stream, localP2P.Encoding())
-require.NoError(t, err)
-require.Equal(t, responseCodeInvalidRequest, code)
+assert.NoError(t, err)
+assert.Equal(t, responseCodeInvalidRequest, code)
})
localP2P.Connect(remoteP2P)
@@ -94,6 +94,48 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
}
})
+t.Run("in the future", func(t *testing.T) {
+slot := primitives.Slot(400)
+mockNower.SetSlot(t, clock, slot)
+localP2P, remoteP2P := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
+protocolID := protocol.ID(fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1))
+service := &Service{
+cfg: &config{
+p2p: localP2P,
+chain: &chainMock.ChainService{
+Slot: &slot,
+},
+clock: clock,
+},
+rateLimiter: newRateLimiter(localP2P),
+}
+var wg sync.WaitGroup
+wg.Add(1)
+remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
+defer wg.Done()
+_, err := readChunkedDataColumnSidecar(stream, remoteP2P, ctxMap)
+assert.Equal(t, true, errors.Is(err, io.EOF))
+})
+localP2P.Connect(remoteP2P)
+stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
+require.NoError(t, err)
+msg := &pb.DataColumnSidecarsByRangeRequest{
+StartSlot: slot + 1,
+Count: 50,
+Columns: []uint64{1, 2, 3, 4, 6, 7, 8, 9, 10},
+}
+err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
+require.NoError(t, err)
+})
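
The new subtest pins down handler behavior for requests that start past the current slot: the handler returns nil and writes no chunks, so the remote reader just sees io.EOF. One hedged way to get that behavior is to clamp the served range to the current slot, sketched below — not necessarily how the handler implements it:

package main

import "fmt"

func servedSlots(start, count, current uint64) []uint64 {
	var served []uint64
	for slot := start; slot < start+count && slot <= current; slot++ {
		served = append(served, slot)
	}
	return served
}

func main() {
	fmt.Println(len(servedSlots(401, 50, 400))) // 0: nothing written, reader sees EOF
	fmt.Println(len(servedSlots(395, 50, 400))) // 6: slots 395 through 400
}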
t.Run("nominal", func(t *testing.T) {
slot := primitives.Slot(400)
@@ -133,12 +175,12 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
signedBeaconBlockPb.Block.ParentRoot = roots[i-1][:]
}
-signedBeaconBlock, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlockPb)
+signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// There is a discrepancy between the root of the beacon block and the rodata column root,
// but for the sake of this test, we actually don't care.
-roblock, err := consensusblocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
+roblock, err := blocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
require.NoError(t, err)
roBlocks = append(roBlocks, roblock)
@@ -178,28 +220,28 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
break
}
-require.NoError(t, err)
+assert.NoError(t, err)
sidecars = append(sidecars, sidecar)
}
-require.Equal(t, 8, len(sidecars))
-require.Equal(t, root0, sidecars[0].BlockRoot())
-require.Equal(t, root0, sidecars[1].BlockRoot())
-require.Equal(t, root0, sidecars[2].BlockRoot())
-require.Equal(t, root3, sidecars[3].BlockRoot())
-require.Equal(t, root3, sidecars[4].BlockRoot())
-require.Equal(t, root5, sidecars[5].BlockRoot())
-require.Equal(t, root5, sidecars[6].BlockRoot())
-require.Equal(t, root5, sidecars[7].BlockRoot())
+assert.Equal(t, 8, len(sidecars))
+assert.Equal(t, root0, sidecars[0].BlockRoot())
+assert.Equal(t, root0, sidecars[1].BlockRoot())
+assert.Equal(t, root0, sidecars[2].BlockRoot())
+assert.Equal(t, root3, sidecars[3].BlockRoot())
+assert.Equal(t, root3, sidecars[4].BlockRoot())
+assert.Equal(t, root5, sidecars[5].BlockRoot())
+assert.Equal(t, root5, sidecars[6].BlockRoot())
+assert.Equal(t, root5, sidecars[7].BlockRoot())
-require.Equal(t, uint64(1), sidecars[0].Index)
-require.Equal(t, uint64(2), sidecars[1].Index)
-require.Equal(t, uint64(3), sidecars[2].Index)
-require.Equal(t, uint64(4), sidecars[3].Index)
-require.Equal(t, uint64(6), sidecars[4].Index)
-require.Equal(t, uint64(7), sidecars[5].Index)
-require.Equal(t, uint64(8), sidecars[6].Index)
-require.Equal(t, uint64(9), sidecars[7].Index)
+assert.Equal(t, uint64(1), sidecars[0].Index)
+assert.Equal(t, uint64(2), sidecars[1].Index)
+assert.Equal(t, uint64(3), sidecars[2].Index)
+assert.Equal(t, uint64(4), sidecars[3].Index)
+assert.Equal(t, uint64(6), sidecars[4].Index)
+assert.Equal(t, uint64(7), sidecars[5].Index)
+assert.Equal(t, uint64(8), sidecars[6].Index)
+assert.Equal(t, uint64(9), sidecars[7].Index)
})
localP2P.Connect(remoteP2P)
@@ -215,7 +257,6 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})
}
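
A note on the require→assert swaps through this file: require-style helpers stop the test via t.FailNow, which the testing package only permits from the goroutine running the test function. Inside a stream handler callback, which runs on a libp2p goroutine, assert-style helpers that record the failure without aborting are the safe choice. The shape, as a hypothetical test:

import "testing"

func TestStreamHandlerShape(t *testing.T) {
	done := make(chan struct{})
	go func() { // e.g. a network stream handler
		defer close(done)
		// assert.Equal(t, want, got)  // safe: records the failure, keeps running
		// require.Equal(t, want, got) // unsafe: calls t.FailNow off the test goroutine
	}()
	<-done
}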
func TestValidateDataColumnsByRange(t *testing.T) {

View File

@@ -716,10 +716,6 @@ func (s *Service) samplingSize() (uint64, error) {
}
func (s *Service) persistentAndAggregatorSubnetIndices(currentSlot primitives.Slot) map[uint64]bool {
-if flags.Get().SubscribeToAllSubnets {
-return mapFromCount(params.BeaconConfig().AttestationSubnetCount)
-}
persistentSubnetIndices := persistentSubnetIndices()
aggregatorSubnetIndices := aggregatorSubnetIndices(currentSlot)
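
With the early return deleted, persistentAndAggregatorSubnetIndices always reports the union of persistent and aggregator subnets; presumably the subscribe-to-all-subnets flag is now honored by a caller instead. A sketch of the two shapes involved — mapFromCount mirrors the helper named in the deleted lines, the rest is illustrative:

package main

import "fmt"

// mapFromCount marks every subnet index below n, the subscribe-all case.
func mapFromCount(n uint64) map[uint64]bool {
	m := make(map[uint64]bool, n)
	for i := uint64(0); i < n; i++ {
		m[i] = true
	}
	return m
}

// union merges persistent and aggregator subnet indices.
func union(persistent, aggregator []uint64) map[uint64]bool {
	m := make(map[uint64]bool)
	for _, idx := range persistent {
		m[idx] = true
	}
	for _, idx := range aggregator {
		m[idx] = true
	}
	return m
}

func main() {
	fmt.Println(union([]uint64{1, 2}, []uint64{2, 3})) // map[1:true 2:true 3:true]
	fmt.Println(len(mapFromCount(64)))                 // 64
}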

View File

@@ -94,9 +94,11 @@ func TestVerifyIndexInCommittee_ExistsInBeaconCommittee(t *testing.T) {
assert.ErrorContains(t, wanted, err)
assert.Equal(t, pubsub.ValidationReject, result)
-att.Data.CommitteeIndex = 10000
+// Test the edge case where committee index equals count (should be rejected)
+// With 64 validators and minimal config, count = 2, so valid indices are 0 and 1
+att.Data.CommitteeIndex = 2
_, _, result, err = service.validateCommitteeIndexAndCount(ctx, att, s)
-require.ErrorContains(t, "committee index 10000 > 2", err)
+require.ErrorContains(t, "committee index 2 >= 2", err)
assert.Equal(t, pubsub.ValidationReject, result)
}

View File

@@ -112,8 +112,12 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
// Verify the block being voted and the processed state is in beaconDB and the block has passed validation if it's in the beaconDB.
blockRoot := bytesutil.ToBytes32(data.BeaconBlockRoot)
if !s.hasBlockAndState(ctx, blockRoot) {
+// Block not yet available - save attestation to pending queue for later processing
+// when the block arrives. Return ValidationIgnore so gossip doesn't potentially penalize the peer.
s.savePendingAtt(att)
return pubsub.ValidationIgnore, nil
}
+// Block exists - verify it's in forkchoice (i.e., it's a descendant of the finalized checkpoint)
if !s.cfg.chain.InForkchoice(blockRoot) {
tracing.AnnotateError(span, blockchain.ErrNotDescendantOfFinalized)
return pubsub.ValidationIgnore, blockchain.ErrNotDescendantOfFinalized
@@ -274,8 +278,8 @@ func (s *Service) validateCommitteeIndexAndCount(
} else {
ci = a.GetCommitteeIndex()
}
-if uint64(ci) > count {
-return 0, 0, pubsub.ValidationReject, fmt.Errorf("committee index %d > %d", ci, count)
+if uint64(ci) >= count {
+return 0, 0, pubsub.ValidationReject, fmt.Errorf("committee index %d >= %d", ci, count)
}
return ci, valCount, pubsub.ValidationAccept, nil
}
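
The substantive fix in this file is the boundary test: committee indices are zero-based, so with count committees the valid indices are 0..count-1, and index == count must be rejected; the old > comparison wrongly accepted it. Distilled:

package main

import "fmt"

func validateCommitteeIndex(ci, count uint64) error {
	if ci >= count { // was ci > count, which wrongly accepted ci == count
		return fmt.Errorf("committee index %d >= %d", ci, count)
	}
	return nil
}

func main() {
	fmt.Println(validateCommitteeIndex(1, 2)) // <nil>
	fmt.Println(validateCommitteeIndex(2, 2)) // committee index 2 >= 2
}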

Some files were not shown because too many files have changed in this diff.