Compare commits

...

44 Commits

Author SHA1 Message Date
potuz
2e2b850a74 Remove condition of C itself 2025-11-05 09:28:10 -03:00
potuz
db426e3c35 remove mod 32 2025-10-31 14:40:31 -03:00
potuz
bc9fc37f83 Add table of contents and diagram
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-31 14:20:27 -03:00
potuz
9749592a40 Add design doc for gossip validation 2025-10-31 14:12:07 -03:00
Muzry
374bae9c81 Fix incorrect version used when sending attestation version in Fulu (#15950)
* Fix incorrect version used when sending attestation version in Fulu

* update typo

* fix Eth-Consensus-Version in submit_signed_aggregate_proof.go
2025-10-31 13:17:44 +00:00
kasey
3e0492a636 also ignore errors from readdirnames (#15947)
* also ignore errors from readdirnames

* test case for empty blobs dir

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-30 19:02:25 +00:00
rocksload
5a1a5b5ae5 refactor: use slices.Contains to simplify code (#15646)
Signed-off-by: rocksload <rocksload@outlook.com>
2025-10-29 14:40:33 +00:00
james-prysm
dbb2f0b047 changelog (#15929) 2025-10-28 15:42:05 +00:00
Manu NALEPA
7b3c11c818 Do not serve sidecars if corresponding block is not available in the database (#15933)
* Implement `AvailableBlocks`.

* `blobSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.

* `dataColumnSidecarByRootRPCHandler`: Do not do extra work if only needed for TRACE logging.

* `TestDataColumnSidecarsByRootRPCHandler`: Re-arrange (no functional change).

* `TestDataColumnSidecarsByRootRPCHandler`: Save blocks corresponding to sidecars into DB.

* `dataColumnSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.

* Add changelog

* `TestDataColumnSidecarsByRootRPCHandler`: Use `assert` instead of `require` in goroutines.

https://github.com/stretchr/testify?tab=readme-ov-file#require-package
2025-10-28 15:39:35 +00:00
Manu NALEPA
c9b34d556d Update go-netroute to v0.3.0 (#15934) 2025-10-28 12:57:14 +00:00
fernantho
10a2f0687b SSZ-QL: calculate generalized indices for elements (#15873)
* added tests for calculating generalized indices

* added first version of GI calculation walking the specified path with no recursion. Extended test coverage for bitlists and bitvectors; vectors need more testing.

* refactored code. Detached PathElement processing, currently done at the beginning. Swap to regex to gain flexibility.

* added an updateRoot function with the GI formula. more refactoring

* added changelog

* replaced TODO tag

* updated some comments

* simplified code - removed duplicated code in processingLengthField function

* run gazelle

* merging all input path processing into path.go

* reviewed Jun's feedback

* removed unnecessary idx pointer var + fixed error with length data type (uint64 instead of uint8)

* refactored path.go after merging path elements from generalized_indices.go

* re-computed GIs for tests as VariableTestContainer added a new field.

* added minor comment - rawPath MUST be snake case

removed extractFieldName func.

* fixed vector GI calculation - updated tests GIs

* removed updateRoot function in favor of inline code

* path input data enforced to be snake case

* added sanity checks for accessing outbound element indices - checked against vector.length/list.limit

* fixed issues triggered after merging develop

* Removed redundant comment

Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>

* removed unreachable condition as `strings.Split` always returns a slice with length >= 1

If s does not contain sep and sep is not empty, Split returns a slice of
length 1 whose only element is s.

* added tests to cover edge cases + cleaned code (toLower is no longer needed in the extractFieldName function)

* added Jun's feedback + more testing

* postponed snake case conversion to do it on a per-element basis. Added more testing focused mainly on snake case conversion

* addressed several of Jun's comments.

* added sanity check to prevent taking the length of a multi-dimensional array. Added more tests with extended paths

* Update encoding/ssz/query/generalized_index.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Update encoding/ssz/query/generalized_index.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Update encoding/ssz/query/generalized_index.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* placed constant bitsPerChunk in the right place. Exported BitsPerChunk and BytesPerChunk and updated code that uses them

* added helpers for computing GI of each data type

* changed %q in favor of %s

* Update encoding/ssz/query/path.go

Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>

* removed the least restrictive condition isBasicType

* replaced the length of containerInfo.order with containerInfo.fields for clarity

* removed outdated comment

* removed toSnakeCase conversion.

* moved isBasicType func to its natural place, SSZType

* cosmetic refactor

- renamed itemLengthFromInfo to itemLength (same name is in spec).
- arranged all SSZ helpers.

* cleaned tests

* renamed "root" to "index"

* removed unnecessary check for negative integers. Replaced %q with %s.

* refactored regex variables and prevented re-assignment

* added length regex explanation

* added more testing for stressing regex for path processing

* renamed currentIndex to parentIndex for clarity and documented the returns from calculate<Type>GeneralizedIndex functions

* Update encoding/ssz/query/generalized_index.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* run gazelle

* fixed never asserted error. Updated error message

---------

Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-27 23:27:34 +00:00
Manu NALEPA
4fb75d6d0b Add some metrics improvements (#15922)
* Define TCP and QUIC as `InternetProtocol` (no functional change).

* Group types. (No functional changes)

* Rename variables and use range syntax.

* Add `p2pMaxPeers` and `p2pPeerCountDirectionType` metrics

* `p2p_subscribed_topic_peer_total`: Reset to avoid dangling values.

* `validateConfig`:
- Use `Warning` with fields instead of `Warnf`.
- Avoid both modifying the input value in place and returning it.

* Add `p2p_minimum_peers_per_subnet` metric.

* `beaconConfig` => `cfg`.

https://github.com/OffchainLabs/prysm/pull/15880#discussion_r2436826215

* Add changelog

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-26 15:16:05 +00:00
terence
6d596edea2 Use SlotTicker instead of time.Ticker for attestation pool pruning (#15917)
* Use SlotTicker instead of time.Ticker for attestation pool pruning

* Offset one second before slot start
2025-10-24 15:35:26 +00:00
Bastin
9153c5a202 light client logging (#15927) 2025-10-24 14:42:27 +00:00
james-prysm
26ce94e224 removes misleading keymanager info log (#15926)
* simple change

* fixing test
2025-10-24 14:28:30 +00:00
terence
255ea2fac1 Return optimistic response only when handling blinded blocks (#15925)
* Return optimistic response only when handling blinded blocks in proposer

* Remove blind condition
2025-10-24 03:37:32 +00:00
terence
46bc81b4c8 Add metric to track data columns recovered from execution layer (#15924) 2025-10-23 15:50:25 +00:00
kasey
9c4774b82e default new blob storage layouts to by-epoch (#15904)
* default new blob storage layouts to by-epoch

also, do not log migration message until we see a directory that needs to be migrated

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* manu feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-10-22 20:09:18 +00:00
terence
7dd4f5948c Update consensus spec tests to v1.6.0-beta.1 with new hashes and URL template (#15918) 2025-10-22 18:22:19 +00:00
Radosław Kapka
2f090c52d9 Allow custom headers in validator client HTTP requests (#15884)
* Allow custom headers in validator client HTTP requests

* changelog <3

* improve flag description

* Bastin's review

* James' review

* add godoc for NodeConnectionOption
2025-10-22 13:47:35 +00:00
Manu NALEPA
3ecb5d0b67 Remove "Reading static P2P private key from a file." log if Fulu is enabled. (#15913) 2025-10-22 11:52:31 +00:00
james-prysm
253f91930a changelog v6.1.3 (#15901)
* updating changelog

* adding changelog

* kasey's comment
2025-10-21 16:46:44 +00:00
terence
7c3e45637f Fix proposer to use advanced state for sync committee position calculation (#15905)
* Sync committee use correct state to calculate position

* Unit test
2025-10-21 15:29:46 +00:00
Manu NALEPA
96429c5089 updateCustodyInfoInDB: Use NumberOfCustodyGroups instead of NumberOfColumns. (#15908)
* `updateCustodyInfoInDB`: Add tests.

* `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`.

* Add changelog.

* Fix Potuz's comment.
2025-10-21 14:37:04 +00:00
satushh
d613f3a262 Update Earliest available slot when pruning (#15694)
* Update Earliest available slot when pruning

* bazel run //:gazelle -- fix

* custodyUpdater interface to avoid import cycle

* bazel run //:gazelle -- fix

* simplify test

* separation of concerns

* debug log for updating eas

* UpdateEarliestAvailableSlot function in CustodyManager

* fix test

* UpdateEarliestAvailableSlot function for FakeP2P

* lint

* UpdateEarliestAvailableSlot instead of UpdateCustodyInfo + check for Fulu

* fix test and lint

* bugfix: enforce minimum retention period in pruner

* remove MinEpochsForBlockRequests function and use from config

* remove modifying earliest_available_slot after data column pruning

* correct earliestAvailableSlot validation: allow backfill decrease but prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS

* lint

* bazel run //:gazelle -- fix

* lint and remove unwanted debug logs

* Return a wrapped error, and let the caller decide what to do

* fix tests because updateEarliestSlot returns error now

* avoid re-doing computation in the test function

* lint and correct changelog

* custody updater should be a mandatory part of the pruner service

* ensure eas is never increased while we are in the block requests window

* slot level granularity edge case

* update the value stored in the DB

* log tidy up

* use errNoCustodyInfo

* allow earliestAvailableSlot edit when custodyGroupCount doesn't change

* undo the minimal config change

* add context to CustodyGroupCount after merging from develop

* cosmetic change

* shift responsibility from caller to callee, protection for updateEarliestSlot. UpdateEarliestAvailableSlot returns cgc

* allow increase in earliestAvailableSlot only when custodyGroupCount also increases

* remove CustodyGroupCount as it is no longer needed now that UpdateEarliestAvailableSlot returns cgc

* proper place for log and name refactor

* test for Nil custody info

* allow decreasing earliest slot in DB (just like in memory)

* invert if statement to make more readable

* UpdateEarliestAvailableSlot for DB (equivalent of p2p's UpdateEarliestAvailableSlot) & undo changes made to UpdateCustodyInfo

* in UpdateEarliestAvailableSlot, no need to return unused values

* no need to log stored group count

* log.WithField instead of log.WithFields
2025-10-21 13:54:52 +00:00
MozirDmitriy
5751dbf134 kv: write recovered state summaries to stateSummaryBucket (#15896)
* kv: write recovered state summaries to stateSummaryBucket

* Create MozirDmitriy_fix_kv-recover-state-summurt-bucket.md

* add a test
2025-10-21 11:21:10 +00:00
Potuz
426fbcc3b0 Add state diff serialization (#15250)
* Add serialization code for state diffs

Adds serialization code for state diffs.
Adds code to create and apply state diffs
Adds fuzz tests and benchmarks for serialization/deserialization

Co-authored-by: Claude <noreply@anthropic.com>

* Add Fulu support

* Review #1

* gazelle

* Fix some fuzzers

* Failing cases from the fuzzers in consensus-types/hdiff

* Fix more fuzz tests

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* add comparison tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Use ConvertToElectra in UpgradeToElectra

* Add comments on constants

* Fix readEth1Data

* remove colons from error messages

* Add design doc

* Apply suggestions from code review

Bast

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-10-20 21:52:32 +00:00
Manu NALEPA
a3baf98b05 VerifyDataColumnsSidecarKZGProofs: Check if sizes match. (#15892) 2025-10-20 17:06:13 +00:00
Jun Song
5a897dfa6b SSZ-QL: Add endpoints (BeaconState/BeaconBlock) (#15888)
* Move ssz_query objects into testing folder (ensuring test objects only used in test environment)

* Add containers for response

* Export sszInfo

* Add QueryBeaconState/Block

* Add comments and few refactor

* Fix merge conflict issues

* Return 500 when calculating the offset fails

* Add test for QueryBeaconState

* Add test for QueryBeaconBlock

* Changelog :)

* Rename `QuerySSZRequest` to `SSZQueryRequest`

* Fix middleware hooks for RPC to accept JSON from client and return SSZ

* Convert to `SSZObject` directly from proto

* Move marshalling/calculating hash tree root part after `CalculateOffsetAndLength`

* Make nogo happy

* Add informing comment for using proto unsafe conversion

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-20 16:24:06 +00:00
Muzry
90190883bc Fixed metadata extraction on Windows by correctly splitting file paths (#15899)
* Fixed metadata extraction on Windows by correctly splitting file paths

* `TestExtractFileMetadata`: Refactor a bit.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-10-20 14:17:32 +00:00
terence
64ec665890 Fix sync committee subscription to use subnet indices instead of committee indices (#15885)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-17 19:03:53 +00:00
kasey
fdb06ea461 clear genesis state file when --(force-)clear-db is specified (#15883)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-17 14:03:15 +00:00
Manu NALEPA
0486631d73 Improve the error message when the byte count read from disk for a data column sidecar is lower than expected (mostly because the file is truncated). (#15881)
* `VerifiedRODataColumnError`: Don't reuse Blob error.

* `VerifiedRODataColumnFromDisk`: Use a specific error when the count of read bytes is lower than expected.

* Add changelog.
2025-10-16 21:49:11 +00:00
Manu NALEPA
47764696ce randomPeer: Return if the context is cancelled when waiting for peers. (#15876)
* `randomPeer`: Return if the context is cancelled when waiting for peers.

* `randomPeer`: Refactor to reduce indentation.
2025-10-16 21:13:11 +00:00
Manu NALEPA
b2d350b988 Correctly advertise (in ENR and metadata) attestation subnets when using --subscribe-all-subnets. (#15880) 2025-10-16 21:12:00 +00:00
kasey
41e7607092 Decrease att batch deadline to 5ms for faster net prop (#15882)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-16 17:30:59 +00:00
Jun Song
cd429dc253 SSZ-QL: Access n-th element in List/Vector. (#15767)
* Add basic parsing feature for accessing by index

* Add more tests for 2d byte vector

* Add List case for access indexing

* Handle 2D bytes List example

* Fix misleading cases for CalculateOffsetAndLength

* Use elementSizes[index] if it is the last path element

* Add variable_container_list field for mocking attester_slashings in BeaconBlockBody

* Remove redundant protobuf message

* Better documentation

* Changelog

* Fix `expectedSize` of `VariableTestContainer`: as we added `variable_container_list` here

* Apply reviews from Radek
2025-10-15 16:11:12 +00:00
phrwlk
5ced1125f2 fix: reject out-of-range attestation committee index (#15855)
* reject committee index >= committees_per_slot in unaggregated attestation validation

* Create phrwlk_fix-attestation-committee-index-bound.md

* add a unit test

* fix test

* fixing test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-15 16:02:08 +00:00
Potuz
f67ca6ae5e Fix epoch transition on head event (#15871)
h/t to the NuConstruct team for reporting this. The event feed
incorrectly sends the epoch transition flag on head events when the first
slot of the epoch is missing (or when the head reorgs across an epoch transition).

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-15 15:13:49 +00:00
Manu NALEPA
9742333f68 WithDataColumnRetentionEpochs: Use dataColumnRetentionEpoch instead of blobColumnRetentionEpoch. (#15872) 2025-10-15 14:44:49 +00:00
Manu NALEPA
c811fadf33 VerifyDataColumnSidecar: Check that there are not too many commitments. (#15859)
* `VerifyDataColumnSidecar`: Check that there are not too many commitments.

* `TestVerifyDataColumnSidecar`: Refactor using test cases.

* Add changelog.
2025-10-15 12:18:04 +00:00
Manu NALEPA
55b9448d41 dataColumnSidecarsByRangeRPCHandler: Gracefully close the stream if no data to return. (#15866)
* `TestDataColumnSidecarsByRangeRPCHandler`: Remove commented code.

* Remove double import

* `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return.

* Tests: Change `require` to `assert` in goroutines in tests.

https://pkg.go.dev/github.com/stretchr/testify/require#hdr-Assertions

* Add changelog.
2025-10-15 12:16:05 +00:00
Manu NALEPA
10f8d8c26e Fix /eth/v1/beacon/blob_sidecars/ beacon API if the fulu fork epoch is set to the far future epoch. (#15867)
* Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the fulu fork epoch is set to the far future epoch.

* Fix Terence's comment.

* adding a test

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-14 21:38:12 +00:00
Jun Song
4eab41ea4c SSZ-QL: use fastssz-generated SizeSSZ method & clarify Size method (#15864)
* Add SizeSSZ as a member of SSZObject

* Temporarily rename dereferencePointer function

* Fix analyzeType: use reflect.Value for analyzing

* Fix PopulateVariableLengthInfo: change function signature & reset pointer

* Remove Container arm for Size function as it'll be handled in the previous branch

* Remove OffsetBytes function in listInfo

* Refactor and document codes

* Remove misleading "fixedSize" concept & Add Uint8...64 SSZTypes

* Add size testing

* Move TestSSZObject_Batch and rename it as TestHashTreeRoot

* Changelog :)

* Rename endOffset to fixedOffset

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-14 17:33:52 +00:00
226 changed files with 12952 additions and 1807 deletions

View File

@@ -4,6 +4,67 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.1.4](https://github.com/prysmaticlabs/prysm/compare/v6.1.3...v6.1.4) - 2025-10-24
This release includes a bug fix affecting block proposals in rare cases, along with an important update for Windows users running after the Fusaka fork.
### Added
- SSZ-QL: Add endpoints for `BeaconState`/`BeaconBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15888)
- Add native state diff type and marshalling functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15250)
- Update the earliest available slot after pruning operations in beacon chain database pruner. This ensures the P2P layer accurately knows which historical data is available after pruning, preventing nodes from advertising or attempting to serve data that has been pruned. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15694)
### Fixed
- Correctly advertise (in ENR and beacon API) attestation subnets when using `--subscribe-all-subnets`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15880)
- `randomPeer`: Return if the context is cancelled when waiting for peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15876)
- Improve the error message when the byte count read from disk for a data column sidecar is lower than expected (mostly because the file is truncated). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15881)
- Delete the genesis state file when `--clear-db` / `--force-clear-db` is specified. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15883)
- Fix sync committee subscription to use subnet indices instead of committee indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15885)
- Fixed metadata extraction on Windows by correctly splitting file paths. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15899)
- `VerifyDataColumnsSidecarKZGProofs`: Check if sizes match. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15892)
- Fix `recoverStateSummary` to persist state summaries in `stateSummaryBucket` instead of `stateBucket` (#15896). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15896)
- `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15908)
- Sync committee uses correct state to calculate position. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15905)
## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20
This release has several important beacon API and p2p fixes.
### Added
- Add Grandine to P2P known agents. (Useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate `sszInfo` HashTreeRoot to FastSSZ-generated implementations via `SSZObject`, enabling root calculation for generated types while avoiding duplicate logic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)
### Changed
- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- Updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle submit blind block returning 502 errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve returning individual message errors from Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify `Size` method with more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
### Fixed
- Use service context and continue on slasher attestation errors (#15803). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event should not be sent on certain block processing failures; it is now sent only when the block is non-canonical, when the block is canonical but `getFCUArgs` fails, and on fully successful processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e issues caused by a regression in old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid from ErrNotDescendantOfFinalized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejecting attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the fulu fork epoch is set to the far future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- Reject committee index >= `committees_per_slot` in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)
## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10
This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.
@@ -3759,4 +3820,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-beta.0"
consensus_spec_version = "v1.6.0-beta.1"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
"minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
"mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
"general": "sha256-oEj0MTViJHjZo32nABK36gfvSXpbwkBk/jt6Mj7pWFI=",
"minimal": "sha256-cS4NPv6IRBoCSmWomQ8OEo8IsVNW9YawUFqoRZQBUj4=",
"mainnet": "sha256-BYuLndMPAh4p13IRJgNfVakrCVL69KRrNw2tdc3ETbE=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
integrity = "sha256-yrq3tdwPS8Ri+ueeLAHssIT3ssMrX7zvHiJ8Xf9GVYs=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -6,6 +6,7 @@ go_library(
"client.go",
"errors.go",
"options.go",
"transport.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client",
visibility = ["//visibility:public"],
@@ -14,7 +15,13 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["client_test.go"],
srcs = [
"client_test.go",
"transport_test.go",
],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

api/client/transport.go (new file, 25 lines)
View File

@@ -0,0 +1,25 @@
package client
import "net/http"
// CustomHeadersTransport adds custom headers to each request
type CustomHeadersTransport struct {
base http.RoundTripper
headers map[string][]string
}
func NewCustomHeadersTransport(base http.RoundTripper, headers map[string][]string) *CustomHeadersTransport {
return &CustomHeadersTransport{
base: base,
headers: headers,
}
}
func (t *CustomHeadersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
for header, values := range t.headers {
for _, value := range values {
req.Header.Add(header, value)
}
}
return t.base.RoundTrip(req)
}
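
A minimal usage sketch (not part of the diff), showing how this transport can wrap a standard http.Client; the import path matches the api/client package above, while the header name and value are purely illustrative.

package main

import (
	"net/http"

	"github.com/OffchainLabs/prysm/v6/api/client"
)

func main() {
	// Wrap the default transport so every outgoing request carries the extra header.
	// "X-Example-Header" and its value are illustrative only.
	httpClient := &http.Client{
		Transport: client.NewCustomHeadersTransport(http.DefaultTransport, map[string][]string{
			"X-Example-Header": {"some-value"},
		}),
	}
	_ = httpClient // pass this client to the validator's HTTP API client as usual
}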

View File

@@ -0,0 +1,25 @@
package client
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
type noopTransport struct{}
func (*noopTransport) RoundTrip(*http.Request) (*http.Response, error) {
return nil, nil
}
func TestRoundTrip(t *testing.T) {
tr := &CustomHeadersTransport{base: &noopTransport{}, headers: map[string][]string{"key1": []string{"value1", "value2"}, "key2": []string{"value3"}}}
req := httptest.NewRequest("GET", "http://foo", nil)
_, err := tr.RoundTrip(req)
require.NoError(t, err)
assert.DeepEqual(t, []string{"value1", "value2"}, req.Header.Values("key1"))
assert.DeepEqual(t, []string{"value3"}, req.Header.Values("key2"))
}

View File

@@ -296,3 +296,8 @@ type GetBlobsResponse struct {
Finalized bool `json:"finalized"`
Data []string `json:"data"` //blobs
}
type SSZQueryRequest struct {
Query string `json:"query"`
IncludeProof bool `json:"include_proof,omitempty"`
}
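
For illustration only, a small sketch of how a caller might build this request body; the struct is re-declared so the snippet is self-contained, and the "balances[0]" query path is a hypothetical example of an SSZ-QL path, not taken from the diff.

package main

import (
	"encoding/json"
	"fmt"
)

// SSZQueryRequest mirrors the struct added in the diff above.
type SSZQueryRequest struct {
	Query        string `json:"query"`
	IncludeProof bool   `json:"include_proof,omitempty"`
}

func main() {
	// "balances[0]" is a hypothetical query path used only for illustration.
	body, err := json.Marshal(SSZQueryRequest{Query: "balances[0]", IncludeProof: true})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"query":"balances[0]","include_proof":true}
}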

View File

@@ -173,6 +173,7 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -346,13 +346,24 @@ func (s *Service) notifyNewHeadEvent(
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
if err != nil {
return errors.Wrap(err, "could not obtain parent root in forkchoice")
}
parentSlot, err := s.RecentBlockSlot(parentRoot)
if err != nil {
return errors.Wrap(err, "could not obtain parent slot in forkchoice")
}
epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.NewHead,
Data: &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot,
State: newHeadStateRoot,
EpochTransition: slots.IsEpochStart(newHeadSlot),
EpochTransition: epochTransition,
PreviousDutyDependentRoot: previousDutyDependentRoot[:],
CurrentDutyDependentRoot: currentDutyDependentRoot[:],
ExecutionOptimistic: isOptimistic,
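
A quick illustration of why the parent-epoch comparison above catches transitions that the old slots.IsEpochStart check missed; the slot values are hypothetical and assume the mainnet 32-slots-per-epoch configuration.

package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/time/slots"
)

func main() {
	// Head lands on slot 33 and its parent is slot 31, i.e. slot 32 (the first
	// slot of epoch 1) was skipped or reorged away.
	newHeadSlot := primitives.Slot(33)
	parentSlot := primitives.Slot(31)

	fmt.Println(slots.IsEpochStart(newHeadSlot))                        // false: the old check misses the transition
	fmt.Println(slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)) // true: the new check reports it
}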

View File

@@ -162,6 +162,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -196,6 +199,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
@@ -213,6 +219,37 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("epoch transition", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
require.Equal(t, true, ok)
wanted := &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
}
func TestRetrieveHead_ReadOnly(t *testing.T) {

View File

@@ -472,8 +472,8 @@ func (s *Service) removeStartupState() {
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
beaconConfig := params.BeaconConfig()
custodyRequirement := beaconConfig.CustodyRequirement
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = beaconConfig.NumberOfColumns
custodyGroupCount = cfg.NumberOfCustodyGroups
}
// Safely compute the fulu fork slot.
@@ -536,11 +536,11 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
func fuluForkSlot() (primitives.Slot, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)

View File

@@ -23,9 +23,11 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -596,3 +598,103 @@ func TestNotifyIndex(t *testing.T) {
t.Errorf("Notifier channel did not receive the index")
}
}
func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
t.Run("CGC increases before fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("CGC increases after fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
}

View File

@@ -130,6 +130,14 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
return earliestAvailableSlot, custodyGroupCount, nil
}
func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
dch.mut.Lock()
defer dch.mut.Unlock()
dch.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}

View File

@@ -472,6 +472,36 @@ func (s *ChainService) HasBlock(ctx context.Context, rt [32]byte) bool {
return s.InitSyncBlockRoots[rt]
}
func (s *ChainService) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
if s.DB == nil {
return nil
}
count := len(blockRoots)
availableRoots := make(map[[32]byte]bool, count)
notInDBRoots := make([][32]byte, 0, count)
for _, root := range blockRoots {
if s.DB.HasBlock(ctx, root) {
availableRoots[root] = true
continue
}
notInDBRoots = append(notInDBRoots, root)
}
if s.InitSyncBlockRoots == nil {
return availableRoots
}
for _, root := range notInDBRoots {
if s.InitSyncBlockRoots[root] {
availableRoots[root] = true
}
}
return availableRoots
}
// RecentBlockSlot mocks the same method in the chain service.
func (s *ChainService) RecentBlockSlot([32]byte) (primitives.Slot, error) {
return s.BlockSlot, nil

View File

@@ -3,6 +3,7 @@ package blockchain
import (
"context"
"fmt"
"slices"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filters"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -81,12 +82,10 @@ func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, f
if err != nil {
return errors.Wrap(err, "error while retrieving block roots to verify weak subjectivity")
}
for _, root := range roots {
if v.root == root {
log.Info("Weak subjectivity check has passed!!")
v.verified = true
return nil
}
if slices.Contains(roots, v.root) {
log.Info("Weak subjectivity check has passed!!")
v.verified = true
return nil
}
return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
}
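
For context, slices.Contains is the Go 1.21+ standard-library helper; a minimal equivalence sketch with illustrative root values.

package main

import (
	"fmt"
	"slices"
)

func main() {
	// Same shape as the forkchoice roots above: a slice of 32-byte roots.
	roots := [][32]byte{{1}, {2}, {3}}
	target := [32]byte{2}
	fmt.Println(slices.Contains(roots, target)) // true, equivalent to the removed manual loop
}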

View File

@@ -12,6 +12,46 @@ import (
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
)
// ConvertToAltair converts a Phase 0 beacon state to an Altair beacon state.
func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
if err != nil {
return nil, err
}
return newState, nil
}
// UpgradeToAltair updates input state to return the version Altair state.
//
// Spec code:
@@ -64,39 +104,7 @@ import (
// post.next_sync_committee = get_next_sync_committee(post)
// return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
newState, err := ConvertToAltair(state)
if err != nil {
return nil, err
}

View File

@@ -15,6 +15,129 @@ import (
"github.com/pkg/errors"
)
// ConvertToElectra converts a Deneb beacon state to an Electra beacon state. It does not perform any fork logic.
func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
return post, nil
}
// UpgradeToElectra updates inputs a generic state to return the version Electra state.
//
// nolint:dupword
@@ -126,55 +249,7 @@ import (
//
// return post
func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
s, err := ConvertToElectra(beaconState)
if err != nil {
return nil, err
}
@@ -206,97 +281,38 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
if err != nil {
return nil, errors.Wrap(err, "failed to get total active balance")
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
if err := s.SetExitBalanceToConsume(helpers.ActivationExitChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set exit balance to consume")
}
if err := s.SetEarliestExitEpoch(earliestExitEpoch); err != nil {
return nil, errors.Wrap(err, "failed to set earliest exit epoch")
}
if err := s.SetConsolidationBalanceToConsume(helpers.ConsolidationChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set consolidation balance to consume")
}
// Sorting preActivationIndices based on a custom criteria
vals := s.Validators()
sort.Slice(preActivationIndices, func(i, j int) bool {
// Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
if vals[preActivationIndices[i]].ActivationEligibilityEpoch == vals[preActivationIndices[j]].ActivationEligibilityEpoch {
return preActivationIndices[i] < preActivationIndices[j]
}
return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
return vals[preActivationIndices[i]].ActivationEligibilityEpoch < vals[preActivationIndices[j]].ActivationEligibilityEpoch
})
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
for _, index := range preActivationIndices {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
if err := QueueEntireBalanceAndResetValidator(s, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := QueueExcessActiveBalance(post, index); err != nil {
if err := QueueExcessActiveBalance(s, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}
return post, nil
return s, nil
}

View File

@@ -7,6 +7,7 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl/testnet:__pkg__",
"//consensus-types/hdiff:__subpackages__",
"//testing/spectest:__subpackages__",
"//validator/client:__pkg__",
],

View File

@@ -15,6 +15,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -17,6 +18,25 @@ import (
// UpgradeToFulu updates inputs a generic state to return the version Fulu state.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
s, err := ConvertToFulu(beaconState)
if err != nil {
return nil, errors.Wrap(err, "could not convert to fulu")
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
pl := make([]primitives.ValidatorIndex, len(proposerLookahead))
for i, v := range proposerLookahead {
pl[i] = primitives.ValidatorIndex(v)
}
if err := s.SetProposerLookahead(pl); err != nil {
return nil, errors.Wrap(err, "failed to set proposer lookahead")
}
return s, nil
}
func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -105,11 +125,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateFulu{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
@@ -171,14 +186,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
// Need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeFulu(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
}
return post, nil
return state_native.InitializeFromProtoUnsafeFulu(s)
}

View File

@@ -401,7 +401,7 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
return 0, errors.New("empty active indices list")
}
hashFunc := hash.CustomSHA256Hasher()
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
seedBuffer := make([]byte, len(seed)+8)
copy(seedBuffer, seed[:])
@@ -426,14 +426,14 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
offset := (i % 16) * 2
randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8
if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
if effectiveBal*fieldparams.MaxRandomValueElectra >= cfg.MaxEffectiveBalanceElectra*randomValue {
return candidateIndex, nil
}
} else {
binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/32)
randomByte := hashFunc(seedBuffer)[i%32]
if effectiveBal*fieldparams.MaxRandomByte >= beaconConfig.MaxEffectiveBalance*uint64(randomByte) {
if effectiveBal*fieldparams.MaxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
return candidateIndex, nil
}
}
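A worked example of the Electra branch above, assuming the mainnet constants fieldparams.MaxRandomValueElectra = 65535 (2^16 - 1) and MaxEffectiveBalanceElectra = 2048 ETH: a candidate with 32 ETH of effective balance is accepted when 32 * 65535 >= 2048 * randomValue, i.e. when randomValue <= 1023, which happens with probability 1024/65536 = 32/2048, so the chance of selection is proportional to effective balance (balances are in Gwei in the code; the units cancel in the comparison).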

View File

@@ -201,14 +201,3 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
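With mainnet values (MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256, CHURN_LIMIT_QUOTIENT = 65536) this evaluates to 256 + 65536/2 = 33024 epochs, roughly 147 days at 32 slots per epoch and 12 seconds per slot, matching the ~5 months noted above.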

View File

@@ -286,20 +286,3 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}

View File

@@ -89,14 +89,14 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
cfg := params.BeaconConfig()
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if custodyGroup >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
numberOfColumns := cfg.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
@@ -112,9 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {
return 0, ErrIndexTooLarge
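A compact sketch of the two mappings above, following the spec formulas; constants are passed in explicitly, so this is an illustration rather than the package's API:

// columnsForCustodyGroup lists the columns assigned to a custody group:
// column = numberOfCustodyGroups*i + custodyGroup for i in [0, columnsPerGroup).
func columnsForCustodyGroup(custodyGroup, numberOfCustodyGroups, numberOfColumns uint64) []uint64 {
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	columns := make([]uint64, 0, columnsPerGroup)
	for i := uint64(0); i < columnsPerGroup; i++ {
		columns = append(columns, numberOfCustodyGroups*i+custodyGroup)
	}
	return columns
}

// custodyGroupForColumn is the reciprocal mapping: the group a given column belongs to.
func custodyGroupForColumn(columnIndex, numberOfCustodyGroups uint64) uint64 {
	return columnIndex % numberOfCustodyGroups
}

With the mainnet values NUMBER_OF_COLUMNS = 128 and NUMBER_OF_CUSTODY_GROUPS = 128, each group custodies exactly one column and both mappings reduce to the identity.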

View File

@@ -43,6 +43,13 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
return ErrMismatchLength
@@ -72,10 +79,30 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
for _, sidecar := range sidecars {
for i := range sidecar.Column {
commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
var (
commitment kzg.Bytes48
cell kzg.Cell
proof kzg.Bytes48
)
commitmentBytes := sidecar.KzgCommitments[i]
cellBytes := sidecar.Column[i]
proofBytes := sidecar.KzgProofs[i]
if len(commitmentBytes) != len(commitment) ||
len(cellBytes) != len(cell) ||
len(proofBytes) != len(proof) {
return ErrMismatchLength
}
copy(commitment[:], commitmentBytes)
copy(cell[:], cellBytes)
copy(proof[:], proofBytes)
commitments = append(commitments, commitment)
indices = append(indices, sidecar.Index)
cells = append(cells, kzg.Cell(sidecar.Column[i]))
proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
cells = append(cells, cell)
proofs = append(proofs, proof)
}
}
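The explicit length checks above exist because converting a slice that is too short into a fixed-size array (such as a 48-byte commitment or proof) panics at runtime; a minimal standard-library-only sketch of the copy-with-check pattern used here:

package main

import "fmt"

// toBytes48 copies b into a [48]byte, returning an error instead of panicking
// when the input has the wrong length.
func toBytes48(b []byte) ([48]byte, error) {
	var out [48]byte
	if len(b) != len(out) {
		return out, fmt.Errorf("expected %d bytes, got %d", len(out), len(b))
	}
	copy(out[:], b)
	return out, nil
}

func main() {
	_, err := toBytes48(make([]byte, 47))
	fmt.Println(err) // expected 48 bytes, got 47
}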

View File

@@ -18,38 +18,46 @@ import (
)
func TestVerifyDataColumnSidecar(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
testCases := []struct {
name string
index uint64
blobCount int
commitmentCount int
proofCount int
maxBlobsPerBlock uint64
expectedError error
}{
{name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
{name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
{name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
{name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
}
t.Run("no commitments", func(t *testing.T) {
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
})
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)
t.Run("KZG commitments size mismatch", func(t *testing.T) {
kzgCommitments := make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
column := make([][]byte, tc.blobCount)
kzgCommitments := make([][]byte, tc.commitmentCount)
kzgProof := make([][]byte, tc.proofCount)
t.Run("KZG proofs size mismatch", func(t *testing.T) {
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
t.Run("nominal", func(t *testing.T) {
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.NoError(t, err)
})
if tc.expectedError != nil {
require.ErrorIs(t, err, tc.expectedError)
return
}
require.NoError(t, err)
})
}
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
@@ -60,6 +68,14 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("size mismatch", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow

View File

@@ -84,10 +84,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
totalNodeBalance += validator.EffectiveBalance()
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
cfg := params.BeaconConfig()
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
validatorCustodyRequirement := cfg.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := cfg.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
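A worked example using the spec's mainnet values (VALIDATOR_CUSTODY_REQUIREMENT = 8, BALANCE_PER_ADDITIONAL_CUSTODY_GROUP = 32 ETH, NUMBER_OF_CUSTODY_GROUPS = 128): a node whose attached validators hold a combined effective balance of 640 ETH gets count = 640/32 = 20 custody groups, while a node with only 64 ETH computes count = 2 and is clamped up to the minimum of 8; the result can never exceed 128.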

View File

@@ -196,7 +196,7 @@ func TestAltairCompatible(t *testing.T) {
}
func TestCanUpgradeTo(t *testing.T) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
outerTestCases := []struct {
name string
@@ -205,32 +205,32 @@ func TestCanUpgradeTo(t *testing.T) {
}{
{
name: "Altair",
forkEpoch: &beaconConfig.AltairForkEpoch,
forkEpoch: &cfg.AltairForkEpoch,
upgradeFunc: time.CanUpgradeToAltair,
},
{
name: "Bellatrix",
forkEpoch: &beaconConfig.BellatrixForkEpoch,
forkEpoch: &cfg.BellatrixForkEpoch,
upgradeFunc: time.CanUpgradeToBellatrix,
},
{
name: "Capella",
forkEpoch: &beaconConfig.CapellaForkEpoch,
forkEpoch: &cfg.CapellaForkEpoch,
upgradeFunc: time.CanUpgradeToCapella,
},
{
name: "Deneb",
forkEpoch: &beaconConfig.DenebForkEpoch,
forkEpoch: &cfg.DenebForkEpoch,
upgradeFunc: time.CanUpgradeToDeneb,
},
{
name: "Electra",
forkEpoch: &beaconConfig.ElectraForkEpoch,
forkEpoch: &cfg.ElectraForkEpoch,
upgradeFunc: time.CanUpgradeToElectra,
},
{
name: "Fulu",
forkEpoch: &beaconConfig.FuluForkEpoch,
forkEpoch: &cfg.FuluForkEpoch,
upgradeFunc: time.CanUpgradeToFulu,
},
}
@@ -238,7 +238,7 @@ func TestCanUpgradeTo(t *testing.T) {
for _, otc := range outerTestCases {
params.SetupTestConfigCleanup(t)
*otc.forkEpoch = 5
params.OverrideBeaconConfig(beaconConfig)
params.OverrideBeaconConfig(cfg)
innerTestCases := []struct {
name string

View File

@@ -200,6 +200,7 @@ func (dcs *DataColumnStorage) WarmCache() {
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
@@ -988,8 +989,8 @@ func filePath(root [fieldparams.RootLength]byte, epoch primitives.Epoch) string
// extractFileMetadata extracts the metadata from a file path.
// If the path is not a leaf, it returns nil.
func extractFileMetadata(path string) (*fileMetadata, error) {
// Is this Windows friendly?
parts := strings.Split(path, "/")
// Use filepath.Separator to handle both Windows (\) and Unix (/) path separators
parts := strings.Split(path, string(filepath.Separator))
if len(parts) != 3 {
return nil, errors.Errorf("unexpected file %s", path)
}
@@ -1032,5 +1033,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {
// period computes the period of a given epoch.
func period(epoch primitives.Epoch) uint64 {
return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
}
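For example, with the mainnet value MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS = 4096, epochs 0 through 4095 fall into period 0, epochs 4096 through 8191 into period 1, and so on.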

View File

@@ -3,6 +3,7 @@ package filesystem
import (
"encoding/binary"
"os"
"path/filepath"
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -725,3 +726,37 @@ func TestPrune(t *testing.T) {
require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
})
}
func TestExtractFileMetadata(t *testing.T) {
t.Run("Unix", func(t *testing.T) {
// Test with Unix-style path separators (/)
path := "12/1234/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '/' {
// On Unix systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Windows systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
t.Run("Windows", func(t *testing.T) {
// Test with Windows-style path separators (\)
path := "12\\1234\\0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '\\' {
// On Windows systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Unix systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
}

View File

@@ -212,7 +212,8 @@ func filterNoop(_ string) bool {
return true
}
func isRootDir(p string) bool {
// IsBlockRootDir returns true if the path segment looks like a block root directory.
func IsBlockRootDir(p string) bool {
dir := filepath.Base(p)
return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}

View File

@@ -188,7 +188,7 @@ func TestListDir(t *testing.T) {
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: isRootDir,
filter: IsBlockRootDir,
},
{
name: "ssz filter",

View File

@@ -19,12 +19,14 @@ import (
const (
// Full root in directory will be 66 chars, eg:
// >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
rootStringLen = 66
sszExt = "ssz"
partExt = "part"
periodicEpochBaseDir = "by-epoch"
rootStringLen = 66
sszExt = "ssz"
partExt = "part"
)
// PeriodicEpochBaseDir is the name of the base directory for the by-epoch layout.
const PeriodicEpochBaseDir = "by-epoch"
const (
LayoutNameFlat = "flat"
LayoutNameByEpoch = "by-epoch"
@@ -130,11 +132,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
if iter.atEOF() {
return errLayoutNotDetected
}
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
lastMoved := ""
parentDirs := make(map[string]bool) // this map should have < 65k keys by design
moved := 0
dc := newDirCleaner()
migrationLogged := false
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
if err != nil {
if errors.Is(err, errIdentFailure) {
@@ -146,6 +148,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
}
return errors.Wrapf(errMigrationFailure, "failed to iterate previous layout structure while migrating blobs, err=%s", err.Error())
}
if !migrationLogged {
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).
Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
migrationLogged = true
}
src := from.dir(ident)
target := to.dir(ident)
if src != lastMoved {

View File

@@ -34,7 +34,7 @@ func (l *periodicEpochLayout) name() string {
func (l *periodicEpochLayout) blockParentDirs(ident blobIdent) []string {
return []string{
periodicEpochBaseDir,
PeriodicEpochBaseDir,
l.periodDir(ident.epoch),
l.epochDir(ident.epoch),
}
@@ -50,28 +50,28 @@ func (l *periodicEpochLayout) notify(ident blobIdent) error {
// If before == 0, it won't be used as a filter and all idents will be returned.
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
_, err := l.fs.Stat(periodicEpochBaseDir)
_, err := l.fs.Stat(PeriodicEpochBaseDir)
if err != nil {
if os.IsNotExist(err) {
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
}
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
return nil, errors.Wrapf(err, "error reading path %s", PeriodicEpochBaseDir)
}
// iterate root, which should have directories named by "period"
entries, err := listDir(l.fs, periodicEpochBaseDir)
entries, err := listDir(l.fs, PeriodicEpochBaseDir)
if err != nil {
return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
return nil, errors.Wrapf(err, "failed to list %s", PeriodicEpochBaseDir)
}
return &identIterator{
fs: l.fs,
path: periodicEpochBaseDir,
path: PeriodicEpochBaseDir,
// Please see comments on the `layers` field in `identIterator` if the role of the layers is unclear.
layers: []layoutLayer{
{populateIdent: populateNoop, filter: isBeforePeriod(before)},
{populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
{populateIdent: populateRoot, filter: isRootDir}, // extract root from path
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
{populateIdent: populateRoot, filter: IsBlockRootDir}, // extract root from path
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
},
entries: entries,
}, nil
@@ -98,7 +98,7 @@ func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
}
func (l *periodicEpochLayout) periodDir(epoch primitives.Epoch) string {
return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
return filepath.Join(PeriodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
}
func (l *periodicEpochLayout) sszPath(n blobIdent) string {

View File

@@ -30,7 +30,7 @@ func (l *flatLayout) iterateIdents(before primitives.Epoch) (*identIterator, err
if os.IsNotExist(err) {
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
}
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
return nil, errors.Wrap(err, "error reading blob base dir")
}
entries, err := listDir(l.fs, ".")
if err != nil {
@@ -199,10 +199,10 @@ func (l *flatSlotReader) isSSZAndBefore(fname string) bool {
// the epoch can be determined.
func isFlatCachedAndBefore(cache *blobStorageSummaryCache, before primitives.Epoch) func(string) bool {
if before == 0 {
return isRootDir
return IsBlockRootDir
}
return func(p string) bool {
if !isRootDir(p) {
if !IsBlockRootDir(p) {
return false
}
root, err := rootFromPath(p)

View File

@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
WithDataColumnFs(fs),
)

View File

@@ -28,6 +28,7 @@ type ReadOnlyDatabase interface {
BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error)
BlockRootsBySlot(ctx context.Context, slot primitives.Slot) (bool, [][32]byte, error)
HasBlock(ctx context.Context, blockRoot [32]byte) bool
AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool
GenesisBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
GenesisBlockRoot(ctx context.Context) ([32]byte, error)
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
@@ -129,6 +130,7 @@ type NoHeadAccessDatabase interface {
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

View File

@@ -336,6 +336,42 @@ func (s *Store) HasBlock(ctx context.Context, blockRoot [32]byte) bool {
return exists
}
// AvailableBlocks returns a set indicating which of the blocks referenced by `blockRoots` are available in storage.
func (s *Store) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
_, span := trace.StartSpan(ctx, "BeaconDB.AvailableBlocks")
defer span.End()
count := len(blockRoots)
availableRoots := make(map[[32]byte]bool, count)
// First, check the cache for each block root.
notInCacheRoots := make([][32]byte, 0, count)
for _, root := range blockRoots {
if v, ok := s.blockCache.Get(string(root[:])); v != nil && ok {
availableRoots[root] = true
continue
}
notInCacheRoots = append(notInCacheRoots, root)
}
// Next, check the database for the remaining block roots.
if err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for _, root := range notInCacheRoots {
if bkt.Get(root[:]) != nil {
availableRoots[root] = true
}
}
return nil
}); err != nil {
panic(err) // lint:nopanic -- View never returns an error.
}
return availableRoots
}
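A minimal caller-side sketch with hypothetical names (the requested roots and the surrounding store/ctx are assumed), showing how the returned set is typically consumed:

// Keep only the requested roots whose block is actually stored.
available := store.AvailableBlocks(ctx, requested)
serveable := make([][32]byte, 0, len(requested))
for _, root := range requested {
	if available[root] {
		serveable = append(serveable, root)
	}
}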
// BlocksBySlot retrieves a list of beacon blocks and their respective roots by slot.
func (s *Store) BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlocksBySlot")

View File

@@ -656,6 +656,44 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
}
}
func TestAvailableBlocks(t *testing.T) {
ctx := t.Context()
db := setupDB(t)
b0, b1, b2 := util.NewBeaconBlock(), util.NewBeaconBlock(), util.NewBeaconBlock()
b0.Block.Slot, b1.Block.Slot, b2.Block.Slot = 10, 20, 30
sb0, err := blocks.NewSignedBeaconBlock(b0)
require.NoError(t, err)
r0, err := b0.Block.HashTreeRoot()
require.NoError(t, err)
// Save b0 but remove it from cache.
err = db.SaveBlock(ctx, sb0)
require.NoError(t, err)
db.blockCache.Del(string(r0[:]))
// b1 is not saved at all.
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
// Save b2 in cache and DB.
sb2, err := blocks.NewSignedBeaconBlock(b2)
require.NoError(t, err)
r2, err := b2.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, sb2))
require.NoError(t, err)
expected := map[[32]byte]bool{r0: true, r2: true}
actual := db.AvailableBlocks(ctx, [][32]byte{r0, r1, r2})
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.Equal(t, true, actual[i])
}
}
func TestStore_Blocks_FiltersCorrectly(t *testing.T) {
for _, tt := range blockTests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -132,6 +132,6 @@ func recoverStateSummary(ctx context.Context, tx *bolt.Tx, root []byte) error {
if err != nil {
return err
}
summaryBucket := tx.Bucket(stateBucket)
summaryBucket := tx.Bucket(stateSummaryBucket)
return summaryBucket.Put(root, summaryEnc)
}

View File

@@ -137,3 +137,32 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {
require.ErrorContains(t, errMissingStateForCheckpoint.Error(), db.SaveFinalizedCheckpoint(ctx, cp))
}
// Regression test: verify that saving a checkpoint triggers recovery which writes
// the state summary into the correct stateSummaryBucket so that HasStateSummary/StateSummary see it.
func TestRecoverStateSummary_WritesToStateSummaryBucket(t *testing.T) {
db := setupDB(t)
ctx := t.Context()
// Create a block without saving a state or summary, so recovery is needed.
blk := util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{})
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wsb))
// Precondition: summary not present yet.
require.Equal(t, false, db.HasStateSummary(ctx, root))
// Saving justified checkpoint should trigger recovery path calling recoverStateSummary.
cp := &ethpb.Checkpoint{Epoch: 2, Root: root[:]}
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
// Postcondition: summary is visible via the public summary APIs (which read stateSummaryBucket).
require.Equal(t, true, db.HasStateSummary(ctx, root))
summary, err := db.StateSummary(ctx, root)
require.NoError(t, err)
require.NotNil(t, summary)
assert.DeepEqual(t, &ethpb.StateSummary{Slot: blk.Block.Slot, Root: root[:]}, summary)
}

View File

@@ -2,16 +2,19 @@ package kv
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
// UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) custody group count and earliest available slot.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
@@ -70,6 +73,79 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
return storedEarliestAvailableSlot, storedGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the earliest available slot.
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
defer span.End()
storedEarliestAvailableSlot := primitives.Slot(0)
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
// Retrieve the stored earliest available slot.
storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
if len(storedEarliestAvailableSlotBytes) != 0 {
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
}
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot <= storedEarliestAvailableSlot {
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "calculate minimum required slot")
}
// Prevent any increase that would put earliest available slot beyond the minimum required slot
if earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
minRequiredSlot, minRequiredEpoch,
)
}
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}); err != nil {
return err
}
log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")
return nil
}
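A concrete example of the guard above, using the mainnet value MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024: at current epoch 40000 the minimum required epoch is 40000 - 33024 = 6976, i.e. slot 6976 * 32 = 223232, so raising the earliest available slot past 223232 returns an error, while lowering it (the backfill case) is always accepted.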
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.

View File

@@ -3,10 +3,13 @@ package kv
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/time/slots"
bolt "go.etcd.io/bbolt"
)
@@ -132,6 +135,131 @@ func TestUpdateCustodyInfo(t *testing.T) {
})
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
ctx := t.Context()
t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
const (
initialSlot = primitives.Slot(300)
initialCount = uint64(10)
earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with a lower slot (should update for backfill)
err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, earliestSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set earliest slot well before minRequiredSlot
const groupCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
require.NoError(t, err)
// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
validSlot := minRequiredSlot - 100
err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
require.NoError(t, err)
// Verify the database was updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, validSlot, storedSlot)
require.Equal(t, groupCount, storedCount)
})
t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set a valid earliest slot (well before minRequiredSlot)
const initialCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Try to set earliest slot beyond the minimum required slot
invalidSlot := minRequiredSlot + 100
// This should fail
err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
require.ErrorContains(t, "cannot increase earliest available slot", err)
require.ErrorContains(t, "exceeds minimum required slot", err)
// Verify the database wasn't updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("no change when slot equals current slot", func(t *testing.T) {
const (
initialSlot = primitives.Slot(100)
initialCount = uint64(5)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with the same slot
err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
}
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
ctx := context.Background()

View File

@@ -8,7 +8,6 @@ go_library(
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//config/params:go_default_library",
@@ -29,6 +28,7 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots/testing:go_default_library",

View File

@@ -4,7 +4,6 @@ import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -25,17 +24,24 @@ const (
defaultNumBatchesToPrune = 15
)
// custodyUpdater is a tiny interface that the p2p service implements; kept here to avoid
// importing the p2p package and creating a cycle.
type custodyUpdater interface {
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
}
type ServiceOption func(*Service)
// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
return func(s *Service) {
defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1
if retentionEpochs < defaultRetentionEpochs {
log.WithField("userEpochs", retentionEpochs).
WithField("minRequired", defaultRetentionEpochs).
Warn("Retention period too low, using minimum required value")
Warn("Retention period too low, ignoring and using minimum required value")
retentionEpochs = defaultRetentionEpochs
}
s.ps = pruneStartSlotFunc(retentionEpochs)
@@ -58,17 +64,23 @@ type Service struct {
slotTicker slots.Ticker
backfillWaiter func() error
initSyncWaiter func() error
custody custodyUpdater
}
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
if custody == nil {
return nil, errors.New("custody updater is required for pruner but was not provided")
}
p := &Service{
ctx: ctx,
db: db,
ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
ps: pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
done: make(chan struct{}),
slotTicker: slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
initSyncWaiter: initSyncWaiter,
backfillWaiter: backfillWaiter,
custody: custody,
}
for _, o := range opts {
@@ -157,17 +169,45 @@ func (p *Service) prune(slot primitives.Slot) error {
return errors.Wrap(err, "failed to prune batches")
}
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"duration": time.Since(tt),
"currentSlot": slot,
"batchSize": defaultPrunableBatchSize,
"numBatches": numBatches,
}).Debug("Successfully pruned chain data")
earliestAvailableSlot := pruneUpto + 1
// Update pruning checkpoint.
p.prunedUpto = pruneUpto
// Update the earliest available slot after pruning
if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrap(err, "update earliest available slot")
}
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"earliestAvailableSlot": earliestAvailableSlot,
"duration": time.Since(tt),
"currentSlot": slot,
"batchSize": defaultPrunableBatchSize,
"numBatches": numBatches,
}).Debug("Successfully pruned chain data")
return nil
}
// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
// and also persists it to the database.
func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
if !params.FuluEnabled() {
return nil
}
// Update the p2p in-memory state
if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
}
// Persist to database to ensure it survives restarts
if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
}
return nil
}
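Because Go interfaces are satisfied implicitly, the p2p service's existing UpdateEarliestAvailableSlot method satisfies custodyUpdater without the pruner and p2p packages importing each other; the mockCustodyUpdater in the tests further below is a minimal example of another type satisfying the same interface.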

View File

@@ -2,6 +2,7 @@ package pruner
import (
"context"
"errors"
"testing"
"time"
@@ -15,6 +16,7 @@ import (
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -62,7 +64,9 @@ func TestPruner_PruningConditions(t *testing.T) {
if !tt.backfillCompleted {
backfillWaiter = waiter
}
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
mockCustody := &mockCustodyUpdater{}
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
require.NoError(t, err)
go p.Start()
@@ -97,12 +101,14 @@ func TestPruner_PruneSuccess(t *testing.T) {
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
mockCustody := &mockCustodyUpdater{}
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
@@ -133,3 +139,242 @@ func TestPruner_PruneSuccess(t *testing.T) {
require.NoError(t, p.Stop())
}
// Mock custody updater for testing
type mockCustodyUpdater struct {
custodyGroupCount uint64
earliestAvailableSlot primitives.Slot
updateCallCount int
}
func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
m.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater
mockCustody := &mockCustodyUpdater{
custodyGroupCount: 4,
earliestAvailableSlot: 0,
}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80 (middle of 3rd epoch)
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Check that UpdateEarliestAvailableSlot was called
assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")
// The earliest available slot should be pruneUpto + 1
// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16
// So earliest available slot should be 16 + 1 = 17
expectedEarliestSlot := primitives.Slot(17)
require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")
// Verify that no error was logged
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel {
t.Errorf("Unexpected error log: %s", entry.Message)
}
}
require.NoError(t, p.Stop())
}
// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
type mockCustodyUpdaterWithUpdateError struct {
updateCallCount int
}
func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
return errors.New("failed to update earliest available slot")
}
func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
// Use minimal config for testing
params.SetupTestConfigCleanup(t)
config := params.MinimalSpecConfig()
params.OverrideBeaconConfig(config)
ctx := t.Context()
beaconDB := dbtest.SetupDB(t)
// Get the minimum required epochs (272 + 1 = 273 for minimal)
minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests + 1)
// Use a slot that's guaranteed to be after the minimum retention period
currentSlot := primitives.Slot(minRequiredEpochs+100) * (params.BeaconConfig().SlotsPerEpoch)
tests := []struct {
name string
userRetentionEpochs primitives.Epoch
expectedPruneSlot primitives.Slot
description string
}{
{
name: "User value below minimum - should use minimum",
userRetentionEpochs: 2, // Way below minimum
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use minimum when user value is too low",
},
{
name: "User value at minimum",
userRetentionEpochs: minRequiredEpochs,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when at minimum",
},
{
name: "User value above minimum",
userRetentionEpochs: minRequiredEpochs + 10,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs+10)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when above minimum",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.WarnLevel)
mockCustody := &mockCustodyUpdater{}
// Create pruner with retention period
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithRetentionPeriod(tt.userRetentionEpochs),
)
require.NoError(t, err)
// Test the pruning calculation
pruneUptoSlot := p.ps(currentSlot)
// Verify the pruning slot
assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)
// Check if warning was logged when value was too low
if tt.userRetentionEpochs < minRequiredEpochs {
assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
}
})
}
}
func TestPruner_UpdateEarliestSlotError(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater that returns an error for UpdateEarliestAvailableSlot
mockCustody := &mockCustodyUpdaterWithUpdateError{}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Should have called UpdateEarliestAvailableSlot
assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")
// Check that error was logged by the prune function
found := false
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
found = true
break
}
}
assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")
require.NoError(t, p.Stop())
}

View File

@@ -6,6 +6,7 @@ go_library(
"cache.go",
"helpers.go",
"lightclient.go",
"log.go",
"store.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client",

View File

@@ -0,0 +1,5 @@
package light_client
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "light-client")

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")

View File

@@ -2,11 +2,13 @@ package node
import (
"context"
"os"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
@@ -36,6 +38,22 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearGenesis(dir string) error {
if !c.shouldProceed() {
return nil
}
gfile, err := genesis.FindStateFile(dir)
if err != nil {
return nil
}
if err := os.Remove(gfile.FilePath()); err != nil {
return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
}
return nil
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil

View File

@@ -12,6 +12,7 @@ import (
"os"
"os/signal"
"path/filepath"
"slices"
"strconv"
"strings"
"sync"
@@ -177,6 +178,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.db = kvdb
if err := dbClearer.clearGenesis(dataDir); err != nil {
return nil, errors.Wrap(err, "could not clear genesis state")
}
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
return nil, errors.Wrap(err, "could not initialize genesis state")
@@ -1105,6 +1109,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
genesis,
initSyncWaiter(cliCtx.Context, b.initialSyncComplete),
backfillService.WaitForCompletion,
b.fetchP2P(),
opts...,
)
if err != nil {
@@ -1131,10 +1136,8 @@ func (b *BeaconNode) registerLightClientStore() {
func hasNetworkFlag(cliCtx *cli.Context) bool {
for _, flag := range features.NetworkFlags {
for _, name := range flag.Names() {
if cliCtx.IsSet(name) {
return true
}
if slices.ContainsFunc(flag.Names(), cliCtx.IsSet) {
return true
}
}
return false

View File

@@ -10,11 +10,13 @@ import (
// pruneExpired prunes attestations pool on every slot interval.
func (s *Service) pruneExpired() {
ticker := time.NewTicker(s.cfg.pruneInterval)
defer ticker.Stop()
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
offset := time.Duration(secondsPerSlot-1) * time.Second
slotTicker := slots.NewSlotTickerWithOffset(s.genesisTime, offset, secondsPerSlot)
defer slotTicker.Done()
for {
select {
case <-ticker.C:
case <-slotTicker.C():
s.pruneExpiredAtts()
s.updateMetrics()
case <-s.ctx.Done():
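With mainnet's SECONDS_PER_SLOT = 12 the offset works out to 11 seconds, so expired attestations are now pruned once per slot, one second before the next slot begins, rather than on an arbitrary wall-clock interval.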

View File

@@ -17,7 +17,9 @@ import (
)
func TestPruneExpired_Ticker(t *testing.T) {
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
// Need timeout longer than the offset (secondsPerSlot - 1) + some buffer
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot+5) * time.Second
ctx, cancel := context.WithTimeout(t.Context(), timeout)
defer cancel()
s, err := NewService(ctx, &Config{

View File

@@ -7,6 +7,7 @@ import (
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/sirupsen/logrus"
)
// This is the default queue size used if we have specified an invalid one.
@@ -63,12 +64,17 @@ func (cfg *Config) connManagerLowHigh() (int, int) {
return low, high
}
// validateConfig validates whether the values provided are accurate and will set
// the appropriate values for those that are invalid.
func validateConfig(cfg *Config) *Config {
if cfg.QueueSize == 0 {
log.Warnf("Invalid pubsub queue size of %d initialized, setting the quese size as %d instead", cfg.QueueSize, defaultPubsubQueueSize)
cfg.QueueSize = defaultPubsubQueueSize
// validateConfig validates whether the provided config has valid values and sets
// the invalid ones to default.
func validateConfig(cfg *Config) {
if cfg.QueueSize > 0 {
return
}
return cfg
log.WithFields(logrus.Fields{
"queueSize": cfg.QueueSize,
"default": defaultPubsubQueueSize,
}).Warning("Invalid pubsub queue size, setting the queue size to the default value")
cfg.QueueSize = defaultPubsubQueueSize
}

View File

@@ -115,6 +115,57 @@ func (s *Service) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return earliestAvailableSlot, custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the earliest available slot.
//
// IMPORTANT: This function should only be called when Fulu is enabled. The caller is responsible
// for checking params.FuluEnabled() before calling this function.
func (s *Service) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
s.custodyInfoLock.Lock()
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
return errors.New("no custody info available")
}
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot < s.custodyInfo.earliestAvailableSlot {
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
// This check applies regardless of whether we're early or late in the chain
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity, not epoch-level
// This prevents allowing increases to slots within minRequiredEpoch that are after its first slot
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "epoch start")
}
// Prevent any increase that would put earliest slot beyond the minimum required slot
if earliestAvailableSlot > s.custodyInfo.earliestAvailableSlot && earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot), minRequiredSlot, minRequiredEpoch,
)
}
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then
@@ -208,11 +259,11 @@ func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
}
func fuluForkSlot() (primitives.Slot, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)

View File

@@ -4,6 +4,7 @@ import (
"context"
"strings"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
@@ -167,6 +168,148 @@ func TestUpdateCustodyInfo(t *testing.T) {
}
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
t.Run("Valid update", func(t *testing.T) {
const (
initialSlot primitives.Slot = 50
newSlot primitives.Slot = 100
groupCount uint64 = 5
)
// Set up a scenario where we're far enough in the chain that increasing to newSlot is valid
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 100 // Well beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
service := &Service{
// Set genesis time in the past so currentSlot is the "current" slot
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: initialSlot,
groupCount: groupCount,
},
}
err := service.UpdateEarliestAvailableSlot(newSlot)
require.NoError(t, err)
require.Equal(t, newSlot, service.custodyInfo.earliestAvailableSlot)
require.Equal(t, groupCount, service.custodyInfo.groupCount) // Should preserve group count
})
t.Run("Earlier slot - allowed for backfill", func(t *testing.T) {
const initialSlot primitives.Slot = 100
const earlierSlot primitives.Slot = 50
service := &Service{
genesisTime: time.Now(),
custodyInfo: &custodyInfo{
earliestAvailableSlot: initialSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(earlierSlot)
require.NoError(t, err)
require.Equal(t, earlierSlot, service.custodyInfo.earliestAvailableSlot) // Should decrease for backfill
})
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - late in chain", func(t *testing.T) {
// Set current time far enough in the future to have a meaningful MIN_EPOCHS_FOR_BLOCK_REQUESTS period
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 100 // Well beyond the minimum
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Calculate the minimum allowed epoch
minRequiredEpoch := currentEpoch - minEpochsForBlocks
minRequiredSlot := primitives.Slot(minRequiredEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Try to set earliest slot to a value within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period (should fail)
attemptedSlot := minRequiredSlot + 1000 // Within the mandatory retention period
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: minRequiredSlot - 100, // Current value is before the min required
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Prevent increase at epoch boundary - slot precision matters", func(t *testing.T) {
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 976 // Current epoch
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
minRequiredEpoch := currentEpoch - minEpochsForBlocks // = 976
storedEarliestSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) - 232 // Before minRequired
// Try to set earliest to slot 8 of the minRequiredEpoch (should fail with slot comparison)
attemptedSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) + 8
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: storedEarliestSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err, "Should prevent increasing earliest slot beyond the minimum required SLOT (not just epoch)")
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - early in chain", func(t *testing.T) {
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks - 10 // Early in chain, BEFORE we have MIN_EPOCHS_FOR_BLOCK_REQUESTS of history
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Current earliest slot is at slot 100
currentEarliestSlot := primitives.Slot(100)
// Try to increase earliest slot to slot 1000 (which would be within the mandatory window from currentSlot)
attemptedSlot := primitives.Slot(1000)
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: currentEarliestSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err, "Should prevent increasing earliest slot within the mandatory retention window, even early in chain")
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Nil custody info - should return error", func(t *testing.T) {
service := &Service{
genesisTime: time.Now(),
custodyInfo: nil, // No custody info set
}
err := service.UpdateEarliestAvailableSlot(100)
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "no custody info available"))
})
}
func TestCustodyGroupCountFromPeer(t *testing.T) {
const (
expectedENR uint64 = 7

View File

@@ -126,6 +126,7 @@ type (
EarliestAvailableSlot(ctx context.Context) (primitives.Slot, error)
CustodyGroupCount(ctx context.Context) (uint64, error)
UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
CustodyGroupCountFromPeer(peer.ID) uint64
}
)

View File

@@ -3,6 +3,7 @@ package p2p
import (
"strings"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/peerstore"
"github.com/prometheus/client_golang/prometheus"
@@ -26,12 +27,25 @@ var (
Help: "The number of peers in a given state.",
},
[]string{"state"})
p2pMaxPeers = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_max_peers",
Help: "The target maximum number of peers.",
})
p2pPeerCountDirectionType = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_peer_count_direction_type",
Help: "The number of peers in a given direction and type.",
},
[]string{"direction", "type"})
connectedPeersCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "connected_libp2p_peers",
Help: "Tracks the total number of connected libp2p peers by agent string",
},
[]string{"agent"},
)
minimumPeersPerSubnet = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_minimum_peers_per_subnet",
Help: "The minimum number of peers to connect to per subnet",
})
avgScoreConnectedClients = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "connected_libp2p_peers_average_scores",
Help: "Tracks the overall p2p scores of connected libp2p peers by agent string",
@@ -174,18 +188,26 @@ var (
)
func (s *Service) updateMetrics() {
-store := s.Host().Peerstore()
connectedPeers := s.peers.Connected()
p2pPeerCount.WithLabelValues("Connected").Set(float64(len(connectedPeers)))
p2pPeerCount.WithLabelValues("Disconnected").Set(float64(len(s.peers.Disconnected())))
p2pPeerCount.WithLabelValues("Connecting").Set(float64(len(s.peers.Connecting())))
p2pPeerCount.WithLabelValues("Disconnecting").Set(float64(len(s.peers.Disconnecting())))
p2pPeerCount.WithLabelValues("Bad").Set(float64(len(s.peers.Bad())))
+store := s.Host().Peerstore()
-numConnectedPeersByClient := make(map[string]float64)
+upperTCP := strings.ToUpper(string(peers.TCP))
+upperQUIC := strings.ToUpper(string(peers.QUIC))
+p2pPeerCountDirectionType.WithLabelValues("inbound", upperTCP).Set(float64(len(s.peers.InboundConnectedWithProtocol(peers.TCP))))
+p2pPeerCountDirectionType.WithLabelValues("inbound", upperQUIC).Set(float64(len(s.peers.InboundConnectedWithProtocol(peers.QUIC))))
+p2pPeerCountDirectionType.WithLabelValues("outbound", upperTCP).Set(float64(len(s.peers.OutboundConnectedWithProtocol(peers.TCP))))
+p2pPeerCountDirectionType.WithLabelValues("outbound", upperQUIC).Set(float64(len(s.peers.OutboundConnectedWithProtocol(peers.QUIC))))
+connectedPeersCountByClient := make(map[string]float64)
peerScoresByClient := make(map[string][]float64)
-for i := 0; i < len(connectedPeers); i++ {
-p := connectedPeers[i]
+for _, p := range connectedPeers {
pid, err := peer.Decode(p.String())
if err != nil {
log.WithError(err).Debug("Could not decode peer string")
@@ -193,16 +215,18 @@ func (s *Service) updateMetrics() {
}
foundName := agentFromPid(pid, store)
-numConnectedPeersByClient[foundName] += 1
+connectedPeersCountByClient[foundName] += 1
// Get peer scoring data.
overallScore := s.peers.Scorers().Score(pid)
peerScoresByClient[foundName] = append(peerScoresByClient[foundName], overallScore)
}
connectedPeersCount.Reset() // Clear out previous results.
-for agent, total := range numConnectedPeersByClient {
+for agent, total := range connectedPeersCountByClient {
connectedPeersCount.WithLabelValues(agent).Set(total)
}
avgScoreConnectedClients.Reset() // Clear out previous results.
for agent, scoringData := range peerScoresByClient {
avgScore := average(scoringData)

View File

@@ -25,6 +25,7 @@ package peers
import (
"context"
"net"
"slices"
"sort"
"strings"
"time"
@@ -81,29 +82,31 @@ const (
type InternetProtocol string
const (
TCP = "tcp"
QUIC = "quic"
TCP = InternetProtocol("tcp")
QUIC = InternetProtocol("quic")
)
-// Status is the structure holding the peer status information.
-type Status struct {
-ctx context.Context
-scorers *scorers.Service
-store *peerdata.Store
-ipTracker map[string]uint64
-rand *rand.Rand
-ipColocationWhitelist []*net.IPNet
-}
+type (
+// Status is the structure holding the peer status information.
+Status struct {
+ctx context.Context
+scorers *scorers.Service
+store *peerdata.Store
+ipTracker map[string]uint64
+rand *rand.Rand
+ipColocationWhitelist []*net.IPNet
+}
-// StatusConfig represents peer status service params.
-type StatusConfig struct {
-// PeerLimit specifies maximum amount of concurrent peers that are expected to be connect to the node.
-PeerLimit int
-// ScorerParams holds peer scorer configuration params.
-ScorerParams *scorers.Config
-// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
-IPColocationWhitelist []*net.IPNet
-}
+// StatusConfig represents peer status service params.
+StatusConfig struct {
+// PeerLimit specifies maximum amount of concurrent peers that are expected to be connect to the node.
+PeerLimit int
+// ScorerParams holds peer scorer configuration params.
+ScorerParams *scorers.Config
+// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
+IPColocationWhitelist []*net.IPNet
+}
+)
// NewStatus creates a new status entity.
func NewStatus(ctx context.Context, config *StatusConfig) *Status {
@@ -304,11 +307,8 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
connectedStatus := peerData.ConnState == Connecting || peerData.ConnState == Connected
if connectedStatus && peerData.MetaData != nil && !peerData.MetaData.IsNil() && peerData.MetaData.AttnetsBitfield() != nil {
indices := indicesFromBitfield(peerData.MetaData.AttnetsBitfield())
-for _, idx := range indices {
-if idx == index {
-peers = append(peers, pid)
-break
-}
+if slices.Contains(indices, index) {
+peers = append(peers, pid)
}
}
}
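The switch from the manual index loop to slices.Contains (also applied in syncRewardsVals later in this change set) is behavior-preserving. A tiny standalone illustration of the equivalence, not taken from the repository:

package main

import (
    "fmt"
    "slices"
)

func main() {
    indices := []uint64{3, 7, 11}
    const index uint64 = 7

    // Old pattern: explicit loop with break.
    found := false
    for _, idx := range indices {
        if idx == index {
            found = true
            break
        }
    }

    // New pattern: slices.Contains from the standard library (Go 1.21+).
    fmt.Println(found == slices.Contains(indices, index)) // true
}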

View File

@@ -345,17 +345,17 @@ func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
return "", errors.Errorf("%s: %s", invalidRPCMessageType, msg)
}
-beaconConfig := params.BeaconConfig()
+cfg := params.BeaconConfig()
// Check if the message is to be updated in fulu.
-if epoch >= beaconConfig.FuluForkEpoch {
+if epoch >= cfg.FuluForkEpoch {
if version, ok := fuluMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}
}
// Check if the message is to be updated in altair.
-if epoch >= beaconConfig.AltairForkEpoch {
+if epoch >= cfg.AltairForkEpoch {
if version, ok := altairMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -106,12 +107,16 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
_ = cancel // govet fix for lost cancel. Cancel is handled in service.Stop().
-cfg = validateConfig(cfg)
+validateConfig(cfg)
privKey, err := privKey(cfg)
if err != nil {
return nil, errors.Wrapf(err, "failed to generate p2p private key")
}
p2pMaxPeers.Set(float64(cfg.MaxPeers))
minimumPeersPerSubnet.Set(float64(flags.Get().MinimumPeersPerSubnet))
metaData, err := metaDataFromDB(ctx, cfg.DB)
if err != nil {
log.WithError(err).Error("Failed to create peer metadata")

View File

@@ -514,17 +514,26 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
//
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
-subnetsPerNode := params.BeaconConfig().SubnetsPerNode
-subs := make([]uint64, 0, subnetsPerNode)
+cfg := params.BeaconConfig()
-for i := uint64(0); i < subnetsPerNode; i++ {
+if flags.Get().SubscribeToAllSubnets {
+subnets := make([]uint64, 0, cfg.AttestationSubnetCount)
+for i := range cfg.AttestationSubnetCount {
+subnets = append(subnets, i)
+}
+return subnets, nil
+}
+subnets := make([]uint64, 0, cfg.SubnetsPerNode)
+for i := range cfg.SubnetsPerNode {
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
if err != nil {
-return nil, err
+return nil, errors.Wrap(err, "compute subscribed subnet")
}
-subs = append(subs, sub)
+subnets = append(subnets, sub)
}
-return subs, nil
+return subnets, nil
}
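With the new --subscribe-all-subnets branch, the function returns either the full range [0, ATTESTATION_SUBNET_COUNT) or SUBNETS_PER_NODE deterministically chosen subnets. A minimal sketch of those two shapes; the constants (64 subnets, 2 per node) and the pick function are illustrative assumptions, not the real implementation:

package main

import "fmt"

const (
    attestationSubnetCount = 64 // illustrative stand-in for ATTESTATION_SUBNET_COUNT
    subnetsPerNode         = 2  // illustrative stand-in for SUBNETS_PER_NODE
)

// subscribedSubnets sketches the branching added in the diff: subscribe to every
// subnet when the flag is set, otherwise pick subnetsPerNode subnets through some
// deterministic pick function (a stub here).
func subscribedSubnets(subscribeAll bool, pick func(i uint64) uint64) []uint64 {
    if subscribeAll {
        subnets := make([]uint64, 0, attestationSubnetCount)
        for i := range uint64(attestationSubnetCount) {
            subnets = append(subnets, i)
        }
        return subnets
    }
    subnets := make([]uint64, 0, subnetsPerNode)
    for i := range uint64(subnetsPerNode) {
        subnets = append(subnets, pick(i))
    }
    return subnets
}

func main() {
    all := subscribedSubnets(true, nil)
    few := subscribedSubnets(false, func(i uint64) uint64 { return 10 + i })
    fmt.Println(len(all), few) // 64 [10 11]
}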
// Spec pseudocode definition:

View File

@@ -514,17 +514,39 @@ func TestDataColumnSubnets(t *testing.T) {
func TestSubnetComputation(t *testing.T) {
db, err := enode.OpenDB("")
-assert.NoError(t, err)
+require.NoError(t, err)
defer db.Close()
-priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
-assert.NoError(t, err)
-convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
-assert.NoError(t, err)
-localNode := enode.NewLocalNode(db, convertedKey)
-retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
-assert.NoError(t, err)
-assert.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
+priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
+require.NoError(t, err)
+convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
+require.NoError(t, err)
+localNode := enode.NewLocalNode(db, convertedKey)
+cfg := params.BeaconConfig()
+t.Run("standard", func(t *testing.T) {
+retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
+require.NoError(t, err)
+require.Equal(t, cfg.SubnetsPerNode, uint64(len(retrievedSubnets)))
+require.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
+})
+t.Run("subscribed to all", func(t *testing.T) {
+gFlags := new(flags.GlobalFlags)
+gFlags.SubscribeToAllSubnets = true
+flags.Init(gFlags)
+defer flags.Init(new(flags.GlobalFlags))
+retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
+require.NoError(t, err)
+require.Equal(t, cfg.AttestationSubnetCount, uint64(len(retrievedSubnets)))
+for i := range cfg.AttestationSubnetCount {
+require.Equal(t, i, retrievedSubnets[i])
+}
+})
}
func TestInitializePersistentSubnets(t *testing.T) {

View File

@@ -213,6 +213,11 @@ func (s *FakeP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return earliestAvailableSlot, custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot -- fake.
func (*FakeP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
return nil
}
// CustodyGroupCountFromPeer -- fake.
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0

View File

@@ -499,6 +499,15 @@ func (s *TestP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return s.earliestAvailableSlot, s.custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot .
func (s *TestP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
s.custodyInfoMut.Lock()
defer s.custodyInfoMut.Unlock()
s.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then

View File

@@ -68,7 +68,10 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
if !params.FuluEnabled() {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
}
return privKeyFromFile(defaultKeyPath)
}

View File

@@ -97,7 +97,7 @@ func (s *Service) endpoints(
endpoints = append(endpoints, s.beaconEndpoints(ch, stater, blocker, validatorServer, coreService)...)
endpoints = append(endpoints, s.configEndpoints()...)
endpoints = append(endpoints, s.eventsEndpoints()...)
-endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
+endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, blocker, coreService)...)
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
@@ -1184,6 +1184,7 @@ func (s *Service) eventsEndpoints() []endpoint {
func (s *Service) prysmBeaconEndpoints(
ch *stategen.CanonicalHistory,
stater lookup.Stater,
blocker lookup.Blocker,
coreService *core.Service,
) []endpoint {
server := &beaconprysm.Server{
@@ -1194,6 +1195,7 @@ func (s *Service) prysmBeaconEndpoints(
CanonicalHistory: ch,
BeaconDB: s.cfg.BeaconDB,
Stater: stater,
Blocker: blocker,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
CoreService: coreService,
@@ -1266,6 +1268,28 @@ func (s *Service) prysmBeaconEndpoints(
handler: server.PublishBlobs,
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/beacon/states/{state_id}/query",
name: namespace + ".QueryBeaconState",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.QueryBeaconState,
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/beacon/blocks/{block_id}/query",
name: namespace + ".QueryBeaconBlock",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.QueryBeaconBlock,
methods: []string{http.MethodPost},
},
}
}

View File

@@ -127,6 +127,8 @@ func Test_endpoints(t *testing.T) {
"/prysm/v1/beacon/states/{state_id}/validator_count": {http.MethodGet},
"/prysm/v1/beacon/chain_head": {http.MethodGet},
"/prysm/v1/beacon/blobs": {http.MethodPost},
"/prysm/v1/beacon/states/{state_id}/query": {http.MethodPost},
"/prysm/v1/beacon/blocks/{block_id}/query": {http.MethodPost},
}
prysmNodeRoutes := map[string][]string{

View File

@@ -4,6 +4,7 @@ import (
"encoding/json"
"fmt"
"net/http"
"slices"
"strconv"
"strings"
@@ -388,12 +389,9 @@ func syncRewardsVals(
scIndices := make([]primitives.ValidatorIndex, 0, len(allScIndices))
scVals := make([]*precompute.Validator, 0, len(allScIndices))
for _, valIdx := range valIndices {
-for _, scIdx := range allScIndices {
-if valIdx == scIdx {
-scVals = append(scVals, allVals[valIdx])
-scIndices = append(scIndices, valIdx)
-break
-}
+if slices.Contains(allScIndices, valIdx) {
+scVals = append(scVals, allVals[valIdx])
+scIndices = append(scIndices, valIdx)
}
}

View File

@@ -597,7 +597,18 @@ func (s *Server) SubmitSyncCommitteeSubscription(w http.ResponseWriter, r *http.
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
totalDuration := epochDuration * time.Duration(epochsToWatch)
-cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
+subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
+seen := make(map[uint64]bool)
+var subnetIndices []uint64
+for _, idx := range sub.SyncCommitteeIndices {
+subnetIdx := idx / subcommitteeSize
+if !seen[subnetIdx] {
+seen[subnetIdx] = true
+subnetIndices = append(subnetIndices, subnetIdx)
+}
+}
+cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, subnetIndices, totalDuration)
}
}
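The handler now derives subnet IDs from sync committee positions by integer division and de-duplicates them before caching. A worked sketch of that mapping, assuming mainnet-like values (SYNC_COMMITTEE_SIZE = 512, SYNC_COMMITTEE_SUBNET_COUNT = 4, so each subcommittee holds 128 positions):

package main

import "fmt"

func main() {
    const (
        syncCommitteeSize        = 512 // assumed, mainnet-like
        syncCommitteeSubnetCount = 4   // assumed, mainnet-like
    )
    subcommitteeSize := uint64(syncCommitteeSize / syncCommitteeSubnetCount) // 128

    committeeIndices := []uint64{5, 100, 200, 300, 511}

    // Same dedup-by-map approach as the handler: position / subcommitteeSize
    // yields the subnet, and each subnet is recorded once.
    seen := make(map[uint64]bool)
    var subnets []uint64
    for _, idx := range committeeIndices {
        subnet := idx / subcommitteeSize
        if !seen[subnet] {
            seen[subnet] = true
            subnets = append(subnets, subnet)
        }
    }
    fmt.Println(subnets) // [0 1 2 3]: positions 5 and 100 collapse into subnet 0
}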

View File

@@ -1049,9 +1049,8 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
s.SubmitSyncCommitteeSubscription(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
subnets, _, _, _ := cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
-require.Equal(t, 2, len(subnets))
+require.Equal(t, 1, len(subnets))
assert.Equal(t, uint64(0), subnets[0])
-assert.Equal(t, uint64(2), subnets[1])
})
t.Run("multiple", func(t *testing.T) {
cache.SyncSubnetIDs.EmptyAllCaches()
@@ -1070,7 +1069,7 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
assert.Equal(t, uint64(0), subnets[0])
subnets, _, _, _ = cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
require.Equal(t, 1, len(subnets))
-assert.Equal(t, uint64(2), subnets[0])
+assert.Equal(t, uint64(0), subnets[0])
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)

View File

@@ -3,6 +3,7 @@ package lookup
import (
"context"
"fmt"
"math"
"strconv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
@@ -283,9 +284,13 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
return make([]*blocks.VerifiedROBlob, 0), nil
}
-fuluForkSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
-if err != nil {
-return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
+// Compute the first Fulu slot.
+fuluForkSlot := primitives.Slot(math.MaxUint64)
+if fuluForkEpoch := params.BeaconConfig().FuluForkEpoch; fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
+fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
+if err != nil {
+return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
+}
}
// Convert versioned hashes to indices if provided
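The guard exists because FuluForkEpoch defaults to the far-future epoch (MaxUint64), and slots.EpochStart reports that conversion as an overflow error; skipping it leaves fuluForkSlot at MaxUint64, so every real slot compares as pre-Fulu. A tiny sketch of the wrap-around the guard avoids, with an illustrative 32-slot epoch:

package main

import (
    "fmt"
    "math"
)

func main() {
    const slotsPerEpoch = 32 // illustrative
    farFutureEpoch := uint64(math.MaxUint64)

    // Naive conversion wraps around: MaxUint64 * 32 overflows uint64.
    overflowed := farFutureEpoch * slotsPerEpoch
    fmt.Println(overflowed < farFutureEpoch) // true: the product wrapped

    // The defensive pattern: only convert when the fork epoch is actually set.
    fuluForkSlot := uint64(math.MaxUint64)
    if forkEpoch := farFutureEpoch; forkEpoch != math.MaxUint64 {
        fuluForkSlot = forkEpoch * slotsPerEpoch
    }
    fmt.Println(fuluForkSlot == math.MaxUint64) // true: left at "never"
}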

View File

@@ -587,6 +587,51 @@ func TestGetBlob(t *testing.T) {
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "not supported before", rpcErr.Err.Error())
})
t.Run("fulu fork epoch not set (MaxUint64)", func(t *testing.T) {
// Setup with Deneb fork enabled but Fulu fork epoch set to MaxUint64 (not set/far future)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
cfg.FuluForkEpoch = primitives.Epoch(math.MaxUint64) // Not set / far future
params.OverrideBeaconConfig(cfg)
// Create and save Deneb block and blob sidecars
denebSlot := util.SlotAtEpoch(t, cfg.DenebForkEpoch)
_, tempBlobStorage := filesystem.NewEphemeralBlobStorageAndFs(t)
denebBlockWithBlobs, denebBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [fieldparams.RootLength]byte{}, denebSlot, 2, util.WithDenebSlot(denebSlot))
denebBlockRoot := denebBlockWithBlobs.Root()
verifiedDenebBlobs := verification.FakeVerifySliceForTest(t, denebBlobSidecars)
for i := range verifiedDenebBlobs {
err := tempBlobStorage.Save(verifiedDenebBlobs[i])
require.NoError(t, err)
}
err := db.SaveBlock(t.Context(), denebBlockWithBlobs)
require.NoError(t, err)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: tempBlobStorage,
}
// Should successfully retrieve blobs even when FuluForkEpoch is not set
retrievedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]))
require.IsNil(t, rpcErr)
require.Equal(t, 2, len(retrievedBlobs))
// Verify blob content matches
for i, retrievedBlob := range retrievedBlobs {
require.NotNil(t, retrievedBlob.BlobSidecar)
require.DeepEqual(t, denebBlobSidecars[i].Blob, retrievedBlob.Blob)
require.DeepEqual(t, denebBlobSidecars[i].KzgCommitment, retrievedBlob.KzgCommitment)
}
})
}
func TestBlobs_CommitmentOrdering(t *testing.T) {

View File

@@ -5,11 +5,13 @@ go_library(
srcs = [
"handlers.go",
"server.go",
"ssz_query.go",
"validator_count.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/beacon",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -27,10 +29,13 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/query:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -41,10 +46,12 @@ go_test(
name = "go_default_test",
srcs = [
"handlers_test.go",
"ssz_query_test.go",
"validator_count_test.go",
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -63,10 +70,13 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -18,6 +18,7 @@ type Server struct {
CanonicalHistory *stategen.CanonicalHistory
BeaconDB beacondb.ReadOnlyDatabase
Stater lookup.Stater
Blocker lookup.Blocker
ChainInfoFetcher blockchain.ChainInfoFetcher
FinalizationFetcher blockchain.FinalizationFetcher
CoreService *core.Service

View File

@@ -0,0 +1,202 @@
package beacon
import (
"encoding/json"
"errors"
"io"
"net/http"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
"github.com/OffchainLabs/prysm/v6/encoding/ssz/query"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/httputil"
sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
"github.com/OffchainLabs/prysm/v6/runtime/version"
)
// QueryBeaconState handles an SSZ query request for the beacon state.
// It returns the serialized SSZQueryResponse as bytes.
func (s *Server) QueryBeaconState(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.QueryBeaconState")
defer span.End()
stateID := r.PathValue("state_id")
if stateID == "" {
httputil.HandleError(w, "state_id is required in URL params", http.StatusBadRequest)
return
}
// Validate path before lookup: it might be expensive.
var req structs.SSZQueryRequest
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Query) == 0 {
httputil.HandleError(w, "Empty query submitted", http.StatusBadRequest)
return
}
path, err := query.ParsePath(req.Query)
if err != nil {
httputil.HandleError(w, "Could not parse path '"+req.Query+"': "+err.Error(), http.StatusBadRequest)
return
}
stateRoot, err := s.Stater.StateRoot(ctx, []byte(stateID))
if err != nil {
var rootNotFoundErr *lookup.StateRootNotFoundError
if errors.As(err, &rootNotFoundErr) {
httputil.HandleError(w, "State root not found: "+rootNotFoundErr.Error(), http.StatusNotFound)
return
}
httputil.HandleError(w, "Could not get state root: "+err.Error(), http.StatusInternalServerError)
return
}
st, err := s.Stater.State(ctx, []byte(stateID))
if err != nil {
shared.WriteStateFetchError(w, err)
return
}
// NOTE: Using the unsafe conversion to proto is acceptable here,
// as we only operate on a copy of the state returned by Stater.
sszObject, ok := st.ToProtoUnsafe().(query.SSZObject)
if !ok {
httputil.HandleError(w, "Unsupported state version for querying: "+version.String(st.Version()), http.StatusBadRequest)
return
}
info, err := query.AnalyzeObject(sszObject)
if err != nil {
httputil.HandleError(w, "Could not analyze state object: "+err.Error(), http.StatusInternalServerError)
return
}
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
if err != nil {
httputil.HandleError(w, "Could not calculate offset and length for path '"+req.Query+"': "+err.Error(), http.StatusInternalServerError)
return
}
encodedState, err := st.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal state to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
response := &sszquerypb.SSZQueryResponse{
Root: stateRoot,
Result: encodedState[offset : offset+length],
}
responseSsz, err := response.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal response to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(st.Version()))
httputil.WriteSsz(w, responseSsz)
}
// QueryBeaconBlock handles an SSZ query request for a beacon block.
// It returns the serialized SSZQueryResponse as bytes.
func (s *Server) QueryBeaconBlock(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.QueryBeaconBlock")
defer span.End()
blockId := r.PathValue("block_id")
if blockId == "" {
httputil.HandleError(w, "block_id is required in URL params", http.StatusBadRequest)
return
}
// Validate path before lookup: it might be expensive.
var req structs.SSZQueryRequest
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Query) == 0 {
httputil.HandleError(w, "Empty query submitted", http.StatusBadRequest)
return
}
path, err := query.ParsePath(req.Query)
if err != nil {
httputil.HandleError(w, "Could not parse path '"+req.Query+"': "+err.Error(), http.StatusBadRequest)
return
}
signedBlock, err := s.Blocker.Block(ctx, []byte(blockId))
if !shared.WriteBlockFetchError(w, signedBlock, err) {
return
}
protoBlock, err := signedBlock.Block().Proto()
if err != nil {
httputil.HandleError(w, "Could not convert block to proto: "+err.Error(), http.StatusInternalServerError)
return
}
block, ok := protoBlock.(query.SSZObject)
if !ok {
httputil.HandleError(w, "Unsupported block version for querying: "+version.String(signedBlock.Version()), http.StatusBadRequest)
return
}
info, err := query.AnalyzeObject(block)
if err != nil {
httputil.HandleError(w, "Could not analyze block object: "+err.Error(), http.StatusInternalServerError)
return
}
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
if err != nil {
httputil.HandleError(w, "Could not calculate offset and length for path '"+req.Query+"': "+err.Error(), http.StatusInternalServerError)
return
}
encodedBlock, err := signedBlock.Block().MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal block to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
blockRoot, err := block.HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not compute block root: "+err.Error(), http.StatusInternalServerError)
return
}
response := &sszquerypb.SSZQueryResponse{
Root: blockRoot[:],
Result: encodedBlock[offset : offset+length],
}
responseSsz, err := response.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal response to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(signedBlock.Version()))
httputil.WriteSsz(w, responseSsz)
}
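For context on how the two new routes would be consumed, here is a hedged client-side sketch against the state query endpoint. The localhost address and port, and the lowercase "query" JSON field name, are assumptions inferred from the registered route and middleware rather than documented facts; the response body is the SSZ-encoded SSZQueryResponse and is printed here only as a byte count.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Assumed local gateway address; adjust to your node's HTTP port.
    url := "http://localhost:3500/prysm/v1/beacon/states/head/query"

    body, err := json.Marshal(map[string]string{"query": ".validators[0].effective_balance"})
    if err != nil {
        panic(err)
    }

    req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Accept", "application/octet-stream")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    raw, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    // The payload is an SSZ-encoded SSZQueryResponse (root + result bytes).
    fmt.Println(resp.Status, len(raw), "bytes")
}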

View File

@@ -0,0 +1,335 @@
package beacon
import (
"bytes"
"context"
"encoding/binary"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
)
func TestQueryBeaconState(t *testing.T) {
ctx := context.Background()
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
require.NoError(t, st.UpdateBalancesAtIndex(0, 42000000000))
tests := []struct {
path string
expectedValue []byte
}{
{
path: ".slot",
expectedValue: func() []byte {
slot := st.Slot()
result, _ := slot.MarshalSSZ()
return result
}(),
},
{
path: ".latest_block_header",
expectedValue: func() []byte {
header := st.LatestBlockHeader()
result, _ := header.MarshalSSZ()
return result
}(),
},
{
path: ".validators",
expectedValue: func() []byte {
b := make([]byte, 0)
validators := st.Validators()
for _, v := range validators {
vBytes, _ := v.MarshalSSZ()
b = append(b, vBytes...)
}
return b
}(),
},
{
path: ".validators[0]",
expectedValue: func() []byte {
v, _ := st.ValidatorAtIndex(0)
result, _ := v.MarshalSSZ()
return result
}(),
},
{
path: ".validators[0].withdrawal_credentials",
expectedValue: func() []byte {
v, _ := st.ValidatorAtIndex(0)
return v.WithdrawalCredentials
}(),
},
{
path: ".validators[0].effective_balance",
expectedValue: func() []byte {
v, _ := st.ValidatorAtIndex(0)
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, uint64(v.EffectiveBalance))
return b
}(),
},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
chainService := &chainMock.ChainService{Optimistic: false, FinalizedRoots: make(map[[32]byte]bool)}
s := &Server{
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
Stater: &testutil.MockStater{
BeaconStateRoot: stateRoot[:],
BeaconState: st,
},
}
requestBody := &structs.SSZQueryRequest{
Query: tt.path,
}
var buf bytes.Buffer
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/states/{state_id}/query", &buf)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.QueryBeaconState(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
expectedResponse := &sszquerypb.SSZQueryResponse{
Root: stateRoot[:],
Result: tt.expectedValue,
}
sszExpectedResponse, err := expectedResponse.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpectedResponse, writer.Body.Bytes())
})
}
}
func TestQueryBeaconStateInvalidRequest(t *testing.T) {
ctx := context.Background()
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
tests := []struct {
name string
stateId string
path string
code int
errorString string
}{
{
name: "empty query submitted",
stateId: "head",
path: "",
errorString: "Empty query submitted",
},
{
name: "invalid path",
stateId: "head",
path: ".invalid[]]",
errorString: "Could not parse path",
},
{
name: "non-existent field",
stateId: "head",
path: ".non_existent_field",
code: http.StatusInternalServerError,
errorString: "Could not calculate offset and length for path",
},
{
name: "empty state ID",
stateId: "",
path: "",
},
{
name: "far future slot",
stateId: "1000000000000",
path: "",
},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
chainService := &chainMock.ChainService{Optimistic: false, FinalizedRoots: make(map[[32]byte]bool)}
s := &Server{
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
Stater: &testutil.MockStater{
BeaconStateRoot: stateRoot[:],
BeaconState: st,
},
}
requestBody := &structs.SSZQueryRequest{
Query: tt.path,
}
var buf bytes.Buffer
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/states/{state_id}/query", &buf)
request.SetPathValue("state_id", tt.stateId)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.QueryBeaconState(writer, request)
if tt.code == 0 {
tt.code = http.StatusBadRequest
}
require.Equal(t, tt.code, writer.Code)
if tt.errorString != "" {
errorString := writer.Body.String()
require.Equal(t, true, strings.Contains(errorString, tt.errorString))
}
})
}
}
func TestQueryBeaconBlock(t *testing.T) {
randaoReveal, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.NoError(t, err)
root, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
signature, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.NoError(t, err)
att := &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root,
Source: &eth.Checkpoint{
Epoch: 1,
Root: root,
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: root,
},
},
Signature: signature,
}
tests := []struct {
name string
path string
block interfaces.ReadOnlySignedBeaconBlock
expectedValue []byte
}{
{
name: "slot",
path: ".slot",
block: func() interfaces.ReadOnlySignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Slot = 123
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return sb
}(),
expectedValue: func() []byte {
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, 123)
return b
}(),
},
{
name: "randao_reveal",
path: ".body.randao_reveal",
block: func() interfaces.ReadOnlySignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Body.RandaoReveal = randaoReveal
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return sb
}(),
expectedValue: randaoReveal,
},
{
name: "attestations",
path: ".body.attestations",
block: func() interfaces.ReadOnlySignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{
att,
}
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return sb
}(),
expectedValue: func() []byte {
b, err := att.MarshalSSZ()
require.NoError(t, err)
return b
}(),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: tt.block}
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
}
requestBody := &structs.SSZQueryRequest{
Query: tt.path,
}
var buf bytes.Buffer
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/blocks/{block_id}/query", &buf)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.QueryBeaconBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
blockRoot, err := tt.block.Block().HashTreeRoot()
require.NoError(t, err)
expectedResponse := &sszquerypb.SSZQueryResponse{
Root: blockRoot[:],
Result: tt.expectedValue,
}
sszExpectedResponse, err := expectedResponse.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpectedResponse, writer.Body.Bytes())
})
}
}

View File

@@ -229,7 +229,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
sBlk.SetVoluntaryExits(vs.getExits(head, sBlk.Block().Slot()))
// Set sync aggregate. New in Altair.
-vs.setSyncAggregate(ctx, sBlk)
+vs.setSyncAggregate(ctx, sBlk, head)
// Set bls to execution change. New in Capella.
vs.setBlsToExecData(sBlk, head)
@@ -312,14 +312,14 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
rob, err := blocks.NewROBlockWithRoot(block, root)
if block.IsBlinded() {
block, blobSidecars, err = vs.handleBlindedBlock(ctx, block)
if errors.Is(err, builderapi.ErrBadGateway) {
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
} else if block.Version() >= version.Deneb {
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
}
if err != nil {
if errors.Is(err, builderapi.ErrBadGateway) && block.IsBlinded() {
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
}
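Both before and after this reordering, a bad gateway from the builder relay on a blinded block is treated as a soft failure: the proposal is logged and reported as successful because the block may still arrive over P2P. A stripped-down sketch of that decision follows; errBadGateway is a stand-in sentinel, not the real builder API error value.

package main

import (
    "errors"
    "fmt"
)

var errBadGateway = errors.New("bad gateway") // stand-in for the builder API sentinel

// handleProposalError mirrors the branch: a bad gateway from the relay on a
// blinded block is tolerated (the block may still be broadcast by the relay),
// anything else is a hard failure.
func handleProposalError(err error, blinded bool) (ok bool, hardErr error) {
    if err == nil {
        return true, nil
    }
    if errors.Is(err, errBadGateway) && blinded {
        return true, nil // optimistic: assume the relay still broadcasts the block
    }
    return false, fmt.Errorf("handle block failed: %w", err)
}

func main() {
    fmt.Println(handleProposalError(errBadGateway, true))  // true <nil>
    fmt.Println(handleProposalError(errBadGateway, false)) // false handle block failed: bad gateway
}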

View File

@@ -5,6 +5,7 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -20,12 +21,12 @@ import (
"github.com/prysmaticlabs/go-bitfield"
)
-func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock) {
+func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock, headState state.BeaconState) {
if blk.Version() < version.Altair {
return
}
-syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot())
+syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot(), headState)
if err != nil {
log.WithError(err).Error("Could not get sync aggregate")
emptySig := [96]byte{0xC0}
@@ -47,7 +48,7 @@ func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBea
// getSyncAggregate retrieves the sync contributions from the pool to construct the sync aggregate object.
// The contributions are filtered based on matching of the input root and slot then profitability.
-func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte) (*ethpb.SyncAggregate, error) {
+func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte, headState state.BeaconState) (*ethpb.SyncAggregate, error) {
_, span := trace.StartSpan(ctx, "ProposerServer.getSyncAggregate")
defer span.End()
@@ -62,7 +63,7 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
// Contributions have to match the input root
proposerContributions := proposerSyncContributions(poolContributions).filterByBlockRoot(root)
-aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions)
+aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions, headState)
if err != nil {
return nil, errors.Wrap(err, "could not get aggregated sync committee messages")
}
@@ -123,6 +124,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
slot primitives.Slot,
root [32]byte,
poolContributions []*ethpb.SyncCommitteeContribution,
st state.BeaconState,
) ([]*ethpb.SyncCommitteeContribution, error) {
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / subcommitteeCount
@@ -146,10 +148,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
messageSigs = append(messageSigs, msg.Signature)
}
}
-st, err := vs.HeadFetcher.HeadState(ctx)
-if err != nil {
-return nil, errors.Wrap(err, "could not get head state")
-}
positions, err := helpers.CurrentPeriodPositions(st, messageIndices)
if err != nil {
return nil, errors.Wrap(err, "could not get sync committee positions")

View File

@@ -9,6 +9,7 @@ import (
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -51,15 +52,15 @@ func TestProposer_GetSyncAggregate_OK(t *testing.T) {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
-aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
+aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot), st)
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
-aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
+aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot), st)
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
-aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
+aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot), st)
require.NoError(t, err)
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
}
@@ -68,7 +69,7 @@ func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
s := &Server{} // Sever is not initialized with sync committee pool.
-s.setSyncAggregate(t.Context(), b)
+s.setSyncAggregate(t.Context(), b, nil)
agg, err := b.Block().Body().SyncAggregate()
require.NoError(t, err)
@@ -138,7 +139,7 @@ func TestProposer_GetSyncAggregate_IncludesSyncCommitteeMessages(t *testing.T) {
}
// The final sync aggregates must have indexes [0,1,2,3] set for both subcommittees
-sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r)
+sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r, st)
require.NoError(t, err)
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(0))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(1))
@@ -194,8 +195,99 @@ func Test_aggregatedSyncCommitteeMessages_NoIntersectionWithPoolContributions(t
BlockRoot: r[:],
}
-aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont})
+aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont}, st)
require.NoError(t, err)
require.Equal(t, 1, len(aggregated))
assert.Equal(t, false, aggregated[0].AggregationBits.BitAt(3))
}
func TestGetSyncAggregate_CorrectStateAtSyncCommitteePeriodBoundary(t *testing.T) {
helpers.ClearCache()
syncPeriodBoundaryEpoch := primitives.Epoch(274176) // Real epoch from the bug report
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
preEpochState, keys := util.DeterministicGenesisStateAltair(t, 100)
require.NoError(t, preEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch-1)) // Last slot of previous epoch
postEpochState := preEpochState.Copy()
require.NoError(t, postEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch+2)) // After 2 missed slots
oldCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
newCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < int(params.BeaconConfig().SyncCommitteeSize); i++ {
if i < len(keys) {
oldCommittee.Pubkeys[i] = keys[i%len(keys)].PublicKey().Marshal()
// Use different keys for new committee to simulate rotation
newCommittee.Pubkeys[i] = keys[(i+10)%len(keys)].PublicKey().Marshal()
}
}
require.NoError(t, preEpochState.SetCurrentSyncCommittee(oldCommittee))
require.NoError(t, postEpochState.SetCurrentSyncCommittee(newCommittee))
mockChainService := &chainmock.ChainService{
State: postEpochState,
}
proposerServer := &Server{
HeadFetcher: mockChainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
slot := primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch + 1 // First slot of new epoch
blockRoot := [32]byte{0x01, 0x02, 0x03}
msg1 := &ethpb.SyncCommitteeMessage{
Slot: slot,
BlockRoot: blockRoot[:],
ValidatorIndex: 0, // This validator is in position 0 of OLD committee
Signature: bls.NewAggregateSignature().Marshal(),
}
msg2 := &ethpb.SyncCommitteeMessage{
Slot: slot,
BlockRoot: blockRoot[:],
ValidatorIndex: 1, // This validator is in position 1 of OLD committee
Signature: bls.NewAggregateSignature().Marshal(),
}
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg1))
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg2))
aggregateWrongState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, postEpochState)
require.NoError(t, err)
aggregateCorrectState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, preEpochState)
require.NoError(t, err)
wrongStateBits := bitfield.Bitlist(aggregateWrongState.SyncCommitteeBits)
correctStateBits := bitfield.Bitlist(aggregateCorrectState.SyncCommitteeBits)
wrongStateHasValidators := false
correctStateHasValidators := false
for i := 0; i < len(wrongStateBits); i++ {
if wrongStateBits[i] != 0 {
wrongStateHasValidators = true
break
}
}
for i := 0; i < len(correctStateBits); i++ {
if correctStateBits[i] != 0 {
correctStateHasValidators = true
break
}
}
assert.Equal(t, true, correctStateHasValidators, "Correct state should include validators that sent messages")
assert.Equal(t, false, wrongStateHasValidators, "Wrong state should not find validators in incorrect sync committee")
t.Logf("Wrong state aggregate bits: %x (has validators: %v)", wrongStateBits, wrongStateHasValidators)
t.Logf("Correct state aggregate bits: %x (has validators: %v)", correctStateBits, correctStateHasValidators)
}
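The failure the test reproduces reduces to sync committee period arithmetic: positions computed against a state from the next period do not line up with messages produced against the previous one, so the head state must come from the correct side of the boundary. A small sketch of that boundary, assuming the mainnet EPOCHS_PER_SYNC_COMMITTEE_PERIOD of 256:

package main

import "fmt"

// syncCommitteePeriod sketches the period arithmetic behind the test above:
// committee positions are only valid for states in the same sync committee period.
func syncCommitteePeriod(epoch uint64) uint64 {
    const epochsPerSyncCommitteePeriod = 256 // assumed mainnet value
    return epoch / epochsPerSyncCommitteePeriod
}

func main() {
    boundaryEpoch := uint64(274176) // same epoch used in the test above
    fmt.Println(syncCommitteePeriod(boundaryEpoch - 1)) // 1070: pre-boundary state
    fmt.Println(syncCommitteePeriod(boundaryEpoch))     // 1071: post-boundary state
}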

View File

@@ -266,6 +266,8 @@ type WriteOnlyEth1Data interface {
SetEth1DepositIndex(val uint64) error
ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (primitives.Epoch, error)
ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error)
SetExitBalanceToConsume(val primitives.Gwei) error
SetEarliestExitEpoch(val primitives.Epoch) error
}
// WriteOnlyValidators defines a struct which only has write access to validators methods.
@@ -333,6 +335,7 @@ type WriteOnlyWithdrawals interface {
DequeuePendingPartialWithdrawals(num uint64) error
SetNextWithdrawalIndex(i uint64) error
SetNextWithdrawalValidatorIndex(i primitives.ValidatorIndex) error
SetPendingPartialWithdrawals(val []*ethpb.PendingPartialWithdrawal) error
}
type WriteOnlyConsolidations interface {

View File

@@ -91,3 +91,33 @@ func (b *BeaconState) exitEpochAndUpdateChurn(totalActiveBalance primitives.Gwei
return b.earliestExitEpoch, nil
}
// SetExitBalanceToConsume sets the exit balance to consume. This method mutates the state.
func (b *BeaconState) SetExitBalanceToConsume(exitBalanceToConsume primitives.Gwei) error {
if b.version < version.Electra {
return errNotSupported("SetExitBalanceToConsume", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
b.exitBalanceToConsume = exitBalanceToConsume
b.markFieldAsDirty(types.ExitBalanceToConsume)
return nil
}
// SetEarliestExitEpoch sets the earliest exit epoch. This method mutates the state.
func (b *BeaconState) SetEarliestExitEpoch(earliestExitEpoch primitives.Epoch) error {
if b.version < version.Electra {
return errNotSupported("SetEarliestExitEpoch", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
b.earliestExitEpoch = earliestExitEpoch
b.markFieldAsDirty(types.EarliestExitEpoch)
return nil
}

View File

@@ -100,3 +100,24 @@ func (b *BeaconState) DequeuePendingPartialWithdrawals(n uint64) error {
return nil
}
// SetPendingPartialWithdrawals sets the pending partial withdrawals. This method mutates the state.
func (b *BeaconState) SetPendingPartialWithdrawals(pendingPartialWithdrawals []*eth.PendingPartialWithdrawal) error {
if b.version < version.Electra {
return errNotSupported("SetPendingPartialWithdrawals", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
if pendingPartialWithdrawals == nil {
return errors.New("cannot set nil pending partial withdrawals")
}
b.sharedFieldReferences[types.PendingPartialWithdrawals].MinusRef()
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.pendingPartialWithdrawals = pendingPartialWithdrawals
b.markFieldAsDirty(types.PendingPartialWithdrawals)
return nil
}

View File

@@ -650,6 +650,11 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateFulu) (state.BeaconState
for i, v := range st.ProposerLookahead {
proposerLookahead[i] = primitives.ValidatorIndex(v)
}
// Proposer lookahead must be exactly 2 * SLOTS_PER_EPOCH in length. We fill in with zeroes instead of erroring out here
for i := len(proposerLookahead); i < 2*fieldparams.SlotsPerEpoch; i++ {
proposerLookahead = append(proposerLookahead, 0)
}
fieldCount := params.BeaconConfig().BeaconStateFuluFieldCount
b := &BeaconState{
version: version.Fulu,

View File

@@ -17,7 +17,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
@@ -61,7 +60,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",

View File

@@ -3,12 +3,12 @@ package backfill
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -348,7 +348,7 @@ func (*Service) Status() error {
// minimumBackfillSlot determines the lowest slot that backfill needs to download based on looking back
// MIN_EPOCHS_FOR_BLOCK_REQUESTS from the current slot.
func minimumBackfillSlot(current primitives.Slot) primitives.Slot {
-oe := helpers.MinEpochsForBlockRequests()
+oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oe > slots.MaxSafeEpoch() {
oe = slots.MaxSafeEpoch()
}

View File

@@ -5,7 +5,6 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
@@ -84,7 +83,7 @@ func TestServiceInit(t *testing.T) {
}
func TestMinimumBackfillSlot(t *testing.T) {
oe := helpers.MinEpochsForBlockRequests()
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currSlot := (oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
minSlot := minimumBackfillSlot(primitives.Slot(currSlot))
@@ -109,7 +108,7 @@ func testReadN(ctx context.Context, t *testing.T, c chan batch, n int, into []ba
}
func TestBackfillMinSlotDefault(t *testing.T) {
oe := helpers.MinEpochsForBlockRequests()
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
current := primitives.Slot((oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
s := &Service{}
specMin := minimumBackfillSlot(current)

View File

@@ -14,7 +14,7 @@ import (
"github.com/pkg/errors"
)
const signatureVerificationInterval = 50 * time.Millisecond
const signatureVerificationInterval = 5 * time.Millisecond
type signatureVerifier struct {
set *bls.SignatureBatch

View File

@@ -90,10 +90,10 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
// custodyGroupCount computes the custody group count based on the custody requirement,
// the validators custody requirement, and whether the node is subscribed to all data subnets.
func (s *Service) custodyGroupCount(context.Context) (uint64, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
if flags.Get().SubscribeAllDataSubnets {
return beaconConfig.NumberOfCustodyGroups, nil
return cfg.NumberOfCustodyGroups, nil
}
validatorsCustodyRequirement, err := s.validatorsCustodyRequirement()
@@ -101,7 +101,7 @@ func (s *Service) custodyGroupCount(context.Context) (uint64, error) {
return 0, errors.Wrap(err, "validators custody requirement")
}
return max(beaconConfig.CustodyRequirement, validatorsCustodyRequirement), nil
return max(cfg.CustodyRequirement, validatorsCustodyRequirement), nil
}
// validatorsCustodyRequirements computes the custody requirements based on the

View File

@@ -116,11 +116,11 @@ func withSubscribeAllDataSubnets(t *testing.T, fn func()) {
func TestUpdateCustodyInfoIfNeeded(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.NumberOfCustodyGroups = 128
beaconConfig.CustodyRequirement = 4
beaconConfig.SamplesPerSlot = 8
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
cfg.CustodyRequirement = 4
cfg.SamplesPerSlot = 8
params.OverrideBeaconConfig(cfg)
t.Run("Skip update when actual custody count >= target", func(t *testing.T) {
setup := setupCustodyTest(t, false)
@@ -159,7 +159,7 @@ func TestUpdateCustodyInfoIfNeeded(t *testing.T) {
require.NoError(t, err)
const expectedSlot = primitives.Slot(100)
setup.assertCustodyInfo(t, expectedSlot, beaconConfig.NumberOfCustodyGroups)
setup.assertCustodyInfo(t, expectedSlot, cfg.NumberOfCustodyGroups)
})
})
}

View File

@@ -1122,19 +1122,21 @@ func randomPeer(
}
}
slices.Sort(nonRateLimitedPeers)
if len(nonRateLimitedPeers) == 0 {
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
time.Sleep(waitPeriod)
continue
if len(nonRateLimitedPeers) > 0 {
slices.Sort(nonRateLimitedPeers)
randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
}
randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
select {
case <-time.After(waitPeriod):
case <-ctx.Done():
}
}
return "", ctx.Err()

View File

@@ -45,6 +45,7 @@ func TestFetchDataColumnSidecars(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
// Start the trusted setup.
@@ -760,6 +761,12 @@ func TestVerifyDataColumnSidecarsByPeer(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 2}}
params.OverrideBeaconConfig(cfg)
t.Run("nominal", func(t *testing.T) {
const (
start, stop = 0, 15

View File

@@ -0,0 +1,133 @@
# Gossip validation
**Note:** This design doc currently covers only some aspects of gossip validation. Additional topics will be added in the future; this note will be removed once the document is complete.
## Table of Contents
- [State usage in gossip validation](#state-usage-in-gossip-validation)
- [Beacon Blocks](#beacon-blocks)
- [Head state is often good enough](#head-state-is-often-good-enough)
- [Attestations](#attestations)
- [Head is good again](#head-is-good-again)
- [Other verifications and caches](#other-verifications-and-caches)
- [Dropping expensive computations](#dropping-expensive-computations)
## State usage in gossip validation
The beacon node needs to verify different objects that arrive via gossipsub: beacon blocks, attestations, aggregated attestations, sync committee messages, data column sidecars, slashings, etc. Each of these objects requires a different validation path. However, they all have in common that verifying them requires access to information from a beacon state. The question is: *which beacon state should we use?*
### Beacon Blocks
Before we get into implementation details, let us analyze some explicit checks that we need to perform. Suppose this is the forkchoice state of the node:
```
A <--- B <--- C <----------- E <-- .... <--- Y (<--- head of the chain)
               \
                ----- D
```
Here the block `A` is finalized, the block `B` is justified, and the head of the chain (from the point of view of this beacon node) is at `Y`. Suppose moreover that many slots, and even epochs, have passed between `D` and `Y`. The node now receives a block based on `D`:
```
D <--- Z
```
How can we validate that the block's `proposer_index` was indeed the validator expected to propose at this slot? Which state should we use to compute the expected proposer index? If we take the post-state of `Y`, which is this node's current head state, and advance it to the slot of `Z`, the proposer index may be different than if we take the post-state of `D` and advance it accordingly. Now we put ourselves in the shoes of the proposer of `Z`. This validator may have honestly not seen the chain `E <-- ... <--- Y` and instead kept `D` as head the whole time, simply processing slots. Eventually she finds herself in the position of proposing a block, and she needs to base it on `D`. Hence the phrasing in the p2p spec:
```
- _[REJECT]_ The block is proposed by the expected `proposer_index` for the
block's slot in the context of the current shuffling (defined by
`parent_root`/`slot`). If the `proposer_index` cannot immediately be verified
against the expected shuffling, the block MAY be queued for later processing
while proposers for the block's branch are calculated -- in such a case _do
not_ `REJECT`, instead `IGNORE` this message.
```
### Head state is often good enough
So when is the head state good enough to validate the proposer index in the above case? The situation is slightly different pre-Fulu than post-Fulu with the proposer lookahead, but essentially what we need to verify is that the shufflings cannot differ between the post-state of `Y` and the post-state of `D` when both are advanced to the current slot.
Let `S` be `Z`'s slot and `E` be its epoch. The proposer shuffling for `Z` was determined at slot `32 * (E - 1)`. Let `X` be the latest ancestor of `Z` with slot less than or equal to `32 * (E - 1)`. If `X` is an ancestor of `C` (but not `C` itself), then the shuffling on the `Z` branch will be the same as on the `Y` branch for slot `S`. For example, this is guaranteed to happen if `Z`, `Y`, and `D` are all in the same epoch `E`.
This takes care of the shuffling. However, the actual computation of the proposer index also requires the active validator indices, and this slice is determined at the latest epoch transition into `Z`'s epoch.
So a good algorithm when importing `Z` at slot `S` and epoch `E` is as follows.
1. Check if the head state is at epoch `E`.
2. Check if the target checkpoint for `Y` at `E` equals the target checkpoint for `Z` at `E`.
If both of these conditions hold, then the head state already has the right proposer index.
3. If either 1) or 2) does not hold, then the checkpoint state on `Z`'s branch at `E` will hold the right proposer index for `Z`'s slot. Oftentimes this state is faster to obtain than the post-state of `D`: being a checkpoint state, it will be cached if this checkpoint was ever canonical.
This takes care of most reorgs that happen on mainnet; the only problem occurs when deep forks are attempted (usually by struggling nodes building on some old block). In these cases the parent block is often already finalized, so we do not even attempt to import those blocks. But the problem is exacerbated when the chain is not finalizing, because any such struggling block will cause a fork and fail the above checks, so the head state cannot be used to determine the proposer index.
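As a minimal sketch of the algorithm above (the names here, such as `headView` and `stateForProposerCheck`, are invented for this document and do not correspond to Prysm's actual state or forkchoice APIs):
```go
package main

import "fmt"

// Illustrative-only types; the real code works with Prysm's state and forkchoice packages.
type Epoch uint64
type Root [32]byte

// headView is a hypothetical, minimal view of the node's head.
type headView struct {
	epoch   Epoch          // epoch of the head state
	targets map[Epoch]Root // target checkpoint roots on the head branch
}

// stateForProposerCheck picks the state used to verify Z's proposer index:
// the head state if it is already at Z's epoch and shares Z's target checkpoint,
// otherwise the checkpoint state on Z's branch at that epoch (often cached).
func stateForProposerCheck(head headView, zEpoch Epoch, zTarget Root) string {
	if head.epoch == zEpoch && head.targets[zEpoch] == zTarget {
		return "head state"
	}
	return "checkpoint state on Z's branch at epoch " + fmt.Sprint(zEpoch)
}

func main() {
	head := headView{epoch: 101, targets: map[Epoch]Root{101: {0xaa}}}
	fmt.Println(stateForProposerCheck(head, 101, Root{0xaa})) // same epoch and target: head state suffices
	fmt.Println(stateForProposerCheck(head, 101, Root{0xbb})) // targets differ: fall back to the checkpoint state
}
```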
### Attestations
Something similar happens for attestations. When receiving an attestation
```
AttestationData(
    slot,
    index,
    beacon_block_root,
    source,
    target=Checkpoint(
        epoch: E,
        root: R,
    )
)
```
We make sure that we know the block with root `beacon_block_root`. We also check that the target checkpoint is consistent. In particular, we know that the beacon state of `R` (possibly advanced) at slot `32 * E` is in the same epoch as `slot` and has the right beacon committees to check whether the attester was supposed to attest at `slot` or not. Indeed, the ingredients to compute the beacon committee at the given slot are built out of the `randao_mixes` entry of epoch `E - 2` (it is `E - MIN_SEED_LOOKAHEAD - 1`) and the active validator indices of epoch `E`. Therefore any state that belongs to the same chain containing `R` and `beacon_block_root` and has epoch greater than or equal to `E - 2` will contain all the information necessary to validate the randao mix, and it needs to be at exactly `E` to validate the active validator indices. We thus always take the checkpoint state, that is, `R` advanced to slot `32 * E`.
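To make the epoch arithmetic concrete, here is a tiny sketch (assuming mainnet's `MIN_SEED_LOOKAHEAD = 1`; the helper name is made up for this document and epochs near genesis are ignored for brevity):
```go
package main

import "fmt"

type Epoch uint64

const minSeedLookahead = 1 // MIN_SEED_LOOKAHEAD in the spec

// committeeSeedEpoch returns the epoch whose randao mix seeds the committees of epoch e,
// i.e. e - MIN_SEED_LOOKAHEAD - 1, which is e - 2 on mainnet.
func committeeSeedEpoch(e Epoch) Epoch {
	return e - minSeedLookahead - 1
}

func main() {
	const target Epoch = 100
	// Any state on the attestation's chain with epoch >= 98 contains this randao mix ...
	fmt.Println("randao mix epoch:", committeeSeedEpoch(target))
	// ... but only a state at exactly epoch 100 has the right active validator indices.
	fmt.Println("active indices epoch:", target)
}
```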
### Head is good again
Now when is the head state good enough to validate an attestation as above? We already have the answer in the previous paragraph: the state needs to have the right active validator indices and the same randao mix. The mix is rarely a problem: it requires that the head state's checkpoint at `E - 2` coincides with the `beacon_block_root` checkpoint at `E - 2`. The active validator indices are more likely to differ, but the check here is very simple. If:
1. The head state's epoch is `E`.
2. The head target at `E` has root `R`.
Then the head state is good enough to validate this attestation. If either of these conditions fails, the right state to use is `R` advanced to slot `32 * E`, which is likely to be cached if this state was ever a checkpoint in a canonical chain.
### Other verifications and caches
So we see that we have two types of verifications. The first are verifications related to the randao mix, seeds, and similar inputs used to determine committees; these typically require a state from one or two epochs ago, when the seed was fixed. The second are verifications related to active validator indices, which require a state at the start of the current epoch (or the epoch of the object being validated). This applies to all verifications: proposer index, beacon committee attester index, sync committee index, PTC index, etc.
Since computing active validator indices, proposer indices, beacon committees, etc. is very expensive, we keep several caches for these (more than we actually need; some should be removed from our codebase). Because these caches are updated at the epoch transition, they are keyed either by the latest state root before the epoch transition or by the checkpoint root itself.
In addition, forkchoice keeps an O(1) cache that maps each block to its corresponding target checkpoint. A general algorithm to perform verifications for arriving gossip elements is therefore as follows:
```
            Gossiped Element Arrives
                       |
                       v
┌──────────────────────────────────────────────┐
│ Is element part of head state or descendant? │
└──────────────────────────────────────────────┘
              /                  \
         YES                               NO
          |                                 |
          v                                 v
┌───────────────────┐    ┌─────────────────────────────────────┐
│ Use Head State    │    │ Is target same as head's target for │
│ (possibly         │    │ current epoch?                      │
│ advanced to same  │    └─────────────────────────────────────┘
│ epoch as element) │                  /         \
└───────────────────┘            YES                    NO
                                  |                      |
                                  v                      v
                            ┌──────────┐        ┌──────────────────┐
                            │ Use Head │        │ Targets differ:  │
                            │ State    │        │ Get target state │
                            └──────────┘        └──────────────────┘
                                                         |
                                                         v
                                           ┌──────────────────────────┐
                                           │ Is parent in same epoch? │
                                           └──────────────────────────┘
                                                  /             \
                                         YES                             NO
                                          |                               |
                                          v                               v
                            ┌───────────────────────────┐    ┌─────────────────────────┐
                            │ Use forkchoice to get     │    │ Take parent state and   │
                            │ parent's target (equals   │    │ advance to current      │
                            │ gossiped element target). │    │ epoch (= target state). │
                            │ Use checkpoint cache.     │    │                         │
                            └───────────────────────────┘    └─────────────────────────┘
```
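Read as code, the diagram boils down to a small decision function. The sketch below is only illustrative: `chainView`, `element`, and `stateToUse` are names invented for this document, and the real implementation consults forkchoice, the head state, and the checkpoint-state cache rather than plain structs.
```go
package main

import "fmt"

type Epoch uint64
type Root [32]byte

// element is a hypothetical description of a gossiped object.
type element struct {
	descendsFromHead bool  // is it part of the head state or a descendant of it?
	target           Root  // its target checkpoint for the current epoch
	parentEpoch      Epoch // epoch of its parent block
}

// chainView is a hypothetical snapshot of what the node knows.
type chainView struct {
	currentEpoch Epoch
	headTarget   Root // head's target checkpoint for the current epoch
}

// stateToUse mirrors the flowchart: prefer the head state, fall back to the target
// (checkpoint) state, and only advance the parent state when the parent is in an
// earlier epoch.
func stateToUse(c chainView, e element) string {
	if e.descendsFromHead {
		return "head state (advanced to the element's epoch if needed)"
	}
	if e.target == c.headTarget {
		return "head state"
	}
	// Targets differ: we need the target state on the element's branch.
	if e.parentEpoch == c.currentEpoch {
		return "parent's target via forkchoice, served from the checkpoint cache"
	}
	return "parent state advanced to the current epoch (= target state)"
}

func main() {
	c := chainView{currentEpoch: 10, headTarget: Root{0xaa}}
	fmt.Println(stateToUse(c, element{descendsFromHead: true}))
	fmt.Println(stateToUse(c, element{target: Root{0xaa}}))
	fmt.Println(stateToUse(c, element{target: Root{0xbb}, parentEpoch: 10}))
	fmt.Println(stateToUse(c, element{target: Root{0xbb}, parentEpoch: 9}))
}
```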
### Dropping expensive computations
If the checkpoint cache misses (for example, if the checkpoint was never actually a checkpoint in our canonical chain), then regenerating the checkpoint state could be very expensive. In this case we should consider dropping or queueing the gossiped object. For attestations we have some heuristics for this, to avoid validating old, useless attestations. For beacon blocks this is not the case: we will always try to import a block that we receive over gossip. This is dangerous in case of non-finality, as it can lead to the regeneration of very old states.
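As a purely illustrative example of such a policy (this is not Prysm's actual heuristic; every name and threshold below is hypothetical), the drop-or-queue decision could look like this:
```go
package main

import "fmt"

type Epoch uint64

// gossipObject is a hypothetical description of an incoming gossip message.
type gossipObject struct {
	kind        string // "attestation" or "block"
	targetEpoch Epoch
	stateCached bool // is the state needed for validation already cached?
}

// decide drops old attestations whose validation would force an expensive state
// regeneration, while blocks are always processed. Illustrative only.
func decide(o gossipObject, currentEpoch, maxAttestationAge Epoch) string {
	if o.stateCached {
		return "validate now"
	}
	if o.kind == "attestation" && currentEpoch > o.targetEpoch+maxAttestationAge {
		return "drop: stale attestation, not worth regenerating an old state"
	}
	return "queue or validate: accept the cost of regenerating the state"
}

func main() {
	fmt.Println(decide(gossipObject{kind: "attestation", targetEpoch: 90}, 100, 2))
	fmt.Println(decide(gossipObject{kind: "block", targetEpoch: 90}, 100, 2))
}
```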

View File

@@ -1366,16 +1366,16 @@ func TestFetchSidecars(t *testing.T) {
})
t.Run("Nominal", func(t *testing.T) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
samplesPerSlot := beaconConfig.SamplesPerSlot
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
samplesPerSlot := cfg.SamplesPerSlot
// Define "now" to be one epoch after genesis time + retention period.
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerSlot := cfg.SecondsPerSlot
slotsPerEpoch := cfg.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
retentionEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
nowWrtGenesisSecs := retentionEpochs.Add(1).Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)

View File

@@ -530,12 +530,12 @@ func TestOriginOutsideRetention(t *testing.T) {
func TestFetchOriginSidecars(t *testing.T) {
ctx := t.Context()
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerSlot := cfg.SecondsPerSlot
slotsPerEpoch := cfg.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
retentionEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
genesisValidatorRoot := [fieldparams.RootLength]byte{}
@@ -683,6 +683,7 @@ func TestFetchOriginColumns(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
const (

View File

@@ -192,6 +192,13 @@ var (
},
)
dataColumnsRecoveredFromELTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_total",
Help: "Count the number of times data columns have been recovered from the execution layer.",
},
)
// Data column sidecar validation, beacon metrics specs
dataColumnSidecarVerificationRequestsCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_data_column_sidecar_processing_requests_total",
@@ -279,6 +286,7 @@ func (s *Service) updateMetrics() {
topicPeerCount.WithLabelValues(formattedTopic).Set(float64(len(s.cfg.p2p.PubSub().ListPeers(formattedTopic))))
}
subscribedTopicPeerCount.Reset()
for _, topic := range s.cfg.p2p.PubSub().GetTopics() {
subscribedTopicPeerCount.WithLabelValues(topic).Set(float64(len(s.cfg.p2p.PubSub().ListPeers(topic))))
}

View File

@@ -5,6 +5,7 @@ import (
"context"
"encoding/hex"
"fmt"
"slices"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -382,10 +383,8 @@ func (s *Service) savePending(root [32]byte, pending any, isEqual func(other any
// Skip if the attestation/aggregate from the same validator already exists in
// the pending queue.
for _, a := range s.blkRootToPendingAtts[root] {
if isEqual(a) {
return
}
if slices.ContainsFunc(s.blkRootToPendingAtts[root], isEqual) {
return
}
pendingAttCount.Inc()

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -58,6 +59,17 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
return errors.Wrapf(err, "unexpected error computing min valid blob request slot, current_slot=%d", cs)
}
// Extract all needed roots.
roots := make([][fieldparams.RootLength]byte, 0, len(blobIdents))
for _, ident := range blobIdents {
root := bytesutil.ToBytes32(ident.BlockRoot)
roots = append(roots, root)
}
// Filter all available roots in block storage.
availableRoots := s.cfg.beaconDB.AvailableBlocks(ctx, roots)
// Serve each requested blob sidecar.
for i := range blobIdents {
if err := ctx.Err(); err != nil {
closeStream(stream, log)
@@ -69,7 +81,15 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
<-ticker.C
}
s.rateLimiter.add(stream, 1)
root, idx := bytesutil.ToBytes32(blobIdents[i].BlockRoot), blobIdents[i].Index
// Do not serve a blob sidecar if the corresponding block is not available.
if !availableRoots[root] {
log.Trace("Peer requested blob sidecar by root but corresponding block not found in db")
continue
}
sc, err := s.cfg.blobStorage.Get(root, idx)
if err != nil {
log := log.WithFields(logrus.Fields{
@@ -113,19 +133,19 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
}
func validateBlobByRootRequest(blobIdents types.BlobSidecarsByRootReq, slot primitives.Slot) error {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
epoch := slots.ToEpoch(slot)
blobIdentCount := uint64(len(blobIdents))
if epoch >= beaconConfig.ElectraForkEpoch {
if blobIdentCount > beaconConfig.MaxRequestBlobSidecarsElectra {
if epoch >= cfg.ElectraForkEpoch {
if blobIdentCount > cfg.MaxRequestBlobSidecarsElectra {
return types.ErrMaxBlobReqExceeded
}
return nil
}
if blobIdentCount > beaconConfig.MaxRequestBlobSidecars {
if blobIdentCount > cfg.MaxRequestBlobSidecars {
return types.ErrMaxBlobReqExceeded
}

View File

@@ -38,8 +38,8 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
defer cancel()
SetRPCStreamDeadlines(stream)
beaconConfig := params.BeaconConfig()
maxRequestDataColumnSidecars := beaconConfig.MaxRequestDataColumnSidecars
cfg := params.BeaconConfig()
maxRequestDataColumnSidecars := cfg.MaxRequestDataColumnSidecars
remotePeer := stream.Conn().RemotePeer()
log := log.WithFields(logrus.Fields{
@@ -70,6 +70,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
log.Trace("Serving data column sidecars by range")
if rangeParameters == nil {
closeStream(stream, log)
return nil
}
@@ -101,7 +102,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
// Once the quota is reached, we're done serving the request.
if maxRequestDataColumnSidecars == 0 {
log.WithField("initialQuota", beaconConfig.MaxRequestDataColumnSidecars).Trace("Reached quota for data column sidecars by range request")
log.WithField("initialQuota", cfg.MaxRequestDataColumnSidecars).Trace("Reached quota for data column sidecars by range request")
break
}
}

View File

@@ -23,18 +23,17 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
//beaconConfig.FuluForkEpoch = beaconConfig.ElectraForkEpoch + 100
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctx := context.Background()
t.Run("wrong message type", func(t *testing.T) {
@@ -47,6 +46,7 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
t.Run("invalid request", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)
@@ -72,8 +72,8 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
code, _, err := readStatusCodeNoDeadline(stream, localP2P.Encoding())
require.NoError(t, err)
require.Equal(t, responseCodeInvalidRequest, code)
assert.NoError(t, err)
assert.Equal(t, responseCodeInvalidRequest, code)
})
localP2P.Connect(remoteP2P)
@@ -94,6 +94,48 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
}
})
t.Run("in the future", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)
localP2P, remoteP2P := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
protocolID := protocol.ID(fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1))
service := &Service{
cfg: &config{
p2p: localP2P,
chain: &chainMock.ChainService{
Slot: &slot,
},
clock: clock,
},
rateLimiter: newRateLimiter(localP2P),
}
var wg sync.WaitGroup
wg.Add(1)
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
_, err := readChunkedDataColumnSidecar(stream, remoteP2P, ctxMap)
assert.Equal(t, true, errors.Is(err, io.EOF))
})
localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &pb.DataColumnSidecarsByRangeRequest{
StartSlot: slot + 1,
Count: 50,
Columns: []uint64{1, 2, 3, 4, 6, 7, 8, 9, 10},
}
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})
t.Run("nominal", func(t *testing.T) {
slot := primitives.Slot(400)
@@ -133,12 +175,12 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
signedBeaconBlockPb.Block.ParentRoot = roots[i-1][:]
}
signedBeaconBlock, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlockPb)
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// There is a discrepancy between the root of the beacon block and the RO data column root,
// but for the sake of this test, we actually don't care.
roblock, err := consensusblocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
roblock, err := blocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
require.NoError(t, err)
roBlocks = append(roBlocks, roblock)
@@ -178,28 +220,28 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
break
}
require.NoError(t, err)
assert.NoError(t, err)
sidecars = append(sidecars, sidecar)
}
require.Equal(t, 8, len(sidecars))
require.Equal(t, root0, sidecars[0].BlockRoot())
require.Equal(t, root0, sidecars[1].BlockRoot())
require.Equal(t, root0, sidecars[2].BlockRoot())
require.Equal(t, root3, sidecars[3].BlockRoot())
require.Equal(t, root3, sidecars[4].BlockRoot())
require.Equal(t, root5, sidecars[5].BlockRoot())
require.Equal(t, root5, sidecars[6].BlockRoot())
require.Equal(t, root5, sidecars[7].BlockRoot())
assert.Equal(t, 8, len(sidecars))
assert.Equal(t, root0, sidecars[0].BlockRoot())
assert.Equal(t, root0, sidecars[1].BlockRoot())
assert.Equal(t, root0, sidecars[2].BlockRoot())
assert.Equal(t, root3, sidecars[3].BlockRoot())
assert.Equal(t, root3, sidecars[4].BlockRoot())
assert.Equal(t, root5, sidecars[5].BlockRoot())
assert.Equal(t, root5, sidecars[6].BlockRoot())
assert.Equal(t, root5, sidecars[7].BlockRoot())
require.Equal(t, uint64(1), sidecars[0].Index)
require.Equal(t, uint64(2), sidecars[1].Index)
require.Equal(t, uint64(3), sidecars[2].Index)
require.Equal(t, uint64(4), sidecars[3].Index)
require.Equal(t, uint64(6), sidecars[4].Index)
require.Equal(t, uint64(7), sidecars[5].Index)
require.Equal(t, uint64(8), sidecars[6].Index)
require.Equal(t, uint64(9), sidecars[7].Index)
assert.Equal(t, uint64(1), sidecars[0].Index)
assert.Equal(t, uint64(2), sidecars[1].Index)
assert.Equal(t, uint64(3), sidecars[2].Index)
assert.Equal(t, uint64(4), sidecars[3].Index)
assert.Equal(t, uint64(6), sidecars[4].Index)
assert.Equal(t, uint64(7), sidecars[5].Index)
assert.Equal(t, uint64(8), sidecars[6].Index)
assert.Equal(t, uint64(9), sidecars[7].Index)
})
localP2P.Connect(remoteP2P)
@@ -215,7 +257,6 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})
}
func TestValidateDataColumnsByRange(t *testing.T) {

View File

@@ -56,18 +56,6 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
return errors.Wrap(err, "validate data columns by root request")
}
requestedColumnsByRoot := make(map[[fieldparams.RootLength]byte][]uint64)
for _, columnIdent := range requestedColumnIdents {
var root [fieldparams.RootLength]byte
copy(root[:], columnIdent.BlockRoot)
requestedColumnsByRoot[root] = append(requestedColumnsByRoot[root], columnIdent.Columns...)
}
// Sort by column index for each root.
for _, columns := range requestedColumnsByRoot {
slices.Sort(columns)
}
// Compute the oldest slot we'll allow a peer to request, based on the current slot.
minReqSlot, err := dataColumnsRPCMinValidSlot(s.cfg.clock.CurrentSlot())
if err != nil {
@@ -84,6 +72,12 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
}
if log.Logger.Level >= logrus.TraceLevel {
requestedColumnsByRoot := make(map[[fieldparams.RootLength]byte][]uint64)
for _, ident := range requestedColumnIdents {
root := bytesutil.ToBytes32(ident.BlockRoot)
requestedColumnsByRoot[root] = append(requestedColumnsByRoot[root], ident.Columns...)
}
// We optimistically assume the peer requests the same set of columns for all roots,
// pre-sizing the map accordingly.
requestedRootsByColumnSet := make(map[string][]string, 1)
@@ -96,6 +90,17 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
log.WithField("requested", requestedRootsByColumnSet).Trace("Serving data column sidecars by root")
}
// Extract all requested roots.
roots := make([][fieldparams.RootLength]byte, 0, len(requestedColumnIdents))
for _, ident := range requestedColumnIdents {
root := bytesutil.ToBytes32(ident.BlockRoot)
roots = append(roots, root)
}
// Filter all available roots in block storage.
availableRoots := s.cfg.beaconDB.AvailableBlocks(ctx, roots)
// Serve each requested data column sidecar.
count := 0
for _, ident := range requestedColumnIdents {
if err := ctx.Err(); err != nil {
@@ -117,6 +122,12 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
s.rateLimiter.add(stream, int64(len(columns)))
// Do not serve a data column sidecar if the corresponding block is not available.
if !availableRoots[root] {
log.Trace("Peer requested blob sidecar by root but corresponding block not found in db")
continue
}
// Retrieve the requested sidecars from the store.
verifiedRODataColumns, err := s.cfg.dataColumnStorage.Get(root, columns)
if err != nil {
@@ -163,9 +174,9 @@ func dataColumnsRPCMinValidSlot(currentSlot primitives.Slot) (primitives.Slot, e
return primitives.Slot(math.MaxUint64), nil
}
beaconConfig := params.BeaconConfig()
minReqEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := beaconConfig.FuluForkEpoch
cfg := params.BeaconConfig()
minReqEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := cfg.FuluForkEpoch
currEpoch := slots.ToEpoch(currentSlot)
if currEpoch > minReqEpochs && currEpoch-minReqEpochs > minStartEpoch {

Some files were not shown because too many files have changed in this diff.