Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-09 13:28:01 -05:00)

Compare commits: c6c9414d8b ... v6.1.3 (27 commits)
Commit SHAs in this range:

- 41e7607092
- cd429dc253
- 5ced1125f2
- f67ca6ae5e
- 9742333f68
- c811fadf33
- 55b9448d41
- 10f8d8c26e
- 4eab41ea4c
- 683608e34a
- fbbf2a1404
- 82f556c50f
- c88aa77ac1
- 0568bec935
- e463bcd1e1
- 5f8eb69201
- 4b98451649
- 0aa248e663
- 6973cd2c5f
- 4e47905884
- a94ea1e5f5
- c0ad87df4b
- 515590e7fe
- 83a171b439
- 4946b007ab
- 3f10439de1
- 5b20352ac6
CHANGELOG.md (+301 lines)
@@ -4,6 +4,307 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10

This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.

### Added

- Added a 1-minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15746)
- Added an expected delay before broadcasting light client p2p messages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15776)
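
The prune timeout above amounts to bounding a potentially long database operation with a deadline. A minimal, generic Go sketch of that pattern (not Prysm's actual implementation; `pruneAttestationsOnEpoch` and its arguments are hypothetical placeholders):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pruneAttestationsOnEpoch stands in for a potentially large DB transaction.
func pruneAttestationsOnEpoch(ctx context.Context, epoch uint64) error {
	select {
	case <-time.After(2 * time.Second): // simulated work
		return nil
	case <-ctx.Done():
		return ctx.Err() // give up once the deadline passes
	}
}

func main() {
	// Bound the prune so a single call cannot hold a transaction open indefinitely.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	if err := pruneAttestationsOnEpoch(ctx, 123); err != nil {
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("prune aborted after timeout")
			return
		}
		fmt.Println("prune failed:", err)
		return
	}
	fmt.Println("prune completed")
}
```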

### Changed

- Replaced `reflect.TypeOf` with `reflect.TypeFor` (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15627)
- Bazel builds with `--config=release` now properly apply `--strip=always` to strip debug symbols from the release assets. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15774)
- Add sources for compute_fork_digest to specrefs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15699)
- Aggregate logs when broadcasting data column sidecars (one per root instead of one per sidecar). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15748)
- `c-kzg-4844`: Update from `v2.1.1` to `v2.1.5`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15708)
- Process pending attestations as soon as the block arrives. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15791)
- Compare received LC messages over gossipsub with locally computed ones before forwarding. Also no longer save updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15783)
- Optimize pending attestation processing by adding batching. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15801)
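
For context on the `reflect.TypeFor` bullet above: Go 1.22 added `reflect.TypeFor[T]()`, which yields the `reflect.Type` of a type parameter directly, replacing the older value-based `reflect.TypeOf` idiom. A small standalone illustration (not code from this diff):

```go
package main

import (
	"fmt"
	"reflect"
)

type Attestation struct{ Slot uint64 }

func main() {
	// Older idiom: build a value (or a typed nil pointer) just to get at the type.
	oldStyle := reflect.TypeOf((*Attestation)(nil)).Elem()

	// Go 1.22+: ask for the type directly via a type parameter.
	newStyle := reflect.TypeFor[Attestation]()

	fmt.Println(oldStyle == newStyle) // true
}
```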

### Removed

- Removed unused configs and hid Prysm-specific configs from the `/eth/v1/config/spec` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15797)

### Fixed

- SSZ-QL: Support nested `List` type (e.g., `ExecutionPayload.Transactions`). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15725)
- Fixed `Unsupported config field kind; value forwarded verbatim` errors for type string. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15773)
- Fixed the `/eth/v1/config/spec` endpoint to properly skip omitted values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15777)
- Fixed ProduceSyncCommitteeContribution not returning an error when the committee index is out of range. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15770)
- Improved GetDutiesV2 by replacing the expensive `helpers.PrecomputeCommittees()` with `CommitteeAssignments`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15784)
- Avoid unnecessary calls to `ExitInformation()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15764)
- `inclusionProofKey`: Include the commitments in the key. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15795)
- Do not reject peers if they have a mismatched version|digest when the next fork epoch is FAR_FUTURE_EPOCH. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15798)
- Don't include entries in the fork schedule if their epoch is set to the far future epoch. Avoids reporting next_fork_version == <unscheduled fork>. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15799)
- Wait for custody info to be initialized before querying it. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15804)
- Fixed the `level=error msg="Could not clean up dirty states" error="OriginBlockRoot: not found in db" prefix=state-gen` error when starting in Kurtosis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15808)
- Correctly clear disconnected peers from `connected_libp2p_peers` and `connected_libp2p_peers_average_scores`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15807)
- `buildStatusFromStream`: Respond with `statusV2` only if Fulu is enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15818)
- Send our real earliest available slot when sending a Status request post-Fulu instead of `0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15818)
- Switched to the built-in min/max functions (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15817)
- `findPeersWithSubnets`: If the filter function returns an error for a given peer, log an error and skip the peer instead of aborting the whole function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15815)
- `computeIndicesByRootByPeer`: If the loop returns an error for a given peer, log an error and skip the peer instead of aborting the whole function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15815)
- Fixed issue #15738 where separate goroutines assumed sole responsibility for topic registration. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15779)
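
The min/max bullet above refers to the `min` and `max` built-ins introduced in Go 1.21, which remove the need for small per-package helper functions. A standalone illustration (not code from this diff):

```go
package main

import "fmt"

// Before Go 1.21, this kind of helper was common:
func maxUint64(a, b uint64) uint64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	var a, b uint64 = 3, 7
	fmt.Println(maxUint64(a, b)) // 7, via the old helper
	fmt.Println(max(a, b))       // 7, via the built-in
	fmt.Println(min(a, b, 1))    // 1, the built-ins are variadic over ordered types
}
```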

## [v6.1.0](https://github.com/prysmaticlabs/prysm/compare/v6.0.5...v6.1.0) and [v6.1.1](https://github.com/prysmaticlabs/prysm/compare/v6.1.0...v6.1.1) - 2025-09-26

This release has support for Fusaka testnets as well as many mainnet improvements. Testnet operators are required to update prior to the testnet fork date. See [PR #15721](https://github.com/OffchainLabs/prysm/pull/15721).

Mainnet operators are encouraged to update per their regular update cadence.

Note: This release was re-issued as v6.1.1 to distribute release assets without debug symbols. See issue [#15760](https://github.com/OffchainLabs/prysm/issues/15760).

#### Noteworthy improvements, changes and bugfixes:

- The `--disable-experimental-state` beacon-node flag has been removed, marking the full graduation of the [Copy-on-write design](https://hackmd.io/zlTJ6Qe_RiueT3y2R77BvA) for BeaconState fields, which reduces the memory overhead of keeping multiple BeaconStates in RAM for block processing. Congrats @rkapka!
- The behavior set by the `--attest_timely` flag is now on by default, with the flag itself deprecated.
- GetDutiesV2 introduced, lowering duty request latency and beacon-node load. Multiple other improvements and bugfixes have been made to harden the validator run loop.
- New validator flag `--max-health-checks` configures a validator to switch to a fallback beacon node after the given number of health check failures.
- Improvements to the REST-mode validator, defaulting to SSZ where available and adding SSZ support to more Beacon API endpoints.
- The Beacon API now honors the gzip content-encoding header.
- Log timestamps now include milliseconds.
- Full Fusaka support for testnets!

**Special thanks to external contributors!**: @Alleysira, @KaloyanTanev, @rose2221

[1] To override this limit, use the validator flag `--suggested-gas-limit` or set the `builder.gas_limit` setting in your [proposer settings file](https://prysm.offchainlabs.com/docs/configure-prysm/fee-recipient/#advanced-configure-mev-builder-and-gas-limit).

### Added

- PeerDAS: Add `CustodyInfo` in `BeaconNode`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15378)
- GetDutiesV2 gRPC function; removes the committee list from duties, replacing it with the committee length and validator committee index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15273)
- Add SSZ support for two attestation APIs: `/eth/v1/validator/attestation_data` and. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15377)
- Added a feature flag for the validator client to use GetDutiesV2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15380)
- PeerDAS: Implement DAS. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15367)
- `verifyBlobCommitmentCount`: Print the max allowed blob count in the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15386)
- Data column support for the beacon API event endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15387)
- Implement EIP-7917: Stable proposer lookahead. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15129)
- Implement `dataColumnSidecarByRootRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15405)
- New ssz-only flag for the validator client to enable calling REST APIs in SSZ, starting with the get block endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15390)
- Implement `dataColumnSidecarsByRangeRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15421)
- Add SSZ support for the `submitPoolAttestationsV2` beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15422)
- New `StatusV2` proto message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Implement `SendDataColumnSidecarsByRangeRequest`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15430)
- Implement `SendDataColumnSidecarsByRootRequest`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15430)
- Implement the beacon API blob sidecar endpoint for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15436)
- PeerDAS: Implement the new Fulu Metadata. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15440)
- PeerDAS: Implement reconstruction. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15454)
- Implement engine method `GetBlobsV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15469)
- Implement execution `ReconstructDataColumnSidecars`, which reconstructs data column sidecars from data fetched from the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15469)
- New `--batch-verifier-limit` flag to configure the max number of signatures to batch verify on gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15467)
- `disable-attest-timely` flag to disable timely attesting. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15410)
- Added the `max-health-checks` flag, which sets the maximum number of times the validator tries to check the health of the beacon node before timing out. 0 or a negative number means indefinite (the default is 0). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Add method `VersionToForkEpochMap()` to the `BeaconChainConfig` in the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15482)
- Add a log capitalization analyzer and apply changes across the codebase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15452)
- Slot-aware cache for seen data column gossip p2p to reduce memory usage. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15477)
- Gzip compression for the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14982)
- Implement data column sidecar reconstruction with data retrieved from the execution client when receiving a block via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15483)
- Add support for parsing and handling `ExecutionPayloadAndBlobsBundleV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15503)
- Added a new PRYSM_API_OVERRIDE_ACCEPT environment variable to override the SSZ accept header, as a replacement for the flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15433)
- Implements the `/eth/v1/beacon/states/{state_id}/proposer_lookahead` Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15525)
- Added new metadata fields (attnets, syncnets, custody_group_count) to `/eth/v1/node/identity`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15506)
- Add the BLOB_SCHEDULE field to the `/eth/v1/config/spec` endpoint response to expose the blob scheduling configuration for networks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15485)
- Add timing metric `publish_block_v2_duration_milliseconds` to measure the processing duration of the `PublishBlockV2` beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15539)
- Add Fulu case for `saveStatesEfficientInternal`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15553)
- Support for the Fusaka `nfd` ENR field, and changes to the semantics of the eth2 field. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15501)
- Implement post-Fulu MEV-boost protocol changes where relays only return status codes for blinded block submissions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15486)
- Added Fulu block support to StreamBlocksAltair. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15583)
- All outbound HTTP requests from the validator client now include a custom `User-Agent` header in the format `Prysm/<name>/<version>` (see the sketch after this list). This enhances observability and enables upstream systems to correctly identify Prysm validator clients by their name and version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15574)
- Fixes [#15435](https://github.com/OffchainLabs/prysm/issues/15435). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15574)
- Data column syncing for Fusaka. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15564)
- Added specification references which map spec to implementation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15592)
- Warm the data column storage cache at start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15629)
- Add `--data-column-path` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15629)
- Initialize package for the SSZ Query Language. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15588)
- In FetchDataColumnSidecars, after retrieving sidecars from peers, if some sidecars are still missing for a given root and a reconstruction is possible (combining sidecars already retrieved from peers and sidecars in storage), then reconstruct the missing sidecars instead of trying to fetch them from peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15593)
- Fulu block proposal changes for the Beacon API and gRPC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15628)
- Retry fetching origin data column sidecars when starting from a checkpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15634)
- Aggregate and pack sync committee messages into blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15608)
- Support `List` type for SSZ-QL. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15637)
- Configured the beacon node to seek peers when we have validator custody requirements. If one or more validators are connected to the beacon node, then the beacon node should seek a diverse set of peers such that broadcasting to all data column subnets for a block proposal is more efficient. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15654)
- SSZ-QL: Add element information for `Vector` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15668)
- SSZ-QL: Support multi-dimensional tag parsing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15668)
- Added more metadata to debug logs when initial sync requests fail with "invalid data returned from peer" errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15674)
- Added Fulu types for web3signer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15498)
- Added erigon/caplin to known p2p agent strings. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15678)
- Add Fulu fork transition tests for mainnet and minimal configurations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15666)
- Fulu proposer lookahead epoch processing tests for mainnet and minimal configurations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15667)
- Populate sszInfo of variable-length fields in AnalyzeObjects. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15676)
- KZG proof batch verification for data column gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15617)
- Added flag `--p2p-colocation-whitelist` to accept CIDRs which will bypass the p2p colocation restrictions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15685)
- Fulu spec test coverage for the general package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15682)
- Implemented syncing in a disjoint network with respect to the data column sidecars subscribed to by peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Add retry logic when GetBlobsV2 is called. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15520)
- Call GetBlobsV2 as soon as we receive the first data column sidecar or block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15520)
- Added the new post-Fulu /eth/v1/beacon/blobs/{block_id} endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15610)
- SSZ-QL: Handle `Bitlist` and `Bitvector` types. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15704)
- Added the `/eth/v1/debug/beacon/data_column_sidecars/{block_id}` endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15701)
- Support Fulu genesis block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15652)
- Update spectests to 1.6.0-beta.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15741)
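
For the `User-Agent` bullet above, the header is an ordinary HTTP request header; a minimal Go sketch of setting one in that `Prysm/<name>/<version>` shape (the name, version, and URL here are illustrative placeholders, not values taken from this diff):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://localhost:3500/eth/v1/node/version", nil)
	if err != nil {
		panic(err)
	}
	// Format: Prysm/<name>/<version>, e.g. a validator client identifying itself.
	req.Header.Set("User-Agent", "Prysm/validator/v6.1.0")
	fmt.Println(req.Header.Get("User-Agent"))
}
```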

### Changed

- `parseIndices`: Return `[]int` instead of `[]uint64`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15386)
- Reclaim memory manually in some tests that fuzz the beacon state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15395)
- When the REST API is enabled, the get block API defaults to requesting and receiving SSZ instead of JSON, with JSON as the fallback. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15390)
- Remove "invalid" from logs for an incoming blob sidecar that is missing its parent or has an out-of-range slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15428)
- In `TopicFromMessage`: No longer assume that all Fulu-specific topics are V3 only. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- Put the initiation of the LC Store behind the `enable-light-client` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15464)
- Default batch signature verification limit increased from 50 to 1000. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15467)
- Increase mainnet DefaultBuilderGasLimit from 36M to 45M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15455)
- Timely attesting is now the default. The `attest-timely` flag is now deprecated. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15410)
- Move the data column reconstruction log to a more accurate place in the code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15475)
- Makes the multivalue slice permanent in the state and removes old paths. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15414)
- Previously, we optimistically believed the beacon node was healthy and tried to get chain start, but now we do a health check at the start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Optimize proposer inclusion proof calculation by pre-caching subtries. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15473)
- Move setter/getter functions for LC Bootstrap into LcStore for a unified interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15476)
- Changed `enable-duties-v2` to `disable-duties-v2` to default to using duties v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15445)
- Changed `uint64` genesis time to use `time.Time`. Also did some refactoring and cleanup that was enabled by these changes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15419)
- Add milliseconds to log timestamps. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15496)
- Move setter/getter functions for LC Updates into LcStore for a unified interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15488)
- Change LC Bootstrap logic to only save bootstraps on finalized checkpoints instead of every block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15497)
- Update links to consensus-specs to point to the `master` branch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15523)
- Changed from an in-memory to a persistent discv5 DB so that local node information (including the key) persists across restarts, keeping the ENR sequence number deterministic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15519)
- Fix some nits associated with data column sidecar verification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15521)
- Include the state root in StateNotFoundError for better debugging of consensus validation failures. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15533)
- When shutting down the sync service, we now send p2p goodbye messages in parallel to maximize the chance of propagating goodbyes to all peers before an unsafe shutdown. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15542)
- Do not compare the liveness response with LH in the e2e Beacon API evaluator. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15556)
- Moved the broadcast and event notifier logic for saving LC updates to the store function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Fixed the issue with broadcasting more than twice per LC Finality update, and the if-case bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Separated the finality update validation rules for saving and broadcasting. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15540)
- Update validator custody to the latest specification, including the new status message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15532)
- Beacon API: optimize validator lookup for large batch request sizes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15558)
- Check that the pending block is in forkchoice before importing a pending attestation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15547)
- Redesign the pending attestation queue. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15024)
- Replaced hardcoded `grpc-gateway-port` with `flags.HTTPServerPort.Name` in `testing/endtoend/components/validator.go`, resolving an inline TODO for improved flag consistency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15236)
- Refactor `htrutil.go` by removing redundant code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15453)
- Moved sync unaggregated attestation cache key generation outside of the lock path. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15572)
- Move aggregated attestation cache key generation outside of critical locks to improve performance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15579)
- Renamed various variables/functions to be clearer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15529)
- Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15590)
- Reject incoming connections when the fork schedule of the connecting peer (parsed from their ENR) has a matching next_fork_epoch, but a mismatched next_fork_version or nfd (next fork digest). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15604)
- Update gohashtree to v0.0.5-beta. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15619)
- Updated consensus spec from v1.6.0-alpha.4 to v1.6.0-alpha.5 with adjusted minimal config parameters. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15621)
- Changed old atomic functions to the new atomic.Int types for safer and clearer code (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15625)
- Start from the justified checkpoint by default. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15636)
- Updated consensus spec from v1.6.0-alpha.5 to v1.6.0-alpha.6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15658)
- Updated outdated documentation links for Web3Signer and Why Bazel. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15631)
- Changed the validatorpb.SignRequest_AggregateAttestationAndProof signing type to use AggregateAttestationAndProofV2 on web3signer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15498)
- Pre-calculate exit epoch, churn and active balance before processing slashings to reduce CPU load. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14990)
- Switched the default of the validator client REST call for submit block from JSON to SSZ. A JSON fallback will be attempted. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15645)
- Deprecated the /prysm/v1/beacon/blobs endpoint and made it return an error after the Fulu fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15643)
- Upgraded gossipsub to v0.14.2 and libp2p to v0.39.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15677)
- Prysm will now downscore peers that return invalid block_by_range responses. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15686)
- Filtering peers for data column subnets: Added a one-epoch slack to the peer's head slot view. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fetching data column sidecars: If not all requested sidecars are available for a given root, return the successfully retrieved ones along with a map indicating which could not be fetched. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fetching origin data column sidecars: If only some sidecars are fetched, save the retrieved ones and retry fetching the missing ones on the next attempt. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Renamed the `--enable-experimental-backfill` flag to `--enable-backfill` to signal that it is more mature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15690)
- Restrict best LC update collection to canonical blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15585)
- PeerDAS: Wait for a random delay, then reconstruct data column sidecars and immediately reseed, instead of immediately reconstructing, waiting and then reseeding. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15705)
- Clarified misleading log messages in the beacon-chain/rpc/service gRPC module. [[PR]](https://github.com/prysmaticlabs/prysm/pull/13063)
- Broadcast the block and then the sidecars, instead of broadcasting the block and sidecars concurrently. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15720)
- Broadcast and receive sidecars concurrently instead of sequentially. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15720)
- Changed the blst dependency from `http_archive` to `go_repository` so that gazelle can keep it in sync with go.mod. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15709)
- Updated go to v1.25.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Updated rules_go to v0.57.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Updated protobuf to 28.3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15641)
- Set Fulu fork epochs for the Holesky, Hoodi, and Sepolia testnets. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15721)
- Improve logging of data column sidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15728)
- Updated go.mod to v1.25.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15740)
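
The atomic.Int bullet above refers to the typed atomics added in Go 1.19 (`atomic.Int32`, `atomic.Int64`, `atomic.Bool`, and so on), which make it harder to mix atomic and non-atomic access than the older function-based API. A standalone illustration (not code from this diff):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	// Older style: a plain int64 plus free functions; nothing stops a
	// direct, non-atomic read or write of counterOld elsewhere.
	var counterOld int64
	atomic.AddInt64(&counterOld, 1)
	fmt.Println(atomic.LoadInt64(&counterOld))

	// Newer style: the type itself only exposes atomic operations.
	var counterNew atomic.Int64
	counterNew.Add(1)
	fmt.Println(counterNew.Load())
}
```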

### Deprecated

- Deprecated the `p2p-metadata` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15554)

### Removed

- Removed the //tools/eth1voting tool. This is no longer needed as the beacon chain no longer uses eth1data voting since Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15415)
- Remove the deposit count from the sync new block log. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15420)
- Unused `DataColumnIdentifier` proto message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15423)
- The validator client no longer needs to call the canonical head API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15480)
- Partially reverted PR #15390, removing the `ssz-only` debug flag until there is a real use case for the flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15433)

### Fixed

- Added a regression test for [PR 15369](https://github.com/OffchainLabs/prysm/pull/15369). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15379)
- Added the missing `meta` field to the response of the endpoint `/eth/v1/node/peers` to align with the Beacon API spec (#15370). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15371)
- Fix the blob metric name for peer count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15412)
- Non-deterministic output order of `dataColumnSidecarByRootRPCHandler`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15441)
- Fixed the versioning bug for light client data types in the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15400)
- `--chain-config-file`: Do not use mainnet boot nodes anymore. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15460)
- Fix a panic in DutiesV2 when there is no committee assignment in the epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15466)
- Allow SSZ requests for pending deposits, partial withdrawals and consolidations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15474)
- The validator client shuts down cleanly on error instead of a fatal error. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15401)
- Fixes an edge case where starting the validator client with new validator keys started the slot ticker too early, resulting in replayed slots in the main runner loop. Also fixes an edge case of replayed slots when waiting for account activations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15479)
- DV aggregations failing on the first slot of the epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15156)
- Skip genesis block retrieval when EIP-6110 deposit requests have started, to prevent "pruned history unavailable" errors with execution clients that have pruned pre-merge data. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15494)
- Fixed lookahead initialization at the Fulu fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15450)
- Write the `Content-Encoding` header in the response properly when gzip encoding is requested. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15499)
- Subnet subscriptions: Avoid dynamic subscribing blocking in case not enough peers per subnet are found. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15471)
- Do not apply the gzip middleware to the event stream API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15517)
- Fixed various reasons why a node is banned by its peers when it stops. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15505)
- Use `MinEpochsForDataColumnSidecarsRequest` in `WithinDAPeriod` when in Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15522)
- Return a zero value for `Eth-Consensus-Block-Value` on error to avoid missed block proposals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15526)
- Moved the reconstruction lock to prevent unnecessary work. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15528)
- Fixed variable names, links, and typos in DAS core code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15524)
- Fix builder bid version compatibility to support Electra bids with Fulu blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15536)
- Aligned the submitPoolSyncCommitteeSignatures response with the Beacon API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15516)
- Trigger the payload attribute event as soon as an early block is processed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15541)
- Beacon API proposer duty computation for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15534)
- Fixed the max proofs in `BlobsBundleV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15530)
- Prevent a race on double `ReceiveBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15565)
- Fixed [#15544](https://github.com/OffchainLabs/prysm/issues/15544): Persist the metadata sequence number if it is needed (e.g., the static peer ID option is used or Fulu is enabled). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15554)
- Fix the validateConsensus endpoint handler. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15548)
- The builder version check was using the head block version instead of the current fork's version based on slot; fixes e2e from https://github.com/OffchainLabs/prysm/commit/57e27199bdb9b3ef1af14c3374999aba5e0788a3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15568)
- Don't submit duplicate `SignedContributionAndProof` messages. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15571)
- Genesis state, timestamp and validators root are now ubiquitously available at node startup, supporting tech debt cleanup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15470)
- Fixed a condition where the blob cache could panic when there were fewer sidecars than expected, or none at all, in the cache entry. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15581)
- Fixed the endpoint response to return 404 or 400 after the isOptimistic check. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15559)
- Safeguard against accidental out-of-bounds array access in the dataColumnSidecars method. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15586)
- Fixed NewSignedBeaconBlock calls to use the Block field for proper equivocation handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15595)
- Fixed a regression in the find peer functions introduced in PR #15471, where nodes with equal sequence numbers were incorrectly skipped and the peer count was incorrectly reduced when replacing nodes with higher sequence numbers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15578)
- Fix a bug where a stale computed value in a closure excluded newly required (e.g. attestation) subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15603)
- Fix a bug where the arguments of fillInForkChoiceMissingBlocks were incorrectly placed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15639)
- Fix next epoch proposer duties in Fulu by advancing the state to the beginning of the current epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15642)
- Fix getBlockAttestationsV2 to return [] instead of null when data is empty. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15651)
- Fixed the issue of empty dirs not being deleted when using --blob-storage-layout=by-epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15573)
- Start topic-based peer discovery before initial sync completes so that we have coverage of needed columns when range syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15660)
- Fixed an off-by-one in forkchoice startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15684)
- Mitigate potential supernode clustering due to libp2p ConnManager pruning of non-supernodes; see https://github.com/OffchainLabs/prysm/issues/15607. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15681)
- Initial sync: Do not request data column sidecars for blocks before the retention period. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15644)
- Fixed an incorrect attestation data request where the assigned committee index was used after Electra, instead of 0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15696)
- Use the v2 endpoint for blinded block submission post-Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15716)
- Fixed 'justified' block support missing on blocker.Block and optimized logic between blocker.Block and blocker.Blob. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15715)
- Fix a prysmctl panic when baseFee is not set in genesis.json. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15687)
- Fix getStateRandao not returning historic RANDAO mix values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15653)
- Fix a race in PriorityQueue.Pop by checking emptiness under the write lock (see the sketch after this list) (#15726). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15726)
- In P2P service start, wait for the custody info to be correctly initialized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15732)
- `createLocalNode`: Wait before retrying to retrieve the custody group count if not present. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15735)
- Replaced fmt.Printf with proper test error handling in web3signer keymanager tests, using require.NoError(t, err) instead of t.Fatalf for better error handling and debugging. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15723)
- Fixed a regression introduced in PR #15715: the blocker now returns an error for not found, and error handling correctly handles the error and returns 404 instead of 500. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15742)
- The DA metric was not recording correctly because the if statement on err was accidentally flipped. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15743)
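
The PriorityQueue fix above targets a common check-then-act race: if emptiness is checked under a read lock (or no lock) and the element is removed later under the write lock, another goroutine can drain the queue in between. The remedy is to perform the emptiness check while already holding the write lock. A generic Go sketch of the pattern (not Prysm's PriorityQueue code; the `Queue` type here is a stand-in):

```go
package pqueue

import (
	"errors"
	"sync"
)

var ErrEmpty = errors.New("queue is empty")

// Queue is a minimal stand-in for illustration only.
type Queue struct {
	mu    sync.RWMutex
	items []int
}

// Pop checks emptiness while already holding the write lock, so no other
// goroutine can remove the last item between the check and the removal.
func (q *Queue) Pop() (int, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 { // racy if done earlier under only a read lock
		return 0, ErrEmpty
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, nil
}
```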

### Security

- Updated go to version 1.24.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15561)
- Updated distroless/cc-debian11 to latest to resolve CVE-2024-2961. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15562)
- Updated go to version 1.24.6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15566)
- Updated quic-go to the latest version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15749)

## [v6.0.5](https://github.com/prysmaticlabs/prysm/compare/v6.0.4...v6.0.5) - 2025-09-26

We are releasing a patch update on top of v6.0.4 to address a stability issue with quic-go.
All operators should update as soon as possible to v6.0.5 or later.

### Security

- Updated quic-go to the latest version. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15749)

## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05

This release has more work on PeerDAS and light client support. Additionally, we have a few bug fixes:

@@ -284,7 +284,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*stru
    if resp.StatusCode != http.StatusOK {
        decoder := json.NewDecoder(resp.Body)
        decoder.DisallowUnknownFields()
        errorJson := &server.IndexedVerificationFailureError{}
        errorJson := &server.IndexedErrorContainer{}
        if err := decoder.Decode(errorJson); err != nil {
            return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
        }

@@ -726,6 +726,12 @@ func unexpectedStatusErr(response *http.Response, expected int) error {
            return errors.Wrap(jsonErr, "unable to read response body")
        }
        return errors.Wrap(ErrNotOK, errMessage.Message)
    case http.StatusBadGateway:
        log.WithError(ErrBadGateway).Debug(msg)
        if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
            return errors.Wrap(jsonErr, "unable to read response body")
        }
        return errors.Wrap(ErrBadGateway, errMessage.Message)
    default:
        log.WithError(ErrNotOK).Debug(msg)
        return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))
@@ -12,7 +12,6 @@ import (

    "github.com/OffchainLabs/prysm/v6/api"
    "github.com/OffchainLabs/prysm/v6/api/server/structs"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -22,6 +21,7 @@ import (
    eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/prysmaticlabs/go-bitfield"
    log "github.com/sirupsen/logrus"
)

@@ -21,3 +21,4 @@ var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content

// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")
var ErrBadGateway = errors.Wrap(ErrNotOK, "recv 502 BadGateway response from API")
@@ -6,6 +6,11 @@ import (
    "strings"
)

var (
    ErrIndexedValidationFail = "One or more messages failed validation"
    ErrIndexedBroadcastFail  = "One or more messages failed broadcast"
)

// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
@@ -29,19 +34,38 @@ func (e *DecodeError) Error() string {
    return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}

// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
    Message  string                        `json:"message"`
    Code     int                           `json:"code"`
    Failures []*IndexedVerificationFailure `json:"failures"`
// IndexedErrorContainer wraps a collection of indexed errors.
type IndexedErrorContainer struct {
    Message  string          `json:"message"`
    Code     int             `json:"code"`
    Failures []*IndexedError `json:"failures"`
}

func (e *IndexedVerificationFailureError) StatusCode() int {
func (e *IndexedErrorContainer) StatusCode() int {
    return e.Code
}

// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
// IndexedError represents an issue when processing a single indexed object e.g. an item in an array.
type IndexedError struct {
    Index   int    `json:"index"`
    Message string `json:"message"`
}

// BroadcastFailedError represents an error scenario where broadcasting a published message failed.
type BroadcastFailedError struct {
    msg string
    err error
}

// NewBroadcastFailedError creates a new instance of BroadcastFailedError.
func NewBroadcastFailedError(msg string, err error) *BroadcastFailedError {
    return &BroadcastFailedError{
        msg: msg,
        err: err,
    }
}

// Error returns the underlying error message.
func (e *BroadcastFailedError) Error() string {
    return fmt.Sprintf("could not broadcast %s: %s", e.msg, e.err.Error())
}
@@ -30,7 +30,7 @@ var (
    // errWSBlockNotFoundInEpoch is returned when a block is not found in the WS cache or DB within epoch.
    errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
    // ErrNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
    ErrNotDescendantOfFinalized = invalidBlock{error: errors.New("not descendant of finalized checkpoint")}
    ErrNotDescendantOfFinalized = errors.New("not descendant of finalized checkpoint")
    // ErrNotCheckpoint is returned when a given checkpoint is not a
    // checkpoint in any chain known to forkchoice
    ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
@@ -346,13 +346,24 @@ func (s *Service) notifyNewHeadEvent(
    if err != nil {
        return errors.Wrap(err, "could not check if node is optimistically synced")
    }

    parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
    if err != nil {
        return errors.Wrap(err, "could not obtain parent root in forkchoice")
    }
    parentSlot, err := s.RecentBlockSlot(parentRoot)
    if err != nil {
        return errors.Wrap(err, "could not obtain parent slot in forkchoice")
    }
    epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)

    s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
        Type: statefeed.NewHead,
        Data: &ethpbv1.EventHead{
            Slot:                      newHeadSlot,
            Block:                     newHeadRoot,
            State:                     newHeadStateRoot,
            EpochTransition:           slots.IsEpochStart(newHeadSlot),
            EpochTransition:           epochTransition,
            PreviousDutyDependentRoot: previousDutyDependentRoot[:],
            CurrentDutyDependentRoot:  currentDutyDependentRoot[:],
            ExecutionOptimistic:       isOptimistic,
@@ -162,6 +162,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
        require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
        newHeadStateRoot := [32]byte{2}
        newHeadRoot := [32]byte{3}
        st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
        require.NoError(t, err)
        require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
        require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
        events := notifier.ReceivedEvents()
        require.Equal(t, 1, len(events))
@@ -196,6 +199,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {

        newHeadStateRoot := [32]byte{2}
        newHeadRoot := [32]byte{3}
        st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
        require.NoError(t, err)
        require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
        err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
        require.NoError(t, err)
        events := notifier.ReceivedEvents()
@@ -213,6 +219,37 @@ func Test_notifyNewHeadEvent(t *testing.T) {
        }
        require.DeepSSZEqual(t, wanted, eventHead)
    })
    t.Run("epoch transition", func(t *testing.T) {
        bState, _ := util.DeterministicGenesisState(t, 10)
        srv := testServiceWithDB(t)
        srv.SetGenesisTime(time.Now())
        notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
        srv.originBlockRoot = [32]byte{1}
        st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
        require.NoError(t, err)
        require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
        newHeadStateRoot := [32]byte{2}
        newHeadRoot := [32]byte{3}
        st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
        require.NoError(t, err)
        require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
        newHeadSlot := params.BeaconConfig().SlotsPerEpoch
        require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
        events := notifier.ReceivedEvents()
        require.Equal(t, 1, len(events))

        eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
        require.Equal(t, true, ok)
        wanted := &ethpbv1.EventHead{
            Slot:                      newHeadSlot,
            Block:                     newHeadRoot[:],
            State:                     newHeadStateRoot[:],
            EpochTransition:           true,
            PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
            CurrentDutyDependentRoot:  srv.originBlockRoot[:],
        }
        require.DeepSSZEqual(t, wanted, eventHead)
    })
}

func TestRetrieveHead_ReadOnly(t *testing.T) {
@@ -72,7 +72,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
    if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
        defer s.processLightClientUpdates(cfg)
    }
    defer s.sendStateFeedOnBlock(cfg)

    defer reportProcessingTime(startTime)
    defer reportAttestationInclusion(cfg.roblock.Block())

@@ -93,6 +93,8 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
            return errors.Wrap(err, "could not set optimistic block to valid")
        }
    }

    defer s.sendStateFeedOnBlock(cfg) // only send event after successful insertion
    start := time.Now()
    cfg.headRoot, err = s.cfg.ForkChoiceStore.Head(ctx)
    if err != nil {
@@ -9,8 +9,10 @@ import (
    "testing"
    "time"

    mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
    statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
@@ -3147,6 +3149,159 @@ func TestIsDataAvailable(t *testing.T) {
    })
}

// Test_postBlockProcess_EventSending tests that block processed events are only sent
// when block processing succeeds according to the decision tree:
//
// Block Processing Flow:
// ├─ InsertNode FAILS (fork choice timeout)
// │  └─ blockProcessed = false ❌ NO EVENT
// │
// ├─ InsertNode succeeds
// │  ├─ handleBlockAttestations FAILS
// │  │  └─ blockProcessed = false ❌ NO EVENT
// │  │
// │  ├─ Block is NON-CANONICAL (not head)
// │  │  └─ blockProcessed = true ✅ SEND EVENT (Line 111)
// │  │
// │  ├─ Block IS CANONICAL (new head)
// │  │  ├─ getFCUArgs FAILS
// │  │  │  └─ blockProcessed = true ✅ SEND EVENT (Line 117)
// │  │  │
// │  │  ├─ sendFCU FAILS
// │  │  │  └─ blockProcessed = false ❌ NO EVENT
// │  │  │
// │  │  └─ Full success
// │  │     └─ blockProcessed = true ✅ SEND EVENT (Line 125)
func Test_postBlockProcess_EventSending(t *testing.T) {
    ctx := context.Background()

    // Helper to create a minimal valid block and state
    createTestBlockAndState := func(t *testing.T, slot primitives.Slot, parentRoot [32]byte) (consensusblocks.ROBlock, state.BeaconState) {
        st, _ := util.DeterministicGenesisState(t, 64)
        require.NoError(t, st.SetSlot(slot))

        stateRoot, err := st.HashTreeRoot(ctx)
        require.NoError(t, err)

        blk := util.NewBeaconBlock()
        blk.Block.Slot = slot
        blk.Block.ProposerIndex = 0
        blk.Block.ParentRoot = parentRoot[:]
        blk.Block.StateRoot = stateRoot[:]

        signed := util.HydrateSignedBeaconBlock(blk)
        roBlock, err := consensusblocks.NewSignedBeaconBlock(signed)
        require.NoError(t, err)

        roBlk, err := consensusblocks.NewROBlock(roBlock)
        require.NoError(t, err)
        return roBlk, st
    }

    tests := []struct {
        name          string
        setupService  func(*Service, [32]byte)
        expectEvent   bool
        expectError   bool
        errorContains string
    }{
        {
            name: "Block successfully processed - sends event",
            setupService: func(s *Service, blockRoot [32]byte) {
                // Default setup should work
            },
            expectEvent: true,
            expectError: false,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Create service with required options
            opts := testServiceOptsWithDB(t)
            service, err := NewService(ctx, opts...)
            require.NoError(t, err)

            // Initialize fork choice with genesis block
            st, _ := util.DeterministicGenesisState(t, 64)
            require.NoError(t, st.SetSlot(0))
            genesisBlock := util.NewBeaconBlock()
            genesisBlock.Block.StateRoot = bytesutil.PadTo([]byte("genesisState"), 32)
            signedGenesis := util.HydrateSignedBeaconBlock(genesisBlock)
            block, err := consensusblocks.NewSignedBeaconBlock(signedGenesis)
            require.NoError(t, err)
            genesisRoot, err := block.Block().HashTreeRoot()
            require.NoError(t, err)
            require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, block))
            require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
            require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, genesisRoot))

            genesisROBlock, err := consensusblocks.NewROBlock(block)
            require.NoError(t, err)
            require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, genesisROBlock))

            // Create test block and state with genesis as parent
            roBlock, postSt := createTestBlockAndState(t, 100, genesisRoot)

            // Apply additional service setup if provided
            if tt.setupService != nil {
                tt.setupService(service, roBlock.Root())
            }

            // Create post block process config
            cfg := &postBlockProcessConfig{
                ctx:            ctx,
                roblock:        roBlock,
                postState:      postSt,
                isValidPayload: true,
            }

            // Execute postBlockProcess
            err = service.postBlockProcess(cfg)

            // Check error expectation
            if tt.expectError {
                require.NotNil(t, err)
                if tt.errorContains != "" {
                    require.ErrorContains(t, tt.errorContains, err)
                }
            } else {
                require.NoError(t, err)
            }

            // Give a moment for deferred functions to execute
            time.Sleep(10 * time.Millisecond)

            // Check event expectation
            notifier := service.cfg.StateNotifier.(*mock.MockStateNotifier)
            events := notifier.ReceivedEvents()

            if tt.expectEvent {
                require.NotEqual(t, 0, len(events), "Expected event to be sent but none were received")

                // Verify it's a BlockProcessed event
                foundBlockProcessed := false
                for _, evt := range events {
                    if evt.Type == statefeed.BlockProcessed {
                        foundBlockProcessed = true
                        data, ok := evt.Data.(*statefeed.BlockProcessedData)
                        require.Equal(t, true, ok, "Event data should be BlockProcessedData")
                        require.Equal(t, roBlock.Root(), data.BlockRoot, "Event should contain correct block root")
                        break
                    }
                }
                require.Equal(t, true, foundBlockProcessed, "Expected BlockProcessed event type")
            } else {
                // For no-event cases, verify no BlockProcessed events were sent
                for _, evt := range events {
                    require.NotEqual(t, statefeed.BlockProcessed, evt.Type,
                        "Expected no BlockProcessed event but one was sent")
                }
            }
        })
    }
}

func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
    var l *util.TestLightClient
    switch v {
@@ -219,6 +219,9 @@ func (s *Service) validateExecutionAndConsensus(
    eg.Go(func() error {
        var err error
        postState, err = s.validateStateTransition(ctx, preState, block)
        if errors.Is(err, ErrNotDescendantOfFinalized) {
            return invalidBlock{error: err, root: block.Root()}
        }
        if err != nil {
            return errors.Wrap(err, "failed to validate consensus state transition function")
        }
@@ -49,13 +49,22 @@ func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.
|
||||
if err != nil {
|
||||
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
|
||||
}
|
||||
|
||||
if err := VerifySyncCommitteeSig(s, votedKeys, sync.SyncCommitteeSignature); err != nil {
|
||||
return nil, 0, errors.Wrap(err, "could not verify sync committee signature")
|
||||
}
|
||||
return s, reward, nil
|
||||
}
|
||||
|
||||
// ProcessSyncAggregateNoVerifySig processes the sync aggregate without verifying the sync committee signature.
|
||||
// This is useful in scenarios such as block reward calculation, where we can assume the data in the block is valid.
|
||||
func ProcessSyncAggregateNoVerifySig(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, uint64, error) {
|
||||
s, _, reward, err := processSyncAggregate(ctx, s, sync)
|
||||
if err != nil {
|
||||
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
|
||||
}
|
||||
return s, reward, nil
|
||||
}
|
||||
|
||||
// processSyncAggregate applies all the logic in the spec function `process_sync_aggregate` except
|
||||
// verifying the BLS signatures. It returns the modified beacons state, the list of validators'
|
||||
// public keys that voted (for future signature verification) and the proposer reward for including
|
||||
|
||||
@@ -53,9 +53,19 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
|
||||
SyncCommitteeSignature: aggregatedSig,
|
||||
}
|
||||
|
||||
// Verify that ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate have the same outcome.
|
||||
beaconStateNoVerifySig := beaconState.Copy()
|
||||
beaconStateNoVerifySig, rewardNoVerifySig, err := altair.ProcessSyncAggregateNoVerifySig(t.Context(), beaconStateNoVerifySig, syncAggregate)
|
||||
require.NoError(t, err)
|
||||
sszNoVerifySig, err := beaconStateNoVerifySig.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
var reward uint64
|
||||
beaconState, reward, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
|
||||
require.NoError(t, err)
|
||||
ssz, err := beaconState.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, sszNoVerifySig, ssz, "States resulting from ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate are not equal")
|
||||
assert.Equal(t, rewardNoVerifySig, reward, "Rewards resulting from ProcessSyncAggregateNoVerifySig and ProcessSyncAggregate are not equal")
|
||||
assert.Equal(t, uint64(72192), reward)
|
||||
|
||||
// Use a non-sync committee index to compare profitability.
|
||||
|
||||
@@ -55,6 +55,28 @@ func ProcessAttesterSlashings(
|
||||
return beaconState, nil
|
||||
}
|
||||
|
||||
// ProcessAttesterSlashingsNoVerify processes attester slashings without verifying them.
|
||||
// This is useful in scenarios such as block reward calculation, where we can assume the data
|
||||
// in the block is valid.
|
||||
func ProcessAttesterSlashingsNoVerify(
|
||||
ctx context.Context,
|
||||
beaconState state.BeaconState,
|
||||
slashings []ethpb.AttSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
if exitInfo == nil && len(slashings) > 0 {
|
||||
return nil, errors.New("exit info required to process attester slashings")
|
||||
}
|
||||
var err error
|
||||
for _, slashing := range slashings {
|
||||
beaconState, err = ProcessAttesterSlashingNoVerify(ctx, beaconState, slashing, exitInfo)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return beaconState, nil
|
||||
}
|
||||
|
||||
// ProcessAttesterSlashing processes individual attester slashing.
|
||||
func ProcessAttesterSlashing(
|
||||
ctx context.Context,
|
||||
@@ -68,6 +90,30 @@ func ProcessAttesterSlashing(
|
||||
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
|
||||
return nil, errors.Wrap(err, "could not verify attester slashing")
|
||||
}
|
||||
return processAttesterSlashing(ctx, beaconState, slashing, exitInfo)
|
||||
}
|
||||
|
||||
// ProcessAttesterSlashingNoVerify processes individual attester slashing without verifying it.
|
||||
// This is useful in scenarios such as block reward calculation, where we can assume the data
|
||||
// in the block is valid.
|
||||
func ProcessAttesterSlashingNoVerify(
|
||||
ctx context.Context,
|
||||
beaconState state.BeaconState,
|
||||
slashing ethpb.AttSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
if exitInfo == nil {
|
||||
return nil, errors.New("exit info is required to process attester slashing")
|
||||
}
|
||||
return processAttesterSlashing(ctx, beaconState, slashing, exitInfo)
|
||||
}
|
||||
|
||||
func processAttesterSlashing(
|
||||
ctx context.Context,
|
||||
beaconState state.BeaconState,
|
||||
slashing ethpb.AttSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
slashableIndices := SlashableAttesterIndices(slashing)
|
||||
sort.SliceStable(slashableIndices, func(i, j int) bool {
|
||||
return slashableIndices[i] < slashableIndices[j]
|
||||
|
||||
@@ -242,8 +242,18 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
|
||||
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
|
||||
require.NoError(t, tc.st.SetSlot(currentSlot))
|
||||
|
||||
// Verify that ProcessAttesterSlashingsNoVerify and ProcessAttesterSlashings have the same outcome.
|
||||
stNoVerify := tc.st.Copy()
|
||||
newStateNoVerify, err := blocks.ProcessAttesterSlashingsNoVerify(t.Context(), stNoVerify, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(stNoVerify))
|
||||
require.NoError(t, err)
|
||||
sszNoVerify, err := newStateNoVerify.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
|
||||
require.NoError(t, err)
|
||||
ssz, err := newState.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, sszNoVerify, ssz, "States resulting from ProcessAttesterSlashingsNoVerify and ProcessAttesterSlashings are not equal")
|
||||
|
||||
newRegistry := newState.Validators()
|
||||
|
||||
// Given the intersection of slashable indices is [1], only validator
|
||||
|
||||
@@ -6,3 +6,4 @@ var errNilSignedWithdrawalMessage = errors.New("nil SignedBLSToExecutionChange m
|
||||
var errNilWithdrawalMessage = errors.New("nil BLSToExecutionChange message")
|
||||
var errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
|
||||
var errInvalidWithdrawalCredentials = errors.New("withdrawal credentials do not match")
|
||||
var ErrInvalidSignature = errors.New("invalid signature")
|
||||
|
||||
@@ -64,6 +64,28 @@ func ProcessProposerSlashings(
|
||||
return beaconState, nil
|
||||
}
|
||||
|
||||
// ProcessProposerSlashingsNoVerify processes proposer slashings without verifying them.
|
||||
// This is useful in scenarios such as block reward calculation, where we can assume the data
|
||||
// in the block is valid.
|
||||
func ProcessProposerSlashingsNoVerify(
|
||||
ctx context.Context,
|
||||
beaconState state.BeaconState,
|
||||
slashings []*ethpb.ProposerSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
if exitInfo == nil && len(slashings) > 0 {
|
||||
return nil, errors.New("exit info required to process proposer slashings")
|
||||
}
|
||||
var err error
|
||||
for _, slashing := range slashings {
|
||||
beaconState, err = ProcessProposerSlashingNoVerify(ctx, beaconState, slashing, exitInfo)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return beaconState, nil
|
||||
}
|
||||
|
||||
// ProcessProposerSlashing processes individual proposer slashing.
|
||||
func ProcessProposerSlashing(
|
||||
ctx context.Context,
|
||||
@@ -71,16 +93,40 @@ func ProcessProposerSlashing(
|
||||
slashing *ethpb.ProposerSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
var err error
|
||||
if slashing == nil {
|
||||
return nil, errors.New("nil proposer slashings in block body")
|
||||
}
|
||||
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
|
||||
if err := VerifyProposerSlashing(beaconState, slashing); err != nil {
|
||||
return nil, errors.Wrap(err, "could not verify proposer slashing")
|
||||
}
|
||||
return processProposerSlashing(ctx, beaconState, slashing, exitInfo)
|
||||
}
|
||||
|
||||
// ProcessProposerSlashingNoVerify processes individual proposer slashing without verifying it.
|
||||
// This is useful in scenarios such as block reward calculation, where we can assume the data
|
||||
// in the block is valid.
|
||||
func ProcessProposerSlashingNoVerify(
|
||||
ctx context.Context,
|
||||
beaconState state.BeaconState,
|
||||
slashing *ethpb.ProposerSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
if slashing == nil {
|
||||
return nil, errors.New("nil proposer slashings in block body")
|
||||
}
|
||||
return processProposerSlashing(ctx, beaconState, slashing, exitInfo)
|
||||
}
|
||||
|
||||
func processProposerSlashing(
|
||||
ctx context.Context,
|
||||
beaconState state.BeaconState,
|
||||
slashing *ethpb.ProposerSlashing,
|
||||
exitInfo *validators.ExitInfo,
|
||||
) (state.BeaconState, error) {
|
||||
if exitInfo == nil {
|
||||
return nil, errors.New("exit info is required to process proposer slashing")
|
||||
}
|
||||
var err error
|
||||
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
|
||||
|
||||
@@ -172,8 +172,17 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
|
||||
block := util.NewBeaconBlock()
|
||||
block.Block.Body.ProposerSlashings = slashings
|
||||
|
||||
// Verify that ProcessProposerSlashingsNoVerify and ProcessProposerSlashings have the same outcome.
|
||||
beaconStateNoVerify := beaconState.Copy()
|
||||
newStateNoVerify, err := blocks.ProcessProposerSlashingsNoVerify(t.Context(), beaconStateNoVerify, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconStateNoVerify))
|
||||
require.NoError(t, err)
|
||||
sszNoVerify, err := newStateNoVerify.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
|
||||
require.NoError(t, err)
|
||||
ssz, err := newState.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, sszNoVerify, ssz, "States resulting from ProcessProposerSlashingsNoVerify and ProcessProposerSlashings are not equal")
|
||||
|
||||
newStateVals := newState.Validators()
|
||||
if newStateVals[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
|
||||
|
||||
@@ -114,9 +114,12 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
|
||||
}
|
||||
proposerPubKey := proposer.PublicKey
|
||||
sig := blk.Signature()
|
||||
return signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
|
||||
if err := signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
|
||||
return blkRoot, nil
|
||||
})
|
||||
}); err != nil {
|
||||
return ErrInvalidSignature
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.
|
||||
|
||||
@@ -89,3 +89,36 @@ func TestVerifyBlockSignatureUsingCurrentFork(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
assert.NoError(t, blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot))
|
||||
}
|
||||
|
||||
func TestVerifyBlockSignatureUsingCurrentFork_InvalidSignature(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
bCfg := params.BeaconConfig()
|
||||
bCfg.AltairForkEpoch = 100
|
||||
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = 100
|
||||
params.OverrideBeaconConfig(bCfg)
|
||||
bState, keys := util.DeterministicGenesisState(t, 100)
|
||||
altairBlk := util.NewBeaconBlockAltair()
|
||||
altairBlk.Block.ProposerIndex = 0
|
||||
altairBlk.Block.Slot = params.BeaconConfig().SlotsPerEpoch * 100
|
||||
blkRoot, err := altairBlk.Block.HashTreeRoot()
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Sign with wrong key (proposer index 0, but using key 1)
|
||||
fData := ðpb.Fork{
|
||||
Epoch: 100,
|
||||
CurrentVersion: params.BeaconConfig().AltairForkVersion,
|
||||
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
|
||||
}
|
||||
domain, err := signing.Domain(fData, 100, params.BeaconConfig().DomainBeaconProposer, bState.GenesisValidatorsRoot())
|
||||
assert.NoError(t, err)
|
||||
rt, err := signing.ComputeSigningRoot(altairBlk.Block, domain)
|
||||
assert.NoError(t, err)
|
||||
wrongSig := keys[1].Sign(rt[:]).Marshal()
|
||||
altairBlk.Signature = wrongSig
|
||||
|
||||
wsb, err := consensusblocks.NewSignedBeaconBlock(altairBlk)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot)
|
||||
require.ErrorIs(t, err, blocks.ErrInvalidSignature, "Expected ErrInvalidSignature for invalid signature")
|
||||
}
|
||||
|
||||
@@ -43,6 +43,13 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
|
||||
return ErrNoKzgCommitments
|
||||
}
|
||||
|
||||
// A sidecar with more commitments than the max blob count for this block is invalid.
|
||||
slot := sidecar.Slot()
|
||||
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
|
||||
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
|
||||
return ErrTooManyCommitments
|
||||
}
|
||||
|
||||
// The column length must be equal to the number of commitments/proofs.
|
||||
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
|
||||
return ErrMismatchLength
|
||||
|
||||
@@ -18,38 +18,46 @@ import (
|
||||
)
|
||||
|
||||
func TestVerifyDataColumnSidecar(t *testing.T) {
|
||||
t.Run("index too large", func(t *testing.T) {
|
||||
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
|
||||
err := peerdas.VerifyDataColumnSidecar(roSidecar)
|
||||
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
|
||||
})
|
||||
testCases := []struct {
|
||||
name string
|
||||
index uint64
|
||||
blobCount int
|
||||
commitmentCount int
|
||||
proofCount int
|
||||
maxBlobsPerBlock uint64
|
||||
expectedError error
|
||||
}{
|
||||
{name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
|
||||
{name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
|
||||
{name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
|
||||
{name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
|
||||
{name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
|
||||
{name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
|
||||
}
|
||||
|
||||
t.Run("no commitments", func(t *testing.T) {
|
||||
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
|
||||
err := peerdas.VerifyDataColumnSidecar(roSidecar)
|
||||
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
|
||||
})
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig()
|
||||
cfg.FuluForkEpoch = 0
|
||||
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
t.Run("KZG commitments size mismatch", func(t *testing.T) {
|
||||
kzgCommitments := make([][]byte, 1)
|
||||
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
|
||||
err := peerdas.VerifyDataColumnSidecar(roSidecar)
|
||||
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
|
||||
})
|
||||
column := make([][]byte, tc.blobCount)
|
||||
kzgCommitments := make([][]byte, tc.commitmentCount)
|
||||
kzgProof := make([][]byte, tc.proofCount)
|
||||
|
||||
t.Run("KZG proofs size mismatch", func(t *testing.T) {
|
||||
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
|
||||
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
|
||||
err := peerdas.VerifyDataColumnSidecar(roSidecar)
|
||||
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
|
||||
})
|
||||
roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
|
||||
err := peerdas.VerifyDataColumnSidecar(roSidecar)
|
||||
|
||||
t.Run("nominal", func(t *testing.T) {
|
||||
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
|
||||
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
|
||||
err := peerdas.VerifyDataColumnSidecar(roSidecar)
|
||||
require.NoError(t, err)
|
||||
})
|
||||
if tc.expectedError != nil {
|
||||
require.ErrorIs(t, err, tc.expectedError)
|
||||
return
|
||||
}
|
||||
|
||||
require.NoError(t, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
|
||||
|
||||
@@ -257,7 +257,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
|
||||
return nil, errors.Wrap(err, "compute cells")
|
||||
}
|
||||
|
||||
kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
|
||||
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
|
||||
for _, kzgProofBytes := range blobAndProof.KzgProofs {
|
||||
if len(kzgProofBytes) != kzg.BytesPerProof {
|
||||
return nil, errors.New("wrong KZG proof size - should never happen")
|
||||
|
||||
@@ -441,6 +441,7 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
|
||||
for i := range blobCount {
|
||||
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
|
||||
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
|
||||
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), cap(actualCellsAndProofs[i].Proofs))
|
||||
|
||||
// Compare cells
|
||||
for j, expectedCell := range expectedCellsAndProofs[i].Cells {
|
||||
|
||||
@@ -1032,5 +1032,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {
|
||||
|
||||
// period computes the period of a given epoch.
|
||||
func period(epoch primitives.Epoch) uint64 {
|
||||
return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
|
||||
return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
|
||||
}
|
||||
|
||||
@@ -35,8 +35,9 @@ func (s DataColumnStorageSummary) HasIndex(index uint64) bool {
|
||||
|
||||
// HasAtLeastOneIndex returns true if at least one of the DataColumnSidecars at the given indices is available in the filesystem.
|
||||
func (s DataColumnStorageSummary) HasAtLeastOneIndex(indices []uint64) bool {
|
||||
size := uint64(len(s.mask))
|
||||
for _, index := range indices {
|
||||
if s.mask[index] {
|
||||
if index < size && s.mask[index] {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
@@ -25,11 +25,11 @@ func TestHasIndex(t *testing.T) {
|
||||
func TestHasAtLeastOneIndex(t *testing.T) {
|
||||
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})
|
||||
|
||||
hasAtLeastOneIndex := summary.HasAtLeastOneIndex([]uint64{3, 1, 2})
|
||||
require.Equal(t, true, hasAtLeastOneIndex)
|
||||
actual := summary.HasAtLeastOneIndex([]uint64{3, 1, fieldparams.NumberOfColumns, 2})
|
||||
require.Equal(t, true, actual)
|
||||
|
||||
hasAtLeastOneIndex = summary.HasAtLeastOneIndex([]uint64{3, 4, 2})
|
||||
require.Equal(t, false, hasAtLeastOneIndex)
|
||||
actual = summary.HasAtLeastOneIndex([]uint64{3, 4, fieldparams.NumberOfColumns, 2})
|
||||
require.Equal(t, false, actual)
|
||||
}
|
||||
|
||||
func TestCount(t *testing.T) {
|
||||
|
||||
@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts
|
||||
|
||||
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
|
||||
opts = append(opts,
|
||||
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
|
||||
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
|
||||
WithDataColumnFs(fs),
|
||||
)
|
||||
|
||||
|
||||
@@ -58,7 +58,6 @@ go_library(
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//container/slice:go_default_library",
|
||||
"//encoding/bytesutil:go_default_library",
|
||||
"//genesis:go_default_library",
|
||||
"//monitoring/prometheus:go_default_library",
|
||||
"//monitoring/tracing:go_default_library",
|
||||
|
||||
@@ -60,7 +60,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/container/slice"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v6/genesis"
|
||||
"github.com/OffchainLabs/prysm/v6/monitoring/prometheus"
|
||||
"github.com/OffchainLabs/prysm/v6/runtime"
|
||||
@@ -598,22 +597,7 @@ func (b *BeaconNode) startStateGen(ctx context.Context, bfs coverage.AvailableBl
|
||||
return err
|
||||
}
|
||||
|
||||
r := bytesutil.ToBytes32(cp.Root)
|
||||
// Consider edge case where finalized root are zeros instead of genesis root hash.
|
||||
if r == params.BeaconConfig().ZeroHash {
|
||||
genesisBlock, err := b.db.GenesisBlock(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if genesisBlock != nil && !genesisBlock.IsNil() {
|
||||
r, err = genesisBlock.Block().HashTreeRoot()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, r)
|
||||
b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, [32]byte(cp.Root))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -11,14 +11,15 @@ import (
|
||||
|
||||
var (
|
||||
knownAgentVersions = []string{
|
||||
"erigon/caplin",
|
||||
"grandine",
|
||||
"js-libp2p",
|
||||
"lighthouse",
|
||||
"lodestar",
|
||||
"nimbus",
|
||||
"prysm",
|
||||
"teku",
|
||||
"lodestar",
|
||||
"js-libp2p",
|
||||
"rust-libp2p",
|
||||
"erigon/caplin",
|
||||
}
|
||||
p2pPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "p2p_peer_count",
|
||||
|
||||
@@ -12,6 +12,7 @@ go_library(
|
||||
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//api/server:go_default_library",
|
||||
"//beacon-chain/blockchain:go_default_library",
|
||||
"//beacon-chain/cache:go_default_library",
|
||||
"//beacon-chain/core/altair:go_default_library",
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/api/server"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
|
||||
@@ -36,24 +37,6 @@ import (
|
||||
|
||||
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
|
||||
|
||||
// AggregateBroadcastFailedError represents an error scenario where
|
||||
// broadcasting an aggregate selection proof failed.
|
||||
type AggregateBroadcastFailedError struct {
|
||||
err error
|
||||
}
|
||||
|
||||
// NewAggregateBroadcastFailedError creates a new error instance.
|
||||
func NewAggregateBroadcastFailedError(err error) AggregateBroadcastFailedError {
|
||||
return AggregateBroadcastFailedError{
|
||||
err: err,
|
||||
}
|
||||
}
|
||||
|
||||
// Error returns the underlying error message.
|
||||
func (e *AggregateBroadcastFailedError) Error() string {
|
||||
return fmt.Sprintf("could not broadcast signed aggregated attestation: %s", e.err.Error())
|
||||
}
|
||||
|
||||
// ComputeValidatorPerformance reports the validator's latest balance along with other important metrics on
|
||||
// rewards and penalties throughout its lifecycle in the beacon chain.
|
||||
func (s *Service) ComputeValidatorPerformance(
|
||||
@@ -360,7 +343,8 @@ func (s *Service) SubmitSignedContributionAndProof(
|
||||
// Wait for p2p broadcast to complete and return the first error (if any)
|
||||
err := errs.Wait()
|
||||
if err != nil {
|
||||
return &RpcError{Err: err, Reason: Internal}
|
||||
log.WithError(err).Debug("Could not broadcast signed contribution and proof")
|
||||
return &RpcError{Err: server.NewBroadcastFailedError("SignedContributionAndProof", err), Reason: Internal}
|
||||
}
|
||||
|
||||
s.OperationNotifier.OperationFeed().Send(&feed.Event{
|
||||
@@ -411,7 +395,8 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
|
||||
}
|
||||
|
||||
if err := s.Broadcaster.Broadcast(ctx, agg); err != nil {
|
||||
return &RpcError{Err: &AggregateBroadcastFailedError{err: err}, Reason: Internal}
|
||||
log.WithError(err).Debug("Could not broadcast signed aggregate att and proof")
|
||||
return &RpcError{Err: server.NewBroadcastFailedError("SignedAggregateAttAndProof", err), Reason: Internal}
|
||||
}
|
||||
|
||||
if logrus.GetLevel() >= logrus.DebugLevel {
|
||||
|
||||
@@ -6,8 +6,6 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/api"
|
||||
@@ -31,6 +29,7 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/runtime/version"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
const broadcastBLSChangesRateLimit = 128
|
||||
@@ -200,22 +199,23 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(failedBroadcasts) > 0 {
|
||||
httputil.HandleError(
|
||||
w,
|
||||
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
|
||||
http.StatusInternalServerError,
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
if len(attFailures) > 0 {
|
||||
failuresErr := &server.IndexedVerificationFailureError{
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusBadRequest,
|
||||
Message: "One or more attestations failed validation",
|
||||
Message: server.ErrIndexedValidationFail,
|
||||
Failures: attFailures,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
if len(failedBroadcasts) > 0 {
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusInternalServerError,
|
||||
Message: server.ErrIndexedBroadcastFail,
|
||||
Failures: failedBroadcasts,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
@@ -247,8 +247,8 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
var attFailures []*server.IndexedVerificationFailure
|
||||
var failedBroadcasts []string
|
||||
var attFailures []*server.IndexedError
|
||||
var failedBroadcasts []*server.IndexedError
|
||||
|
||||
if v >= version.Electra {
|
||||
attFailures, failedBroadcasts, err = s.handleAttestationsElectra(ctx, req.Data)
|
||||
@@ -260,29 +260,30 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
if len(failedBroadcasts) > 0 {
|
||||
httputil.HandleError(
|
||||
w,
|
||||
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
|
||||
http.StatusInternalServerError,
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
if len(attFailures) > 0 {
|
||||
failuresErr := &server.IndexedVerificationFailureError{
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusBadRequest,
|
||||
Message: "One or more attestations failed validation",
|
||||
Message: server.ErrIndexedValidationFail,
|
||||
Failures: attFailures,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
if len(failedBroadcasts) > 0 {
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusInternalServerError,
|
||||
Message: server.ErrIndexedBroadcastFail,
|
||||
Failures: failedBroadcasts,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Server) handleAttestationsElectra(
|
||||
ctx context.Context,
|
||||
data json.RawMessage,
|
||||
) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
|
||||
) (attFailures []*server.IndexedError, failedBroadcasts []*server.IndexedError, err error) {
|
||||
var sourceAttestations []*structs.SingleAttestation
|
||||
currentEpoch := slots.ToEpoch(s.TimeFetcher.CurrentSlot())
|
||||
if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
|
||||
@@ -301,14 +302,14 @@ func (s *Server) handleAttestationsElectra(
|
||||
for i, sourceAtt := range sourceAttestations {
|
||||
att, err := sourceAtt.ToConsensus()
|
||||
if err != nil {
|
||||
attFailures = append(attFailures, &server.IndexedVerificationFailure{
|
||||
attFailures = append(attFailures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
|
||||
attFailures = append(attFailures, &server.IndexedVerificationFailure{
|
||||
attFailures = append(attFailures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Incorrect attestation signature: " + err.Error(),
|
||||
})
|
||||
@@ -317,6 +318,13 @@ func (s *Server) handleAttestationsElectra(
|
||||
validAttestations = append(validAttestations, att)
|
||||
}
|
||||
|
||||
// We store the error for the first failed broadcast and use it in the log message in case
|
||||
// there are broadcast issues. Having a single log at the end instead of logging
|
||||
// for every failed broadcast prevents log noise in case there are many failures.
|
||||
// Even though we only retain the first error, there is a very good chance that all
|
||||
// broadcasts fail for the same reason, so this should be sufficient in most cases.
|
||||
var broadcastErr error
|
||||
|
||||
for i, singleAtt := range validAttestations {
|
||||
s.OperationNotifier.OperationFeed().Send(&feed.Event{
|
||||
Type: operation.SingleAttReceived,
|
||||
@@ -338,31 +346,45 @@ func (s *Server) handleAttestationsElectra(
|
||||
wantedEpoch := slots.ToEpoch(att.Data.Slot)
|
||||
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
|
||||
if err != nil {
|
||||
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
|
||||
continue
|
||||
return nil, nil, errors.Wrap(err, "could not get head validator indices")
|
||||
}
|
||||
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
|
||||
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
|
||||
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
|
||||
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
|
||||
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: server.NewBroadcastFailedError("SingleAttestation", err).Error(),
|
||||
})
|
||||
if broadcastErr == nil {
|
||||
broadcastErr = err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalAttestationPool {
|
||||
if err = s.AttestationCache.Add(att); err != nil {
|
||||
log.WithError(err).Error("could not save attestation")
|
||||
log.WithError(err).Error("Could not save attestation")
|
||||
}
|
||||
} else {
|
||||
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
|
||||
log.WithError(err).Error("could not save attestation")
|
||||
log.WithError(err).Error("Could not save attestation")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(failedBroadcasts) > 0 {
|
||||
log.WithFields(logrus.Fields{
|
||||
"failedCount": len(failedBroadcasts),
|
||||
"totalCount": len(validAttestations),
|
||||
}).WithError(broadcastErr).Error("Some attestations failed to be broadcast")
|
||||
}
|
||||
|
||||
return attFailures, failedBroadcasts, nil
|
||||
}
|
||||
|
||||
func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
|
||||
func (s *Server) handleAttestations(
|
||||
ctx context.Context,
|
||||
data json.RawMessage,
|
||||
) (attFailures []*server.IndexedError, failedBroadcasts []*server.IndexedError, err error) {
|
||||
var sourceAttestations []*structs.Attestation
|
||||
|
||||
if slots.ToEpoch(s.TimeFetcher.CurrentSlot()) >= params.BeaconConfig().ElectraForkEpoch {
|
||||
@@ -381,14 +403,14 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
|
||||
for i, sourceAtt := range sourceAttestations {
|
||||
att, err := sourceAtt.ToConsensus()
|
||||
if err != nil {
|
||||
attFailures = append(attFailures, &server.IndexedVerificationFailure{
|
||||
attFailures = append(attFailures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
|
||||
attFailures = append(attFailures, &server.IndexedVerificationFailure{
|
||||
attFailures = append(attFailures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Incorrect attestation signature: " + err.Error(),
|
||||
})
|
||||
@@ -397,6 +419,13 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
|
||||
validAttestations = append(validAttestations, att)
|
||||
}
|
||||
|
||||
// We store the error for the first failed broadcast and use it in the log message in case
|
||||
// there are broadcast issues. Having a single log at the end instead of logging
|
||||
// for every failed broadcast prevents log noise in case there are many failures.
|
||||
// Even though we only retain the first error, there is a very good chance that all
|
||||
// broadcasts fail for the same reason, so this should be sufficient in most cases.
|
||||
var broadcastErr error
|
||||
|
||||
for i, att := range validAttestations {
|
||||
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
|
||||
// of a received unaggregated attestation.
|
||||
@@ -413,32 +442,43 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
|
||||
wantedEpoch := slots.ToEpoch(att.Data.Slot)
|
||||
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
|
||||
if err != nil {
|
||||
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
|
||||
continue
|
||||
return nil, nil, errors.Wrap(err, "could not get head validator indices")
|
||||
}
|
||||
|
||||
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
|
||||
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
|
||||
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
|
||||
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
|
||||
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: server.NewBroadcastFailedError("Attestation", err).Error(),
|
||||
})
|
||||
if broadcastErr == nil {
|
||||
broadcastErr = err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalAttestationPool {
|
||||
if err = s.AttestationCache.Add(att); err != nil {
|
||||
log.WithError(err).Error("could not save attestation")
|
||||
log.WithError(err).Error("Could not save attestation")
|
||||
}
|
||||
} else if att.IsAggregated() {
|
||||
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
|
||||
log.WithError(err).Error("could not save aggregated attestation")
|
||||
log.WithError(err).Error("Could not save aggregated attestation")
|
||||
}
|
||||
} else {
|
||||
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
|
||||
log.WithError(err).Error("could not save unaggregated attestation")
|
||||
log.WithError(err).Error("Could not save unaggregated attestation")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(failedBroadcasts) > 0 {
|
||||
log.WithFields(logrus.Fields{
|
||||
"failedCount": len(failedBroadcasts),
|
||||
"totalCount": len(validAttestations),
|
||||
}).WithError(broadcastErr).Error("Some attestations failed to be broadcast")
|
||||
}
|
||||
|
||||
return attFailures, failedBroadcasts, nil
|
||||
}
|
||||
|
||||
@@ -541,11 +581,11 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
|
||||
}
|
||||
|
||||
var validMessages []*eth.SyncCommitteeMessage
|
||||
var msgFailures []*server.IndexedVerificationFailure
|
||||
var msgFailures []*server.IndexedError
|
||||
for i, sourceMsg := range req.Data {
|
||||
msg, err := sourceMsg.ToConsensus()
|
||||
if err != nil {
|
||||
msgFailures = append(msgFailures, &server.IndexedVerificationFailure{
|
||||
msgFailures = append(msgFailures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not convert request message to consensus message: " + err.Error(),
|
||||
})
|
||||
@@ -562,7 +602,7 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
|
||||
}
|
||||
|
||||
if len(msgFailures) > 0 {
|
||||
failuresErr := &server.IndexedVerificationFailureError{
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusBadRequest,
|
||||
Message: "One or more messages failed validation",
|
||||
Failures: msgFailures,
|
||||
@@ -581,7 +621,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v", err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
var failures []*server.IndexedVerificationFailure
|
||||
var failures []*server.IndexedError
|
||||
var toBroadcast []*eth.SignedBLSToExecutionChange
|
||||
|
||||
var req []*structs.SignedBLSToExecutionChange
|
||||
@@ -602,7 +642,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
|
||||
for i, change := range req {
|
||||
sbls, err := change.ToConsensus()
|
||||
if err != nil {
|
||||
failures = append(failures, &server.IndexedVerificationFailure{
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Unable to decode SignedBLSToExecutionChange: " + err.Error(),
|
||||
})
|
||||
@@ -610,14 +650,14 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
|
||||
}
|
||||
_, err = blocks.ValidateBLSToExecutionChange(st, sbls)
|
||||
if err != nil {
|
||||
failures = append(failures, &server.IndexedVerificationFailure{
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not validate SignedBLSToExecutionChange: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
if err := blocks.VerifyBLSChangeSignature(st, sbls); err != nil {
|
||||
failures = append(failures, &server.IndexedVerificationFailure{
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not validate signature: " + err.Error(),
|
||||
})
|
||||
@@ -636,9 +676,9 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
|
||||
}
|
||||
go s.broadcastBLSChanges(context.Background(), toBroadcast)
|
||||
if len(failures) > 0 {
|
||||
failuresErr := &server.IndexedVerificationFailureError{
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusBadRequest,
|
||||
Message: "One or more BLSToExecutionChange failed validation",
|
||||
Message: server.ErrIndexedValidationFail,
|
||||
Failures: failures,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
@@ -655,18 +695,18 @@ func (s *Server) broadcastBLSBatch(ctx context.Context, ptr *[]*eth.SignedBLSToE
|
||||
}
|
||||
st, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("could not get head state")
|
||||
log.WithError(err).Error("Could not get head state")
|
||||
return
|
||||
}
|
||||
for _, ch := range (*ptr)[:limit] {
|
||||
if ch != nil {
|
||||
_, err := blocks.ValidateBLSToExecutionChange(st, ch)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("could not validate BLS to execution change")
|
||||
log.WithError(err).Error("Could not validate BLS to execution change")
|
||||
continue
|
||||
}
|
||||
if err := s.Broadcaster.Broadcast(ctx, ch); err != nil {
|
||||
log.WithError(err).Error("could not broadcast BLS to execution changes.")
|
||||
log.WithError(err).Error("Could not broadcast BLS to execution changes.")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -638,7 +638,7 @@ func TestSubmitAttestations(t *testing.T) {
|
||||
|
||||
s.SubmitAttestations(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &server.IndexedVerificationFailureError{}
|
||||
e := &server.IndexedErrorContainer{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
require.Equal(t, 1, len(e.Failures))
|
||||
@@ -772,7 +772,7 @@ func TestSubmitAttestations(t *testing.T) {
|
||||
|
||||
s.SubmitAttestationsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &server.IndexedVerificationFailureError{}
|
||||
e := &server.IndexedErrorContainer{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
require.Equal(t, 1, len(e.Failures))
|
||||
@@ -873,7 +873,7 @@ func TestSubmitAttestations(t *testing.T) {
|
||||
|
||||
s.SubmitAttestationsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &server.IndexedVerificationFailureError{}
|
||||
e := &server.IndexedErrorContainer{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
require.Equal(t, 1, len(e.Failures))
|
||||
@@ -1538,7 +1538,7 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
|
||||
s.SubmitBLSToExecutionChanges(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
|
||||
require.StringContains(t, "One or more BLSToExecutionChange failed validation", writer.Body.String())
|
||||
require.StringContains(t, "One or more messages failed validation", writer.Body.String())
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)
|
||||
|
||||
|
||||
@@ -151,7 +151,7 @@ func (s *Server) SyncCommitteeRewards(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
}
|
||||
|
||||
_, proposerReward, err := altair.ProcessSyncAggregate(r.Context(), st, sa)
|
||||
_, proposerReward, err := altair.ProcessSyncAggregateNoVerifySig(r.Context(), st, sa)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get sync aggregate rewards: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
|
||||
@@ -73,7 +73,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
|
||||
// ExitInformation is expensive to compute, only do it if we need it.
|
||||
exitInfo = validators.ExitInformation(st)
|
||||
}
|
||||
st, err = coreblocks.ProcessAttesterSlashings(ctx, st, blk.Body().AttesterSlashings(), exitInfo)
|
||||
st, err = coreblocks.ProcessAttesterSlashingsNoVerify(ctx, st, blk.Body().AttesterSlashings(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, &httputil.DefaultJsonError{
|
||||
Message: "Could not get attester slashing rewards: " + err.Error(),
|
||||
@@ -87,7 +87,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
|
||||
Code: http.StatusInternalServerError,
|
||||
}
|
||||
}
|
||||
st, err = coreblocks.ProcessProposerSlashings(ctx, st, blk.Body().ProposerSlashings(), exitInfo)
|
||||
st, err = coreblocks.ProcessProposerSlashingsNoVerify(ctx, st, blk.Body().ProposerSlashings(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, &httputil.DefaultJsonError{
|
||||
Message: "Could not get proposer slashing rewards: " + err.Error(),
|
||||
@@ -109,7 +109,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
|
||||
}
|
||||
}
|
||||
var syncCommitteeReward uint64
|
||||
_, syncCommitteeReward, err = altair.ProcessSyncAggregate(ctx, st, sa)
|
||||
_, syncCommitteeReward, err = altair.ProcessSyncAggregateNoVerifySig(ctx, st, sa)
|
||||
if err != nil {
|
||||
return nil, &httputil.DefaultJsonError{
|
||||
Message: "Could not get sync aggregate rewards: " + err.Error(),
|
||||
|
||||
@@ -12,6 +12,7 @@ go_library(
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//api:go_default_library",
|
||||
"//api/server:go_default_library",
|
||||
"//api/server/structs:go_default_library",
|
||||
"//beacon-chain/blockchain:go_default_library",
|
||||
"//beacon-chain/builder:go_default_library",
|
||||
|
||||
@@ -14,6 +14,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/api"
|
||||
"github.com/OffchainLabs/prysm/v6/api/server"
|
||||
"github.com/OffchainLabs/prysm/v6/api/server/structs"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
|
||||
@@ -268,22 +269,61 @@ func (s *Server) SubmitContributionAndProofs(w http.ResponseWriter, r *http.Requ
|
||||
return
|
||||
}
|
||||
|
||||
for _, item := range reqData {
|
||||
var failures []*server.IndexedError
|
||||
var failedBroadcasts []*server.IndexedError
|
||||
|
||||
for i, item := range reqData {
|
||||
var contribution structs.SignedContributionAndProof
|
||||
if err := json.Unmarshal(item, &contribution); err != nil {
|
||||
httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not unmarshal message: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
consensusItem, err := contribution.ToConsensus()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not convert contribution to consensus format: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not convert request contribution to consensus contribution: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
if rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem); rpcError != nil {
|
||||
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
|
||||
return
|
||||
|
||||
rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem)
|
||||
if rpcError != nil {
|
||||
var broadcastFailedErr *server.BroadcastFailedError
|
||||
if errors.As(rpcError.Err, &broadcastFailedErr) {
|
||||
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: rpcError.Err.Error(),
|
||||
})
|
||||
continue
|
||||
} else {
|
||||
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(failures) > 0 {
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusBadRequest,
|
||||
Message: server.ErrIndexedValidationFail,
|
||||
Failures: failures,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
if len(failedBroadcasts) > 0 {
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusInternalServerError,
|
||||
Message: server.ErrIndexedBroadcastFail,
|
||||
Failures: failedBroadcasts,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// Deprecated: use SubmitAggregateAndProofsV2 instead
|
||||
@@ -322,8 +362,8 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request
|
||||
}
|
||||
rpcError := s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
|
||||
if rpcError != nil {
|
||||
var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
|
||||
ok := errors.As(rpcError.Err, &aggregateBroadcastFailedError)
|
||||
var broadcastFailedErr *server.BroadcastFailedError
|
||||
ok := errors.As(rpcError.Err, &broadcastFailedErr)
|
||||
if ok {
|
||||
broadcastFailed = true
|
||||
} else {
|
||||
@@ -368,49 +408,83 @@ func (s *Server) SubmitAggregateAndProofsV2(w http.ResponseWriter, r *http.Reque
|
||||
return
|
||||
}
|
||||
|
||||
broadcastFailed := false
|
||||
var failures []*server.IndexedError
|
||||
var failedBroadcasts []*server.IndexedError
|
||||
|
||||
var rpcError *core.RpcError
|
||||
for _, raw := range reqData {
|
||||
for i, raw := range reqData {
|
||||
if v >= version.Electra {
|
||||
var signedAggregate structs.SignedAggregateAttestationAndProofElectra
|
||||
err = json.Unmarshal(raw, &signedAggregate)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not parse message: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
consensusItem, err := signedAggregate.ToConsensus()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not convert request aggregate to consensus aggregate: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
|
||||
} else {
|
||||
var signedAggregate structs.SignedAggregateAttestationAndProof
|
||||
err = json.Unmarshal(raw, &signedAggregate)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not parse message: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
consensusItem, err := signedAggregate.ToConsensus()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
failures = append(failures, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: "Could not convert request aggregate to consensus aggregate: " + err.Error(),
|
||||
})
|
||||
continue
|
||||
}
|
||||
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
|
||||
}
|
||||
|
||||
if rpcError != nil {
|
||||
var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
|
||||
if errors.As(rpcError.Err, &aggregateBroadcastFailedError) {
|
||||
broadcastFailed = true
|
||||
var broadcastFailedErr *server.BroadcastFailedError
|
||||
if errors.As(rpcError.Err, &broadcastFailedErr) {
|
||||
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
|
||||
Index: i,
|
||||
Message: rpcError.Err.Error(),
|
||||
})
|
||||
continue
|
||||
} else {
|
||||
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
if broadcastFailed {
|
||||
httputil.HandleError(w, "Could not broadcast one or more signed aggregated attestations", http.StatusInternalServerError)
|
||||
|
||||
if len(failures) > 0 {
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusBadRequest,
|
||||
Message: server.ErrIndexedValidationFail,
|
||||
Failures: failures,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
if len(failedBroadcasts) > 0 {
|
||||
failuresErr := &server.IndexedErrorContainer{
|
||||
Code: http.StatusInternalServerError,
|
||||
Message: server.ErrIndexedBroadcastFail,
|
||||
Failures: failedBroadcasts,
|
||||
}
|
||||
httputil.WriteError(w, failuresErr)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -3,6 +3,7 @@ package lookup
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"math"
|
||||
"strconv"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
|
||||
@@ -283,9 +284,13 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
|
||||
return make([]*blocks.VerifiedROBlob, 0), nil
|
||||
}
|
||||
|
||||
fuluForkSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
|
||||
if err != nil {
|
||||
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
|
||||
// Compute the first Fulu slot.
|
||||
fuluForkSlot := primitives.Slot(math.MaxUint64)
|
||||
if fuluForkEpoch := params.BeaconConfig().FuluForkEpoch; fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
|
||||
fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
|
||||
if err != nil {
|
||||
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
|
||||
}
|
||||
}
|
||||
|
||||
// Convert versioned hashes to indices if provided
|
||||
|
||||
@@ -587,6 +587,51 @@ func TestGetBlob(t *testing.T) {
|
||||
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
|
||||
require.StringContains(t, "not supported before", rpcErr.Err.Error())
|
||||
})
|
||||
|
||||
t.Run("fulu fork epoch not set (MaxUint64)", func(t *testing.T) {
|
||||
// Setup with Deneb fork enabled but Fulu fork epoch set to MaxUint64 (not set/far future)
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.DenebForkEpoch = 1
|
||||
cfg.FuluForkEpoch = primitives.Epoch(math.MaxUint64) // Not set / far future
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
// Create and save Deneb block and blob sidecars
|
||||
denebSlot := util.SlotAtEpoch(t, cfg.DenebForkEpoch)
|
||||
_, tempBlobStorage := filesystem.NewEphemeralBlobStorageAndFs(t)
|
||||
|
||||
denebBlockWithBlobs, denebBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [fieldparams.RootLength]byte{}, denebSlot, 2, util.WithDenebSlot(denebSlot))
|
||||
denebBlockRoot := denebBlockWithBlobs.Root()
|
||||
|
||||
verifiedDenebBlobs := verification.FakeVerifySliceForTest(t, denebBlobSidecars)
|
||||
for i := range verifiedDenebBlobs {
|
||||
err := tempBlobStorage.Save(verifiedDenebBlobs[i])
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
err := db.SaveBlock(t.Context(), denebBlockWithBlobs)
|
||||
require.NoError(t, err)
|
||||
|
||||
blocker := &BeaconDbBlocker{
|
||||
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
|
||||
Genesis: time.Now(),
|
||||
},
|
||||
BeaconDB: db,
|
||||
BlobStorage: tempBlobStorage,
|
||||
}
|
||||
|
||||
// Should successfully retrieve blobs even when FuluForkEpoch is not set
|
||||
retrievedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]))
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, 2, len(retrievedBlobs))
|
||||
|
||||
// Verify blob content matches
|
||||
for i, retrievedBlob := range retrievedBlobs {
|
||||
require.NotNil(t, retrievedBlob.BlobSidecar)
|
||||
require.DeepEqual(t, denebBlobSidecars[i].Blob, retrievedBlob.Blob)
|
||||
require.DeepEqual(t, denebBlobSidecars[i].KzgCommitment, retrievedBlob.KzgCommitment)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestBlobs_CommitmentOrdering(t *testing.T) {
|
||||
|
||||
@@ -316,6 +316,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
}
if err != nil {
if errors.Is(err, builderapi.ErrBadGateway) && block.IsBlinded() {
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
}

@@ -6,6 +6,7 @@ import (
"testing"
"time"

builderapi "github.com/OffchainLabs/prysm/v6/api/client/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"

@@ -3634,4 +3635,52 @@ func TestServer_ProposeBeaconBlock_PostFuluBlindedBlock(t *testing.T) {
require.NotNil(t, res)
require.NotEmpty(t, res.BlockRoot)
})

t.Run("blinded block - 502 error handling", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 10
params.OverrideBeaconConfig(cfg)

mockBuilder := &builderTest.MockBuilderService{
HasConfigured: true,
Cfg: &builderTest.Config{BeaconDB: db},
PayloadDeneb: &enginev1.ExecutionPayloadDeneb{},
ErrSubmitBlindedBlock: builderapi.ErrBadGateway,
}

c := &mock.ChainService{State: beaconState, Root: parentRoot[:]}
proposerServer := &Server{
ChainStartFetcher: &mockExecution.Chain{},
Eth1InfoFetcher: &mockExecution.Chain{},
Eth1BlockFetcher: &mockExecution.Chain{},
BlockReceiver: c,
BlobReceiver: c,
HeadFetcher: c,
BlockNotifier: c.BlockNotifier(),
OperationNotifier: c.OperationNotifier(),
StateGen: stategen.New(db, doublylinkedtree.New()),
TimeFetcher: c,
SyncChecker: &mockSync.Sync{IsSyncing: false},
BeaconDB: db,
BlockBuilder: mockBuilder,
P2P: &mockp2p.MockBroadcaster{},
}

blindedBlock := util.NewBlindedBeaconBlockDeneb()
blindedBlock.Message.Slot = 160 // This puts us at epoch 5 (160/32 = 5)
blindedBlock.Message.ProposerIndex = 0
blindedBlock.Message.ParentRoot = parentRoot[:]
blindedBlock.Message.StateRoot = make([]byte, 32)

req := &ethpb.GenericSignedBeaconBlock{
Block: &ethpb.GenericSignedBeaconBlock_BlindedDeneb{BlindedDeneb: blindedBlock},
}

// Should handle 502 error gracefully and continue with original blinded block
res, err := proposerServer.ProposeBeaconBlock(ctx, req)
require.NoError(t, err)
require.NotNil(t, res)
require.NotEmpty(t, res.BlockRoot)
})
}

@@ -14,7 +14,7 @@ import (
"github.com/pkg/errors"
)

const signatureVerificationInterval = 50 * time.Millisecond
const signatureVerificationInterval = 5 * time.Millisecond

type signatureVerifier struct {
set *bls.SignatureBatch

@@ -169,7 +169,7 @@ func FetchDataColumnSidecars(
result[root] = sidecars
}

log.WithField("finalMissingRootCount", len(incompleteRoots)).Debug("Failed to fetch data column sidecars from storage and peers using rescue mode")
log.WithField("finalMissingRootCount", len(incompleteRoots)).Warning("Failed to fetch data column sidecars")
return result, missingByRoot, nil
}

@@ -738,7 +738,7 @@ func fetchDataColumnSidecarsFromPeers(

roDataColumns, err := sendDataColumnSidecarsRequest(params, slotByRoot, slotsWithCommitments, peerID, indicesByRoot)
if err != nil {
log.WithError(err).Warning("Failed to send data column sidecars request")
log.WithError(err).Debug("Failed to send data column sidecars request")
return
}

@@ -45,6 +45,7 @@ func TestFetchDataColumnSidecars(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)

// Start the trusted setup.

@@ -760,6 +761,12 @@ func TestVerifyDataColumnSidecarsByPeer(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)

params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 2}}
params.OverrideBeaconConfig(cfg)

t.Run("nominal", func(t *testing.T) {
const (
start, stop = 0, 15

@@ -683,6 +683,7 @@ func TestFetchOriginColumns(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)

const (

@@ -241,6 +241,9 @@ func (s *Service) saveAttestation(att ethpb.Att) error {
if features.Get().EnableExperimentalAttestationPool {
return s.cfg.attestationCache.Add(att)
}
if att.IsAggregated() {
return s.cfg.attPool.SaveAggregatedAttestation(att)
}
return s.cfg.attPool.SaveUnaggregatedAttestation(att)
}

@@ -300,40 +303,26 @@ func (s *Service) processVerifiedAttestation(
}

func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAggregateAttAndProof) {
att := aggregate.AggregateAttestationAndProof().AggregateVal()

// Save the pending aggregated attestation to the pool if it passes the aggregated
// validation steps.
valRes, err := s.validateAggregatedAtt(ctx, aggregate)
res, err := s.validateAggregatedAtt(ctx, aggregate)
if err != nil {
log.WithError(err).Debug("Pending aggregated attestation failed validation")
return
}
aggValid := pubsub.ValidationAccept == valRes
if aggValid && s.validateBlockInAttestation(ctx, aggregate) {
if features.Get().EnableExperimentalAttestationPool {
if err = s.cfg.attestationCache.Add(att); err != nil {
log.WithError(err).Debug("Could not save aggregated attestation")
return
}
} else {
if att.IsAggregated() {
if err = s.cfg.attPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Debug("Could not save aggregated attestation")
return
}
} else if err = s.cfg.attPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Debug("Could not save unaggregated attestation")
return
}
}
if res != pubsub.ValidationAccept || !s.validateBlockInAttestation(ctx, aggregate) {
log.Debug("Pending aggregated attestation failed validation")
return
}

s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
att := aggregate.AggregateAttestationAndProof().AggregateVal()
if err := s.saveAttestation(att); err != nil {
log.WithError(err).Debug("Could not save aggregated attestation")
return
}

// Broadcasting the signed attestation again once a node is able to process it.
if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
log.WithError(err).Debug("Could not broadcast")
}
s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())

if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
log.WithError(err).Debug("Could not broadcast aggregated attestation")
}
}

@@ -147,6 +147,11 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
}
cancelFunction()

// Process pending attestations for this block.
if err := s.processPendingAttsForBlock(ctx, blkRoot); err != nil {
log.WithError(err).Debug("Failed to process pending attestations for block")
}

// Remove the processed block from the queue.
if err := s.removeBlockFromQueue(b, blkRoot); err != nil {
return err

@@ -70,6 +70,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
log.Trace("Serving data column sidecars by range")

if rangeParameters == nil {
closeStream(stream, log)
return nil
}

@@ -23,16 +23,15 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/util"
)

func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
//beaconConfig.FuluForkEpoch = beaconConfig.ElectraForkEpoch + 100
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
params.BeaconConfig().InitializeForkSchedule()

@@ -47,6 +46,7 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {

ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)

t.Run("invalid request", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)

@@ -72,8 +72,8 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
code, _, err := readStatusCodeNoDeadline(stream, localP2P.Encoding())
require.NoError(t, err)
require.Equal(t, responseCodeInvalidRequest, code)
assert.NoError(t, err)
assert.Equal(t, responseCodeInvalidRequest, code)
})

localP2P.Connect(remoteP2P)

@@ -94,6 +94,48 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
}
})

t.Run("in the future", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)

localP2P, remoteP2P := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
protocolID := protocol.ID(fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1))

service := &Service{
cfg: &config{
p2p: localP2P,
chain: &chainMock.ChainService{
Slot: &slot,
},
clock: clock,
},
rateLimiter: newRateLimiter(localP2P),
}

var wg sync.WaitGroup
wg.Add(1)

remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()

_, err := readChunkedDataColumnSidecar(stream, remoteP2P, ctxMap)
assert.Equal(t, true, errors.Is(err, io.EOF))
})

localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)

msg := &pb.DataColumnSidecarsByRangeRequest{
StartSlot: slot + 1,
Count: 50,
Columns: []uint64{1, 2, 3, 4, 6, 7, 8, 9, 10},
}

err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})

t.Run("nominal", func(t *testing.T) {
slot := primitives.Slot(400)

@@ -133,12 +175,12 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
signedBeaconBlockPb.Block.ParentRoot = roots[i-1][:]
}

signedBeaconBlock, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlockPb)
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)

// There is a discrepancy between the root of the beacon block and the rodata column root,
// but for the sake of this test, we actually don't care.
roblock, err := consensusblocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
roblock, err := blocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
require.NoError(t, err)

roBlocks = append(roBlocks, roblock)

@@ -178,28 +220,28 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
break
}

require.NoError(t, err)
assert.NoError(t, err)
sidecars = append(sidecars, sidecar)
}

require.Equal(t, 8, len(sidecars))
require.Equal(t, root0, sidecars[0].BlockRoot())
require.Equal(t, root0, sidecars[1].BlockRoot())
require.Equal(t, root0, sidecars[2].BlockRoot())
require.Equal(t, root3, sidecars[3].BlockRoot())
require.Equal(t, root3, sidecars[4].BlockRoot())
require.Equal(t, root5, sidecars[5].BlockRoot())
require.Equal(t, root5, sidecars[6].BlockRoot())
require.Equal(t, root5, sidecars[7].BlockRoot())
assert.Equal(t, 8, len(sidecars))
assert.Equal(t, root0, sidecars[0].BlockRoot())
assert.Equal(t, root0, sidecars[1].BlockRoot())
assert.Equal(t, root0, sidecars[2].BlockRoot())
assert.Equal(t, root3, sidecars[3].BlockRoot())
assert.Equal(t, root3, sidecars[4].BlockRoot())
assert.Equal(t, root5, sidecars[5].BlockRoot())
assert.Equal(t, root5, sidecars[6].BlockRoot())
assert.Equal(t, root5, sidecars[7].BlockRoot())

require.Equal(t, uint64(1), sidecars[0].Index)
require.Equal(t, uint64(2), sidecars[1].Index)
require.Equal(t, uint64(3), sidecars[2].Index)
require.Equal(t, uint64(4), sidecars[3].Index)
require.Equal(t, uint64(6), sidecars[4].Index)
require.Equal(t, uint64(7), sidecars[5].Index)
require.Equal(t, uint64(8), sidecars[6].Index)
require.Equal(t, uint64(9), sidecars[7].Index)
assert.Equal(t, uint64(1), sidecars[0].Index)
assert.Equal(t, uint64(2), sidecars[1].Index)
assert.Equal(t, uint64(3), sidecars[2].Index)
assert.Equal(t, uint64(4), sidecars[3].Index)
assert.Equal(t, uint64(6), sidecars[4].Index)
assert.Equal(t, uint64(7), sidecars[5].Index)
assert.Equal(t, uint64(8), sidecars[6].Index)
assert.Equal(t, uint64(9), sidecars[7].Index)
})

localP2P.Connect(remoteP2P)

@@ -215,7 +257,6 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})

}

func TestValidateDataColumnsByRange(t *testing.T) {

@@ -94,9 +94,11 @@ func TestVerifyIndexInCommittee_ExistsInBeaconCommittee(t *testing.T) {
assert.ErrorContains(t, wanted, err)
assert.Equal(t, pubsub.ValidationReject, result)

att.Data.CommitteeIndex = 10000
// Test the edge case where committee index equals count (should be rejected)
// With 64 validators and minimal config, count = 2, so valid indices are 0 and 1
att.Data.CommitteeIndex = 2
_, _, result, err = service.validateCommitteeIndexAndCount(ctx, att, s)
require.ErrorContains(t, "committee index 10000 > 2", err)
require.ErrorContains(t, "committee index 2 >= 2", err)
assert.Equal(t, pubsub.ValidationReject, result)
}

@@ -112,8 +112,12 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
// Verify the block being voted and the processed state is in beaconDB and the block has passed validation if it's in the beaconDB.
blockRoot := bytesutil.ToBytes32(data.BeaconBlockRoot)
if !s.hasBlockAndState(ctx, blockRoot) {
// Block not yet available - save attestation to pending queue for later processing
// when the block arrives. Return ValidationIgnore so gossip doesn't potentially penalize the peer.
s.savePendingAtt(att)
return pubsub.ValidationIgnore, nil
}
// Block exists - verify it's in forkchoice (i.e., it's a descendant of the finalized checkpoint)
if !s.cfg.chain.InForkchoice(blockRoot) {
tracing.AnnotateError(span, blockchain.ErrNotDescendantOfFinalized)
return pubsub.ValidationIgnore, blockchain.ErrNotDescendantOfFinalized

@@ -274,8 +278,8 @@ func (s *Service) validateCommitteeIndexAndCount(
} else {
ci = a.GetCommitteeIndex()
}
if uint64(ci) > count {
return 0, 0, pubsub.ValidationReject, fmt.Errorf("committee index %d > %d", ci, count)
if uint64(ci) >= count {
return 0, 0, pubsub.ValidationReject, fmt.Errorf("committee index %d >= %d", ci, count)
}
return ci, valCount, pubsub.ValidationAccept, nil
}

@@ -611,3 +611,41 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
})
})
}

func Test_validateCommitteeIndexAndCount_Boundary(t *testing.T) {
ctx := t.Context()

// Create a minimal state with a known number of validators.
validators := uint64(64)
bs, _ := util.DeterministicGenesisState(t, validators)
require.NoError(t, bs.SetSlot(1))

s := &Service{}

// Build a minimal Phase0 attestation (unaggregated path).
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
},
}

// First call to obtain the active validator count used to derive committees per slot.
_, valCount, res, err := s.validateCommitteeIndexAndCount(ctx, att, bs)
require.NoError(t, err)
require.Equal(t, pubsub.ValidationAccept, res)

count := helpers.SlotCommitteeCount(valCount)

// committee_index == count - 1 should be accepted.
att.Data.CommitteeIndex = primitives.CommitteeIndex(count - 1)
_, _, res, err = s.validateCommitteeIndexAndCount(ctx, att, bs)
require.NoError(t, err)
require.Equal(t, pubsub.ValidationAccept, res)

// committee_index == count should be rejected (out of range).
att.Data.CommitteeIndex = primitives.CommitteeIndex(count)
_, _, res, err = s.validateCommitteeIndexAndCount(ctx, att, bs)
require.ErrorContains(t, "committee index", err)
require.Equal(t, pubsub.ValidationReject, res)
}

@@ -294,6 +294,9 @@ func (s *Service) validatePhase0Block(ctx context.Context, blk interfaces.ReadOn
}

if err := blocks.VerifyBlockSignatureUsingCurrentFork(parentState, blk, blockRoot); err != nil {
if errors.Is(err, blocks.ErrInvalidSignature) {
s.setBadBlock(ctx, blockRoot)
}
return nil, err
}
// In the event the block is more than an epoch ahead from its

@@ -103,11 +103,84 @@ func TestValidateBeaconBlockPubSub_InvalidSignature(t *testing.T) {
},
}
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
require.ErrorIs(t, err, signing.ErrSigFailedToVerify)
require.ErrorContains(t, "invalid signature", err)
result := res == pubsub.ValidationReject
assert.Equal(t, true, result)
}

func TestValidateBeaconBlockPubSub_InvalidSignature_MarksBlockAsBad(t *testing.T) {
db := dbtest.SetupDB(t)
p := p2ptest.NewTestP2P(t)
ctx := t.Context()
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
parentBlock := util.NewBeaconBlock()
util.SaveBlock(t, ctx, db, parentBlock)
bRoot, err := parentBlock.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, db.SaveState(ctx, beaconState, bRoot))
require.NoError(t, db.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bRoot[:]}))
copied := beaconState.Copy()
require.NoError(t, copied.SetSlot(1))
proposerIdx, err := helpers.BeaconProposerIndex(ctx, copied)
require.NoError(t, err)
msg := util.NewBeaconBlock()
msg.Block.ParentRoot = bRoot[:]
msg.Block.Slot = 1
msg.Block.ProposerIndex = proposerIdx
badPrivKeyIdx := proposerIdx + 1 // We generate a valid signature from a wrong private key which fails to verify
msg.Signature, err = signing.ComputeDomainAndSign(beaconState, 0, msg.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[badPrivKeyIdx])
require.NoError(t, err)

stateGen := stategen.New(db, doublylinkedtree.New())
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0),
FinalizedCheckPoint: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
},
DB: db,
}
r := &Service{
cfg: &config{
beaconDB: db,
p2p: p,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
blockNotifier: chainService.BlockNotifier(),
stateGen: stateGen,
},
seenBlockCache: lruwrpr.New(10),
badBlockCache: lruwrpr.New(10),
}

blockRoot, err := msg.Block.HashTreeRoot()
require.NoError(t, err)

// Verify block is not marked as bad initially
assert.Equal(t, false, r.hasBadBlock(blockRoot), "block should not be marked as bad initially")

buf := new(bytes.Buffer)
_, err = p.Encoding().EncodeGossip(buf, msg)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(msg)]
digest, err := r.currentForkDigest()
assert.NoError(t, err)
topic = r.addDigestToTopic(topic, digest)
m := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
require.ErrorContains(t, "invalid signature", err)
result := res == pubsub.ValidationReject
assert.Equal(t, true, result)

// Verify block is now marked as bad after invalid signature
assert.Equal(t, true, r.hasBadBlock(blockRoot), "block should be marked as bad after invalid signature")
}

func TestValidateBeaconBlockPubSub_BlockAlreadyPresentInDB(t *testing.T) {
db := dbtest.SetupDB(t)
ctx := t.Context()

@@ -976,7 +1049,7 @@ func TestValidateBeaconBlockPubSub_InvalidParentBlock(t *testing.T) {
},
}
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
require.ErrorContains(t, "could not unmarshal bytes into signature", err)
require.ErrorContains(t, "invalid signature", err)
assert.Equal(t, res, pubsub.ValidationReject, "block with invalid signature should be rejected")

require.NoError(t, copied.SetSlot(2))

@@ -58,7 +58,6 @@ func TestValid(t *testing.T) {

t.Run("one invalid column", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
columns[0].KzgCommitments = [][]byte{}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)

err := verifier.ValidFields()

@@ -67,6 +66,14 @@ func TestValid(t *testing.T) {
})

t.Run("nominal", func(t *testing.T) {
const maxBlobsPerBlock = 2

params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)

columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)

@@ -1,2 +0,0 @@
### Changed
- Updated outdated documentation links for Web3Signer and Why Bazel.

@@ -1,3 +0,0 @@
### Fixed

- Added missing `meta` field to the response of the endpoint `/eth/v1/node/peers` to align with the Beacon API spec (#15370)

@@ -1,3 +0,0 @@
### Fixed

- Replace fmt.Printf with proper test error handling in web3signer keymanager tests, using require.NoError(t, err) instead of t.Fatalf for better error handling and debugging.

@@ -1,2 +0,0 @@
## Ignored
- Remove duplicate test case in `container/leaky-bucket/collector_test.go` to reduce redundancy. (#15672)

@@ -1,3 +0,0 @@
### Changed

- When shutting down the sync service, we now send p2p goodbye messages in parallel to maximize the chances of propagating goodbyes to all peers before an unsafe shutdown.

@@ -1,3 +0,0 @@
### Changed

- Changed from an in-memory to a persistent discv5 DB to keep local node information persistent, which is key to keeping the ENR sequence number deterministic when restarting.

@@ -1,3 +0,0 @@
### Added

- Add BLOB_SCHEDULE field to `/eth/v1/config/spec` endpoint response to expose blob scheduling configuration for networks.

@@ -1,3 +0,0 @@
### Ignored

- Moved the validation logic for saving LC updates to the store function.

@@ -1,3 +0,0 @@
### Ignored

- Moved the validation logic for saving LC finality/optimistic updates to the store.

@@ -1,4 +0,0 @@
### Added

- Add SSZ support for two attestation APIs: `/eth/v1/validator/attestation_data` and
`/eth/v2/validator/aggregate_attestation`.

@@ -1,3 +0,0 @@
### Changed

- Restrict best LC update collection to canonical blocks.

@@ -1,3 +0,0 @@
### Changed

- Change LC Bootstrap logic to only save bootstraps on finalized checkpoints instead of every block.

@@ -1,3 +0,0 @@
### Added

- Added expected delay before broadcasting light client p2p messages.

@@ -1,3 +0,0 @@
### Fixed

- Fixed the versioning bug for light client data types in the Beacon API.

@@ -1,3 +0,0 @@
### Ignored

- Fulu spec tests coverage for light client tests.

@@ -1,4 +0,0 @@
### Changed

- Compare received LC messages over gossipsub with locally computed ones before forwarding. Also no longer save updates
from gossipsub, just validate and forward.

@@ -1,3 +0,0 @@
### Ignored

- Moved light-client package out of `beacon-chain/core/` and into `beacon-chain/`.

@@ -1,5 +0,0 @@
### Changed

- Moved the broadcast and event notifier logic for saving LC updates to the store function.
- Fixed the issue with broadcasting more than twice per LC Finality update, and the if-case bug.
- Separated the finality update validation rules for saving and broadcasting.

@@ -1,3 +0,0 @@
### Changed

- Put the initiation of LC Store behind the `enable-light-client` flag.

@@ -1,3 +0,0 @@
### Added

- Read light client optimistic and finality updates from LCStore instead of computing them in the beacon API handlers.

@@ -1,3 +0,0 @@
### Ignored

- Refactor light client bootstrap tests in the RPC package.

@@ -1,3 +0,0 @@
### Ignored

- Refactor light client kv tests.

@@ -1,3 +0,0 @@
### Ignored

- Refactor light client testing utils to use functional options.

@@ -1,3 +0,0 @@
### Ignored

- Removed extra/unused fields of the LC Beacon API server.

@@ -1,3 +0,0 @@
### Ignored

- Remove unused parameter `currentSlot` from LC functions.

@@ -1,3 +0,0 @@
### Added

- Save light client finality and optimistic updates to the light client store.

@@ -1,3 +0,0 @@
### Changed

- Move setter/getter functions for LC Bootstrap into LcStore for a unified interface.

@@ -1,3 +0,0 @@
### Changed

- Move setter/getter functions for LC Updates into LcStore for a unified interface.

@@ -1,3 +0,0 @@
### Added

- Add method `VersionToForkEpochMap()` to the `BeaconChainConfig` in the `params` package.

@@ -1,3 +0,0 @@
### Added

- Populate sszInfo of variable-length fields in AnalyzeObjects

@@ -0,0 +1,3 @@
### Added

- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling roots calculation for generated types while avoiding duplicate logic.

@@ -1,3 +0,0 @@
### Fixed

- Fix builder bid version compatibility to support Electra bids with Fulu blocks

@@ -1,3 +0,0 @@
### Fixed

- Skip genesis block retrieval when EIP-6110 deposit requests have started to prevent "pruned history unavailable" errors with execution clients that have pruned pre-merge data

@@ -1,3 +0,0 @@
### Changed

- Clarified misleading log messages in beacon-chain/rpc/service gRPC module.

@@ -1,3 +0,0 @@
### Changed

- Updated consensus spec from v1.6.0-alpha.5 to v1.6.0-alpha.6

@@ -1,3 +0,0 @@
### Added

- Fulu block proposal changes for beacon api and gRPC.

3
changelog/james-prysm_cleanup-process-aggregate.md
Normal file

@@ -0,0 +1,3 @@
### Ignored

- Small code changes for reusability and readability to processAggregate.

@@ -1,3 +0,0 @@
### Removed

- removed unused configs and hides prysm specific configs from `/eth/v1/config/spec` endpoint

@@ -1,3 +0,0 @@
### Added

- Adding `/eth/v1/debug/beacon/data_column_sidecars/{block_id}` endpoint.

@@ -1,3 +0,0 @@
### Changed

- Changed `enable-duties-v2` to `disable-duties-v2` to default to using duties v2.

@@ -1,3 +0,0 @@
### Changed

- Deprecated and added error to /prysm/v1/beacon/blobs endpoint for post Fulu fork.

3
changelog/james-prysm_fix-block-event.md
Normal file

@@ -0,0 +1,3 @@
### Fixed

- Block events shouldn't be sent on certain block processing failures; they are now sent only on successful processing: when the block is NON-CANONICAL, when the block IS CANONICAL but getFCUArgs FAILS, and on full success.

Some files were not shown because too many files have changed in this diff.