Compare commits


39 Commits

Author SHA1 Message Date
Potuz
ce48bcd0fc Parallelize commitment computation
- Also fix the --rlnc-block-chunks flag so it is passed on to the validator
- Add a feature flag --rlnc-mesh-size to configure at runtime how many
  peers we send chunks to.
2025-05-15 14:28:09 -03:00
potuz
4217fee54c add benches 2025-05-15 11:37:30 -03:00
potuz
b8ccfb6d13 Adjust mesh size as a function of chunk count 2025-05-14 14:20:50 -03:00
potuz
c72e3d9d67 add chunks feature flag 2025-05-13 16:22:56 -03:00
Potuz
bb4022fd01 add larger trusted setup 2025-05-13 15:37:07 -03:00
Potuz
740de753fb Fix genesis block proposal (#15084)
* Fix genesis block proposal

* fix test

* fix test 2
2025-04-24 10:07:55 -03:00
Potuz
11fd5cb40a ignore some chunks and blocks earlier 2025-04-15 14:53:44 -03:00
Potuz
f4b5b89508 delay block proposal 2025-04-15 14:29:37 -03:00
Potuz
dcb1923d0b better logs 2025-04-15 14:23:09 -03:00
Potuz
0c69601014 Add feature flag to delay block broadcast 2025-04-15 13:41:36 -03:00
Potuz
82e74cfa64 do not verify block signatures over RPC 2025-01-29 11:19:31 -03:00
Potuz
a90cdd8d7d create new context to process blocks 2025-01-29 09:56:19 -03:00
Potuz
1220e0798a send block feed when received via chunks 2025-01-29 08:48:26 -03:00
nisdas
8a6d74d636 Use SignedBeaconBlock 2025-01-28 22:41:04 +08:00
Potuz
b6f76033f1 Fix chunking 2025-01-28 08:29:44 -03:00
nisdas
cc5681b0d8 Build Reproducible Test 2025-01-28 14:28:13 +08:00
nisdas
f8ddcb8978 Fix Interface Assertion 2025-01-28 13:02:22 +08:00
Potuz
84051f43bb broadcast chunks on gossip 2025-01-27 16:26:01 -03:00
Potuz
e593797a5c add log when decoding a block 2025-01-27 16:10:22 -03:00
Potuz
51b86abd06 Broadcast chunked blocks 2025-01-27 16:05:41 -03:00
Potuz
c2073dfd3f fix block encoding when chunking 2025-01-27 12:45:42 -03:00
nisdas
ca2a20e992 Fix Validator Config 2025-01-27 21:48:16 +08:00
Nishant Das
e4075b6684 Implement RLNC Broadcasting (#14830)
Add a pubsub API to broadcast chunks
2025-01-27 10:18:54 -03:00
Potuz
50e327c7f8 Do not check blob signature 2025-01-24 06:29:28 -03:00
Potuz
2771bfc5f7 Add the signatures over the commitments to the block before processing the state transition 2025-01-24 06:29:28 -03:00
Potuz
186974fc7f Add endpoint to propose blocks
The implementation of Broadcast is blocked as it needs to be changed
2025-01-24 06:29:28 -03:00
Potuz
9ca832b4fb remove signature check 2025-01-24 06:29:28 -03:00
Potuz
7a00a9cc16 Add Ristretto trusted setup
Load the setup from a file and use it across all clients.
2025-01-24 06:29:28 -03:00
Potuz
9a3bcb876c Add chunk broadcasting
Also make `ReadOnlyBeaconBlockChunk` actually read only and copy the
return values.
2025-01-24 06:29:28 -03:00
Potuz
8bb3c388ed Handle the block when it can be recovered
Still TODO

- Handle chunks for pending blocks
- Handle state transition logic
- Handle rebroadcasting
- Handle block production
2025-01-24 06:29:28 -03:00
Potuz
044456a2c3 prune the chunk cache every 10 minutes 2025-01-24 06:29:28 -03:00
Potuz
303ac87d17 Add a cache for incoming chunks
- Finish validation of each incoming chunk including signature
  verification when needed.

Still TODO:
- Handle chunks for pending blocks
- Convert the decoded block to an actual block and send it to the
  blockchain package.
2025-01-24 06:29:28 -03:00
Potuz
d93ab44a37 Handle incoming chunks on the P2P layer
- Add hooks for the new topic "beacon_block_chunk"
- Add protos for the new messages
- Handle the sync package logic of signature validation
- Create native types for the chunks to be handled in the sync package

Still TODO:
- Handle chunks for pending blocks.
- Handle nodes with incoming chunks, avoid verifying signature twice
- Decode the block and send it to the blockchain package
2025-01-24 06:29:28 -03:00
Potuz
8b905d24a0 Add feature flag 2025-01-24 06:29:28 -03:00
Potuz
f46eaa88bf Add decoding and tests. 2025-01-24 06:29:28 -03:00
Potuz
799bd33158 Add node, receive and prepare message functions 2025-01-24 06:29:28 -03:00
Potuz
f51132fcda Add echelon form and addRow 2025-01-24 06:29:28 -03:00
Potuz
a32ed2eb98 Add chunks and message verifying 2025-01-24 06:29:28 -03:00
Potuz
7043dacaf9 add committer 2025-01-24 06:29:28 -03:00
474 changed files with 85943 additions and 19161 deletions

View File

@@ -1,21 +0,0 @@
name: Protobuf Format
on:
  push:
    branches: [ '*' ]
  pull_request:
    branches: [ '*' ]
  merge_group:
    types: [checks_requested]
jobs:
  clang-format-checking:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Is this step failing for you?
      # Run: clang-format -i proto/**/*.proto
      # See: https://clang.llvm.org/docs/ClangFormat.html
      - uses: RafikFarhad/clang-format-github-action@v3
        with:
          sources: "proto/**/*.proto"

View File

@@ -4,141 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v5.3.0](https://github.com/prysmaticlabs/prysm/compare/v5.2.0...v5.3.0) - 2025-02-12
This release includes support for Pectra activation in the [Holesky](https://github.com/eth-clients/holesky) and [Sepolia](https://github.com/eth-clients/sepolia) testnets! The release contains many fixes for Electra that have been found in rigorous testing through devnets in the last few months.
For mainnet, we have a few nice features for you to try:
- [PR #14023](https://github.com/prysmaticlabs/prysm/pull/14023) introduces a new file layout structure for storing blobs. Rather than storing all blob root directories in one parent directory, blob root directories are organized in subdirectories by epoch. This should vastly decrease the blob cache warmup time when Prysm is starting. Try this feature with `--blob-storage-layout=by-epoch`.
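  As an illustration (hypothetical epoch, slot, and root values): the flat layout stores each blob root directory directly under the blobs directory, e.g. `blobs/0xd8ea.../`, while `by-epoch` nests the same root under its epoch and slot, e.g. `blobs/11235/359521/0xd8ea.../` (epoch/slot/root, per the layout described in the PR).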
Updating to this release is **required** for Holesky and Sepolia operators and it is **recommended** for mainnet users as there are a few bug fixes that apply to deneb logic.
### Added
- Added an error field to log `Finished building block`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14696)
- Implemented a new `EmptyExecutionPayloadHeader` function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14713)
- Added proper gas limit check for header from the builder. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14707)
- `Finished building block`: Display error only if not nil. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14722)
- Added light client feature flag check to RPC handlers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14736). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14782)
- Added support to update target and max blob count to different values per hard fork config. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14678)
- Log before blob filesystem cache warm-up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14735)
- New design for the attestation pool. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14324)
- Add field param placeholder for Electra blob target and max to pass spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14733)
- Light client: Add better error handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14749). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14782)
- Add EIP-7691: Blob throughput increase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14750)
- Trace IDONTWANT Messages in Pubsub. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14778)
- Add Fulu fork boilerplate. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14771)
- DB optimization for saving light client bootstraps (save unique sync committees only). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14782)
- Separate type for unaggregated network attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14659)
- Remote signer electra fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14477)
- Add Electra test case to rewards API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14816)
- Update `proto_test.go` to Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14817)
- Update slasher service to Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14812)
- Builder API endpoint to support Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14344)
- Added protoc toolchains with a version of v25.3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Add test cases for the eth_lightclient_bootstrap API SSZ support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14824)
- Handle `AttesterSlashingElectra` everywhere in the codebase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14823)
- Add Beacon DB pruning service to prune historical data older than MIN_EPOCHS_FOR_BLOCK_REQUESTS (roughly equivalent to the weak subjectivity period). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14687)
- Nil consolidation request check for core processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14851)
- Updated blob sidecar api endpoint for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14852)
- Slashing pool service to convert slashings from Phase0 to Electra at the fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14844)
- Check to stop eth1 voting after Electra once eth1 deposits stop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14835)
- WARN log message on node startup advising of the upcoming deprecation of the --enable-historical-state-representation feature flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14856)
- Beacon API event support for `SingleAttestation` and `SignedAggregateAttestationAndProofElectra`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14855)
- Added Electra tests for `TestLightClient_NewLightClientOptimisticUpdateFromBeaconState` and `TestLightClient_NewLightClientFinalityUpdateFromBeaconState`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14783)
- New option to select an alternate blob storage layout. Rather than a flat directory with a subdir for each block root, a multi-level scheme is used to organize blobs by epoch/slot/root, enabling leaner syscalls, indexing and pruning. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14023)
- Send pending att queue's attestations through the notification feed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14862)
- Prune all pending deposits and proofs in post-Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14829)
- Add Pectra testnet dates. (Sepolia and Holesky). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14884)
### Changed
- Process light client finality updates only for new finalized epochs instead of doing it for every block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14713)
- Refactor subnets subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14711)
- Refactor RPC handlers subscriptions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14732)
- Go deps upgrade, from `ioutil` to `io`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14737)
- Move successfully registered validator(s) on builder log to debug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14735)
- Update some test files to use `crypto/rand` instead of `math/rand`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14747)
- Re-organize the content of the `*.proto` files (No functional change). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14755)
- SSZ files generation: Remove the `// Hash: ...` header. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14760)
- Updated Electra spec definition for `process_epoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14768)
- Update our `go-libp2p-pubsub` dependency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14770)
- Re-organize the content of files to ease the creation of a new fork boilerplate. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14761)
- Updated spec definition electra `process_registry_updates`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14767)
- Fixed Metadata errors for peers connected via QUIC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14776)
- Updated spec definitions for `process_slashings` in godocs. Simplified `ProcessSlashings` API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14766)
- Update spec tests to v1.5.0-beta.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14788)
- Process light client finality updates only for new finalized epochs instead of doing it for every block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14718)
- Update blobs by rpc topics from V2 to V1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14785)
- Updated geth to 1.14~. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14351)
- E2e tests start from bellatrix. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14351)
- Pinned the unclog version after making some UX improvements. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14802)
- Remove helpers to check for execution/compounding withdrawal credentials and expose them as methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14808)
- Refactor `2006-01-02 15:04:05` to `time.DateTime`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14792)
- Updated Prysm to Go v1.23.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Updated Bazel version to v7.4.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Updated rules_go to v0.46.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- Updated golang.org/x/tools to be compatible with v1.23.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14818)
- CI now requires proto files to be properly formatted with clang-format. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14831)
- Improved test coverage of beacon-chain/core/electra/churn.go. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14837)
- Update electra spec test to beta1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14841)
- Move deposit request nil check to apply all. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14849)
- Do not mark blocks as invalid on context deadlines during state transition. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14838)
- Update electra core processing to not mark block bad if execution request error. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14826)
- Dependency: Updated go-ethereum to v1.14.13. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14872)
- Improved readability of the proposer settings loader. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14868)
- Removed the existing validator.processSlot span and added a validator.processSlot span to slotCtx. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14874)
- DownloadFinalizedData has moved from the api/client package to beacon-chain/sync/checkpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14871)
- Updated Blob-Batch-Limit to increase to 192 for electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14883)
- Updated Blob-Batch-Limit-Burst-Factor to increase to 3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14883)
- Changed the derived batch limit when serving blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14883)
- Updated go-libp2p-pubsub to v0.13.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14890)
- Rename light client flag from `enable-lightclient` to `enable-light-client`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14887)
- Update electra spec test to beta2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14901)
### Removed
- Cleanup ProcessSlashings method to remove unnecessary argument. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14762)
- Remove `/proto/eth/v2` directory. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14765)
- Remove `/memsize/` pprof endpoint as it will no longer be supported in go 1.23. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14351)
- Clean `TestCanUpgrade*` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14791)
- Remove `Copy()` from the `ReadOnlyBeaconBlock` interface. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14811)
- Removed a tracing span on signature requests. These requests usually took less than 5 nanoseconds and are generally not worth tracing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14864)
### Fixed
- Added check to prevent nil pointer deference or out of bounds array access when validating the BLSToExecutionChange on an impossibly nil validator. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14705)
- EIP-7691: Ensure new blobs subnets are subscribed on epoch in advance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14759)
- Fix kzg commitment inclusion proof depth minimal value. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14787)
- Replaced the example IP with `96.7.129.13`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14795)
- Fixed a p2p test to reliably return a static IP through DNS resolution. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14800)
- `ToBlinded`: Use Fulu struct for Fulu (instead of Electra). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14797)
- Fixed a panic with a type cast on pbgenericblock(). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14801)
- Prysmctl generate genesis state: fix truncation of ExtraData to 32 bytes to satisfy SSZ marshaling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14803)
- Added conditional evaluators to fix scenario e2e tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14798)
- Use `SingleAttestation` for Fulu in p2p attestation map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14809)
- `UpgradeToFulu`: Respect the specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14821)
- `nodeFilter`: Implement `filterPeerForBlobSubnet` to avoid error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14822)
- Fixed deposit packing for post-Electra: early return if EIP-6110 is applied. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14697)
- Fixed batch processing of new pending deposits by getting validators from state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14827)
- Fixed handling of a block not found at a slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14852)
- Fixed incorrect attester slashing length check. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14833)
- Fix monitor service for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14853)
- Added more nil checks on ToConsensus functions for added safety. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14867)
- Fixed Electra state to safely share references on pending fields when appending. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14895)
- Add missing config values from the spec. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14903)
- Removed the unused `rebuildTrie` assignments for fields which do not use them. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14906)
- Fixed the block API endpoint to handle blocks with the same structure but on different forks (i.e. Fulu and Electra). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14897)
- Changed how we track blob indexes during their reconstruction from the EL. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14909)
- We now use the correct maximum value when serving blobs for electra blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14910)
### Security
- Go version upgraded to 1.22.10 to address CVE-2024-34156. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14729)
- Update golang.org/x/crypto to v0.31.0 to address CVE-2024-45337. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14777)
- Update golang.org/x/net to v0.33.0 to address CVE-2024-45338. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14780)
## [v5.2.0](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...v5.2.0)
Updating to this release is highly recommended, especially for users running v5.1.1 or v5.1.2.
@@ -195,7 +60,7 @@ Notable features:
- Updated the default `scrape-interval` in `Client-stats` to 2 minutes to accommodate Beaconcha.in API rate limits.
- Switch to compounding when consolidating with source==target.
- Revert block db save when saving state fails.
- Return false from HasBlock if the block is being synced.
- Cleanup forkchoice on failed insertions.
- Use read only validator for core processing to avoid unnecessary copying.
- Use ROBlock across block processing pipeline.
@@ -208,7 +73,7 @@ Notable features:
- Simplified `EjectedValidatorIndices`.
- `engine_newPayloadV4`, `engine_getPayloadV4` changed due to new execution request serialization decisions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14580)
- Fixed various small things in state-native code.
- Use ROBlock earlier in block syncing pipeline.
- Changed the signature of `ProcessPayload`.
- Only Build the Protobuf state once during serialization.
- Capella blocks are execution.
@@ -274,9 +139,9 @@ Notable features:
### Security
## [v5.1.2](https://github.com/prysmaticlabs/prysm/compare/v5.1.1...v5.1.2) - 2024-10-16
This is a hotfix release with one change.
Prysm v5.1.1 contains an updated implementation of the beacon api streaming events endpoint. This
new implementation contains a bug that can cause a panic in certain conditions. The issue is
@@ -288,20 +153,20 @@ prysm REST mode validator (a feature which requires the validator to be configur
api instead of prysm's stock grpc endpoints) or accessory software that connects to the events api,
like https://github.com/ethpandaops/ethereum-metrics-exporter
### Fixed
- Recover from panics when writing the event stream [#14545](https://github.com/prysmaticlabs/prysm/pull/14545)
## [v5.1.1](https://github.com/prysmaticlabs/prysm/compare/v5.1.0...v5.1.1) - 2024-10-15
This release has a number of features and improvements. Most notably, the feature flag
`--enable-experimental-state` has been flipped to "opt out" via `--disable-experimental-state`.
The experimental state management design has shown significant improvements in memory usage at
runtime. Updates to libp2p's gossipsub have some bandwidth stability improvements with support for
IDONTWANT control messages.
The gRPC gateway has been deprecated from Prysm in this release. If you need JSON data, consider the
standardized beacon-APIs.
Updating to this release is recommended at your convenience.
@@ -343,7 +208,7 @@ Updating to this release is recommended at your convenience.
- `grpc-gateway-corsdomain` is renamed to `http-cors-domain`. The old name can still be used as an alias.
- `api-timeout` is changed from int flag to duration flag, default value updated.
- Light client support: abstracted out the light client headers with different versions.
- `ApplyToEveryValidator` has been changed to prevent misuse bugs, it takes a closure that takes a `ReadOnlyValidator` and returns a raw pointer to a `Validator`.
- Removed gorilla mux library and replaced it with net/http updates in go 1.22.
- Clean up `ProposeBlock` for validator client to reduce cognitive scoring and enable further changes.
- Updated k8s-io/client-go to v0.30.4 and k8s-io/apimachinery to v0.30.4
@@ -354,7 +219,7 @@ Updating to this release is recommended at your convenience.
- Updated Sepolia bootnodes.
- Make committee aware packing the default by deprecating `--enable-committee-aware-packing`.
- Moved `ConvertKzgCommitmentToVersionedHash` to the `primitives` package.
- Updated correlation penalty for EIP-7251.
### Deprecated
- `--disable-grpc-gateway` flag is deprecated due to grpc gateway removal.
@@ -828,34 +693,34 @@ AVX support (eg Celeron) after the Deneb fork. This is not an issue for mainnet.
- Linter: Wastedassign linter enabled to improve code quality.
- API Enhancements:
- Added payload return in Wei for /eth/v3/validator/blocks.
- Added Holesky Deneb Epoch for better epoch management.
- Testing Enhancements:
- Clear cache in tests of core helpers to ensure test reliability.
- Added Debug State Transition Method for improved debugging.
- Backfilling test: Enabled backfill in E2E tests for more comprehensive coverage.
- API Updates: Re-enabled jwt on keymanager API for enhanced security.
- Logging Improvements: Enhanced block by root log for better traceability.
- Validator Client Improvements:
- Added Spans to Core Validator Methods for enhanced monitoring.
- Improved readability in validator client code for better maintenance (various commits).
### Changed
- Optimizations and Refinements:
- Lowered resource usage in certain processes for efficiency.
- Moved blob rpc validation closer to peer read for optimized processing.
- Cleaned up validate beacon block code for clarity and efficiency.
- Updated Sepolia Deneb fork epoch for alignment with network changes.
- Changed blob latency metrics to milliseconds for more precise measurement.
- Altered getLegacyDatabaseLocation message for better clarity.
- Improved wait for activation method for enhanced performance.
- Capitalized Aggregated Unaggregated Attestations Log for consistency.
- Modified HistoricalRoots usage for accuracy.
- Adjusted checking of attribute emptiness for efficiency.
- Database Management:
- Moved --db-backup-output-dir as a deprecated flag for database management simplification.
- Added the Ability to Defragment the Beacon State for improved database performance.
- Dependency Update: Bumped quic-go version from 0.39.3 to 0.39.4 for up-to-date dependencies.
### Removed
@@ -866,12 +731,12 @@ AVX support (eg Celeron) after the Deneb fork. This is not an issue for mainnet.
### Fixed
- Bug Fixes:
- Fixed off by one error for improved accuracy.
- Resolved small typo in error messages for clarity.
- Addressed minor issue in blsToExecChange validator for better validation.
- Corrected blobsidecar json tag for commitment inclusion proof.
- Fixed ssz post-requests content type check.
- Resolved issue with port logging in bootnode.
- Test Fixes: Re-enabled Slasher E2E Test for more comprehensive testing.
### Security
@@ -1298,9 +1163,9 @@ No security issues in this release.
now features runtime detection, automatically enabling optimized code paths if your CPU supports it.
- **Multiarch Containers Preview Available**: multiarch (:wave: arm64 support :wave:) containers will be offered for
preview at the following locations:
- Beacon Chain: [gcr.io/prylabs-dev/prysm/beacon-chain:v4.1.0](gcr.io/prylabs-dev/prysm/beacon-chain:v4.1.0)
- Validator: [gcr.io/prylabs-dev/prysm/validator:v4.1.0](gcr.io/prylabs-dev/prysm/validator:v4.1.0)
- Please note that in the next cycle, we will exclusively use these containers at the canonical URLs.
### Added

View File

@@ -55,7 +55,7 @@ bazel build //beacon-chain --config=release
## Adding / updating dependencies
1. Add your dependency as you would with go modules. I.e. `go get ...`
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%prysm_deps -prune=true` to update the bazel managed dependencies.
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod` to update the bazel managed dependencies.
Example:

View File

@@ -255,7 +255,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-beta.2"
consensus_spec_version = "v1.5.0-beta.0"
bls_test_version = "v0.1.1"
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-X/bMxbKg1clo2aFEjBoeuFq/U+BF1eQopgRP/7nI3Qg=",
integrity = "sha256-HdMlTN3wv+hUMCkIRPk+EHcLixY1cSZlvkx3obEp4AM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -287,7 +287,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-WSxdri5OJGuNApW+odKle5UzToDyEOx+F3lMiqamJAg=",
integrity = "sha256-eX/ihmHQ+OvfoGJxSMgy22yAU3SZ3xjsX0FU0EaZrSs=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -303,7 +303,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-LYE8l3y/zSt4YVrehrJ3ralqtgeYNildiIp+HR6+xAI=",
integrity = "sha256-k3Onf42vOzIqyddecR6G82sDy3mmDA+R8RN66QjB0GI=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -318,7 +318,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-jvZQ90qcJMTOqMsPO7sgeEVQmewZTHcz7LVDkNqwTFQ=",
integrity = "sha256-N/d4AwdOSlb70Dr+2l20dfXxNSzJDj/qKA9Rkn8Gb5w=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
    name = "go_default_library",
    srcs = [
        "checkpoint.go",
        "client.go",
        "doc.go",
        "health.go",
@@ -15,19 +16,28 @@ go_library(
"//api/client/beacon/iface:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//io/file:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_x_mod//semver:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"checkpoint_test.go",
"client_test.go",
"health_test.go",
],
@@ -35,7 +45,19 @@ go_test(
    deps = [
        "//api/client:go_default_library",
        "//api/client/beacon/testing:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/blocks/testing:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//encoding/ssz/detect:go_default_library",
        "//network/forks:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
        "//runtime/version:go_default_library",
        "//testing/require:go_default_library",
        "//testing/util:go_default_library",
        "//time/slots:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
        "@org_uber_go_mock//gomock:go_default_library",
    ],
)

View File

@@ -0,0 +1,276 @@
package beacon

import (
	"context"
	"fmt"
	"path"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/pkg/errors"
	base "github.com/prysmaticlabs/prysm/v5/api/client"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
	"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
	"github.com/prysmaticlabs/prysm/v5/io/file"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
	"github.com/prysmaticlabs/prysm/v5/time/slots"
	"github.com/sirupsen/logrus"
	"golang.org/x/mod/semver"
)

var errCheckpointBlockMismatch = errors.New("mismatch between checkpoint sync state and block")

// OriginData represents the BeaconState and ReadOnlySignedBeaconBlock necessary to start an empty Beacon Node
// using Checkpoint Sync.
type OriginData struct {
	sb []byte
	bb []byte
	st state.BeaconState
	b  interfaces.ReadOnlySignedBeaconBlock
	vu *detect.VersionedUnmarshaler
	br [32]byte
	sr [32]byte
}

// SaveBlock saves the downloaded block to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (o *OriginData) SaveBlock(dir string) (string, error) {
	blockPath := path.Join(dir, fname("block", o.vu, o.b.Block().Slot(), o.br))
	return blockPath, file.WriteFile(blockPath, o.BlockBytes())
}

// SaveState saves the downloaded state to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (o *OriginData) SaveState(dir string) (string, error) {
	statePath := path.Join(dir, fname("state", o.vu, o.st.Slot(), o.sr))
	return statePath, file.WriteFile(statePath, o.StateBytes())
}

// StateBytes returns the ssz-encoded bytes of the downloaded BeaconState value.
func (o *OriginData) StateBytes() []byte {
	return o.sb
}

// BlockBytes returns the ssz-encoded bytes of the downloaded ReadOnlySignedBeaconBlock value.
func (o *OriginData) BlockBytes() []byte {
	return o.bb
}

func fname(prefix string, vu *detect.VersionedUnmarshaler, slot primitives.Slot, root [32]byte) string {
	return fmt.Sprintf("%s_%s_%s_%d-%#x.ssz", prefix, vu.Config.ConfigName, version.String(vu.Fork), slot, root)
}
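
// Illustrative (hypothetical values): with a mainnet phase0 unmarshaler and
// slot 32, fname("state", vu, 32, root) yields a name of the form
// "state_mainnet_phase0_32-0x<64 hex chars>.ssz".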
// DownloadFinalizedData downloads the most recently finalized state, and the block most recently applied to that state.
// This pair can be used to initialize a new beacon node via checkpoint sync.
func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, error) {
	sb, err := client.GetState(ctx, IdFinalized)
	if err != nil {
		return nil, err
	}
	vu, err := detect.FromState(sb)
	if err != nil {
		return nil, errors.Wrap(err, "error detecting chain config for finalized state")
	}
	log.WithFields(logrus.Fields{
		"name": vu.Config.ConfigName,
		"fork": version.String(vu.Fork),
	}).Info("Detected supported config in remote finalized state")
	s, err := vu.UnmarshalBeaconState(sb)
	if err != nil {
		return nil, errors.Wrap(err, "error unmarshaling finalized state to correct version")
	}
	slot := s.LatestBlockHeader().Slot
	bb, err := client.GetBlock(ctx, IdFromSlot(slot))
	if err != nil {
		return nil, errors.Wrapf(err, "error requesting block by slot = %d", slot)
	}
	b, err := vu.UnmarshalBeaconBlock(bb)
	if err != nil {
		return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
	}
	br, err := b.Block().HashTreeRoot()
	if err != nil {
		return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
	}
	bodyRoot, err := b.Block().Body().HashTreeRoot()
	if err != nil {
		return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block body")
	}
	sbr := bytesutil.ToBytes32(s.LatestBlockHeader().BodyRoot)
	if sbr != bodyRoot {
		return nil, errors.Wrapf(errCheckpointBlockMismatch, "state body root = %#x, block body root = %#x", sbr, bodyRoot)
	}
	sr, err := s.HashTreeRoot(ctx)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
	}
	log.
		WithField("blockSlot", b.Block().Slot()).
		WithField("stateSlot", s.Slot()).
		WithField("stateRoot", hexutil.Encode(sr[:])).
		WithField("blockRoot", hexutil.Encode(br[:])).
		Info("Downloaded checkpoint sync state and block.")
	return &OriginData{
		st: s,
		b:  b,
		sb: sb,
		bb: bb,
		vu: vu,
		br: br,
		sr: sr,
	}, nil
}
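
// Illustrative usage sketch (assumes a reachable beacon node API at
// localhost:3500 and a writable output directory):
//
//	c, err := NewClient("http://localhost:3500")
//	if err != nil {
//		// handle error
//	}
//	od, err := DownloadFinalizedData(context.Background(), c)
//	if err != nil {
//		// handle error
//	}
//	statePath, err := od.SaveState("/tmp/origin")
//	blockPath, err := od.SaveBlock("/tmp/origin")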
// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + ReadOnlySignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint beacon node flag to be used for validation.
type WeakSubjectivityData struct {
	BlockRoot [32]byte
	StateRoot [32]byte
	Epoch     primitives.Epoch
}

// CheckpointString returns the standard string representation of a Checkpoint.
// The format is a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (wsd *WeakSubjectivityData) CheckpointString() string {
	return fmt.Sprintf("%#x:%d", wsd.BlockRoot, wsd.Epoch)
}

// ComputeWeakSubjectivityCheckpoint attempts to use the prysm weak_subjectivity api
// to obtain the current weak_subjectivity checkpoint.
// For non-prysm nodes, the same computation will be performed with extra steps,
// using the head state downloaded from the beacon node api.
func ComputeWeakSubjectivityCheckpoint(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
	ws, err := client.GetWeakSubjectivity(ctx)
	if err != nil {
		// a 404/405 is expected if querying an endpoint that doesn't support the weak subjectivity checkpoint api
		if !errors.Is(err, base.ErrNotOK) {
			return nil, errors.Wrap(err, "unexpected API response for prysm-only weak subjectivity checkpoint API")
		}
		// fall back to vanilla Beacon Node API method
		return computeBackwardsCompatible(ctx, client)
	}
	log.Printf("server weak subjectivity checkpoint response - epoch=%d, block_root=%#x, state_root=%#x", ws.Epoch, ws.BlockRoot, ws.StateRoot)
	return ws, nil
}
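
// Illustrative usage sketch (hypothetical values): the resulting string is
// suitable for a weak subjectivity checkpoint flag of the form block_root:epoch.
//
//	wsd, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
//	if err != nil {
//		// handle error
//	}
//	fmt.Println(wsd.CheckpointString()) // e.g. "0x1c35...cc49a:74888"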
const (
	prysmMinimumVersion     = "v2.0.7"
	prysmImplementationName = "Prysm"
)

// errUnsupportedPrysmCheckpointVersion indicates remote beacon node can't be used for checkpoint retrieval.
var errUnsupportedPrysmCheckpointVersion = errors.New("node does not meet minimum version requirements for checkpoint retrieval")

// for older endpoints or clients that do not support the weak_subjectivity api method
// we gather the necessary data for a checkpoint sync by:
// - inspecting the remote server's head state and computing the weak subjectivity epoch locally
// - requesting the state at the first slot of the epoch
// - using hash_tree_root(state.latest_block_header) to compute the block the state integrates
// - requesting that block by its root
func computeBackwardsCompatible(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
	log.Print("falling back to generic checkpoint derivation, weak_subjectivity API not supported by server")
	nv, err := client.GetNodeVersion(ctx)
	if err != nil {
		return nil, errors.Wrap(err, "unable to proceed with fallback method without confirming node version")
	}
	if nv.implementation == prysmImplementationName && semver.Compare(nv.semver, prysmMinimumVersion) < 0 {
		return nil, errors.Wrapf(errUnsupportedPrysmCheckpointVersion, "%s < minimum (%s)", nv.semver, prysmMinimumVersion)
	}
	epoch, err := getWeakSubjectivityEpochFromHead(ctx, client)
	if err != nil {
		return nil, errors.Wrap(err, "error computing weak subjectivity epoch via head state inspection")
	}
	// use first slot of the epoch for the state slot
	slot, err := slots.EpochStart(epoch)
	if err != nil {
		return nil, errors.Wrapf(err, "error computing first slot of epoch=%d", epoch)
	}
	log.Printf("requesting checkpoint state at slot %d", slot)
	// get the state at the first slot of the epoch
	sb, err := client.GetState(ctx, IdFromSlot(slot))
	if err != nil {
		return nil, errors.Wrapf(err, "failed to request state by slot from api, slot=%d", slot)
	}
	// ConfigFork is used to unmarshal the BeaconState so we can read the block root in latest_block_header
	vu, err := detect.FromState(sb)
	if err != nil {
		return nil, errors.Wrap(err, "error detecting chain config for beacon state")
	}
	log.Printf("detected supported config in checkpoint state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
	s, err := vu.UnmarshalBeaconState(sb)
	if err != nil {
		return nil, errors.Wrap(err, "error using detected config fork to unmarshal state bytes")
	}
	// compute state and block roots
	sr, err := s.HashTreeRoot(ctx)
	if err != nil {
		return nil, errors.Wrap(err, "error computing hash_tree_root of state")
	}
	h := s.LatestBlockHeader()
	h.StateRoot = sr[:]
	br, err := h.HashTreeRoot()
	if err != nil {
		return nil, errors.Wrap(err, "error while computing block root using state data")
	}
	bb, err := client.GetBlock(ctx, IdFromRoot(br))
	if err != nil {
return nil, errors.Wrapf(err, "error requesting block by root = %d", br)
	}
	b, err := vu.UnmarshalBeaconBlock(bb)
	if err != nil {
		return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
	}
	br, err = b.Block().HashTreeRoot()
	if err != nil {
		return nil, errors.Wrap(err, "error computing hash_tree_root for block obtained via root")
	}
	return &WeakSubjectivityData{
		Epoch:     epoch,
		BlockRoot: br,
		StateRoot: sr,
	}, nil
}

// this method downloads the head state, which can be used to find the correct chain config
// and use prysm's helper methods to compute the latest weak subjectivity epoch.
func getWeakSubjectivityEpochFromHead(ctx context.Context, client *Client) (primitives.Epoch, error) {
	headBytes, err := client.GetState(ctx, IdHead)
	if err != nil {
		return 0, err
	}
	vu, err := detect.FromState(headBytes)
	if err != nil {
		return 0, errors.Wrap(err, "error detecting chain config for beacon state")
	}
	log.Printf("detected supported config in remote head state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
	headState, err := vu.UnmarshalBeaconState(headBytes)
	if err != nil {
		return 0, errors.Wrap(err, "error unmarshaling state to correct version")
	}
	epoch, err := helpers.LatestWeakSubjectivityEpoch(ctx, headState, vu.Config)
	if err != nil {
		return 0, errors.Wrap(err, "error computing the weak subjectivity epoch from head state")
	}
	log.Printf("(computed client-side) weak subjectivity epoch = %d", epoch)
	return epoch, nil
}

View File

@@ -1,4 +1,4 @@
package checkpoint
package beacon
import (
	"bytes"
@@ -11,7 +11,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -26,6 +25,19 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
type testRT struct {
	rt func(*http.Request) (*http.Response, error)
}

func (rt *testRT) RoundTrip(req *http.Request) (*http.Response, error) {
	if rt.rt != nil {
		return rt.rt(req)
	}
	return nil, errors.New("RoundTripper not implemented")
}

var _ http.RoundTripper = &testRT{}

func marshalToEnvelope(val interface{}) ([]byte, error) {
	raw, err := json.Marshal(val)
	if err != nil {
@@ -51,6 +63,35 @@ func TestMarshalToEnvelope(t *testing.T) {
	require.Equal(t, expected, string(encoded))
}

func TestFallbackVersionCheck(t *testing.T) {
	trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
		res := &http.Response{Request: req}
		switch req.URL.Path {
		case getNodeVersionPath:
			res.StatusCode = http.StatusOK
			b := bytes.NewBuffer(nil)
			d := struct {
				Version string `json:"version"`
			}{
				Version: "Prysm/v2.0.5 (linux amd64)",
			}
			encoded, err := marshalToEnvelope(d)
			require.NoError(t, err)
			b.Write(encoded)
			res.Body = io.NopCloser(b)
		case getWeakSubjectivityPath:
			res.StatusCode = http.StatusNotFound
		}
		return res, nil
	}}
	c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	require.NoError(t, err)
	ctx := context.Background()
	_, err = ComputeWeakSubjectivityCheckpoint(ctx, c)
	require.ErrorIs(t, err, errUnsupportedPrysmCheckpointVersion)
}
func TestFname(t *testing.T) {
vu := &detect.VersionedUnmarshaler{
Config: params.MainnetConfig(),
@@ -119,7 +160,7 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
	wsSerialized, err := wst.MarshalSSZ()
	require.NoError(t, err)
	expectedWSD := beacon.WeakSubjectivityData{
	expectedWSD := WeakSubjectivityData{
		BlockRoot: bRoot,
		StateRoot: wRoot,
		Epoch:     epoch,
@@ -128,7 +169,7 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
	trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
		res := &http.Response{Request: req}
		switch req.URL.Path {
		case beacon.GetWeakSubjectivityPath:
		case getWeakSubjectivityPath:
			res.StatusCode = http.StatusOK
			cp := struct {
				Epoch string `json:"epoch"`
@@ -147,10 +188,10 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
			rb, err := marshalToEnvelope(wsr)
			require.NoError(t, err)
			res.Body = io.NopCloser(bytes.NewBuffer(rb))
		case beacon.RenderGetStatePath(beacon.IdFromSlot(wSlot)):
		case renderGetStatePath(IdFromSlot(wSlot)):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
		case beacon.RenderGetBlockPath(beacon.IdFromRoot(bRoot)):
		case renderGetBlockPath(IdFromRoot(bRoot)):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
		}
@@ -158,7 +199,7 @@ func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
		return res, nil
	}}
	c, err := beacon.NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	require.NoError(t, err)
	wsd, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
@@ -222,7 +263,7 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
	trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
		res := &http.Response{Request: req}
		switch req.URL.Path {
		case beacon.GetNodeVersionPath:
		case getNodeVersionPath:
			res.StatusCode = http.StatusOK
			b := bytes.NewBuffer(nil)
			d := struct {
@@ -234,15 +275,15 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
			require.NoError(t, err)
			b.Write(encoded)
			res.Body = io.NopCloser(b)
		case beacon.GetWeakSubjectivityPath:
		case getWeakSubjectivityPath:
			res.StatusCode = http.StatusNotFound
		case beacon.RenderGetStatePath(beacon.IdHead):
		case renderGetStatePath(IdHead):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(serialized))
		case beacon.RenderGetStatePath(beacon.IdFromSlot(wSlot)):
		case renderGetStatePath(IdFromSlot(wSlot)):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
		case beacon.RenderGetBlockPath(beacon.IdFromRoot(bRoot)):
		case renderGetBlockPath(IdFromRoot(bRoot)):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
		}
@@ -250,7 +291,7 @@ func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
		return res, nil
	}}
	c, err := beacon.NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	require.NoError(t, err)
	wsPub, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
@@ -267,13 +308,13 @@ func TestGetWeakSubjectivityEpochFromHead(t *testing.T) {
	require.NoError(t, err)
	trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
		res := &http.Response{Request: req}
		if req.URL.Path == beacon.RenderGetStatePath(beacon.IdHead) {
		if req.URL.Path == renderGetStatePath(IdHead) {
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(serialized))
		}
		return res, nil
	}}
	c, err := beacon.NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	require.NoError(t, err)
	actualEpoch, err := getWeakSubjectivityEpochFromHead(context.Background(), c)
	require.NoError(t, err)
@@ -345,3 +386,96 @@ func populateValidators(cfg *params.BeaconChainConfig, st state.BeaconState, val
	}
	return st.SetBalances(balances)
}
func TestDownloadFinalizedData(t *testing.T) {
	ctx := context.Background()
	cfg := params.MainnetConfig().Copy()
	// avoid the altair zone because genesis tests are easier to set up
	epoch := cfg.AltairForkEpoch - 1

	// set up checkpoint state, using the epoch that will be computed as the ws checkpoint state based on the head state
	slot, err := slots.EpochStart(epoch)
	require.NoError(t, err)
	st, err := util.NewBeaconState()
	require.NoError(t, err)
	fork, err := forkForEpoch(cfg, epoch)
	require.NoError(t, err)
	require.NoError(t, st.SetFork(fork))
	require.NoError(t, st.SetSlot(slot))

	// set up checkpoint block
	b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
	require.NoError(t, err)
	b, err = blocktest.SetBlockParentRoot(b, cfg.ZeroHash)
	require.NoError(t, err)
	b, err = blocktest.SetBlockSlot(b, slot)
	require.NoError(t, err)
	b, err = blocktest.SetProposerIndex(b, 0)
	require.NoError(t, err)

	// set up state header pointing at checkpoint block - this is how the block is downloaded by root
	header, err := b.Header()
	require.NoError(t, err)
	require.NoError(t, st.SetLatestBlockHeader(header.Header))

	// order of operations can be confusing here:
	// - when computing the state root, make sure block header is complete, EXCEPT the state root should be zero-value
	// - before computing the block root (to match the request route), the block should include the state root
	//   *computed from the state with a header that does not have a state root set yet*
	sr, err := st.HashTreeRoot(ctx)
	require.NoError(t, err)
	b, err = blocktest.SetBlockStateRoot(b, sr)
	require.NoError(t, err)
	mb, err := b.MarshalSSZ()
	require.NoError(t, err)
	br, err := b.Block().HashTreeRoot()
	require.NoError(t, err)
	ms, err := st.MarshalSSZ()
	require.NoError(t, err)

	trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
		res := &http.Response{Request: req}
		switch req.URL.Path {
		case renderGetStatePath(IdFinalized):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(ms))
		case renderGetBlockPath(IdFromSlot(b.Block().Slot())):
			res.StatusCode = http.StatusOK
			res.Body = io.NopCloser(bytes.NewBuffer(mb))
		default:
			res.StatusCode = http.StatusInternalServerError
			res.Body = io.NopCloser(bytes.NewBufferString(""))
		}
		return res, nil
	}}
	c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
	require.NoError(t, err)

	// sanity check before we go through checkpoint
	// make sure we can download the state and unmarshal it with the VersionedUnmarshaler
	sb, err := c.GetState(ctx, IdFinalized)
	require.NoError(t, err)
	require.Equal(t, true, bytes.Equal(sb, ms))
	vu, err := detect.FromState(sb)
	require.NoError(t, err)
	us, err := vu.UnmarshalBeaconState(sb)
	require.NoError(t, err)
	ushtr, err := us.HashTreeRoot(ctx)
	require.NoError(t, err)
	require.Equal(t, sr, ushtr)

	expected := &OriginData{
		sb: ms,
		bb: mb,
		br: br,
		sr: sr,
	}
	od, err := DownloadFinalizedData(ctx, c)
	require.NoError(t, err)
	require.Equal(t, true, bytes.Equal(expected.sb, od.sb))
	require.Equal(t, true, bytes.Equal(expected.bb, od.bb))
	require.Equal(t, expected.br, od.br)
	require.Equal(t, expected.sr, od.sr)
}

View File

@@ -29,13 +29,12 @@ const (
	getSignedBlockPath       = "/eth/v2/beacon/blocks"
	getBlockRootPath         = "/eth/v1/beacon/blocks/{{.Id}}/root"
	getForkForStatePath      = "/eth/v1/beacon/states/{{.Id}}/fork"
	getWeakSubjectivityPath  = "/prysm/v1/beacon/weak_subjectivity"
	getForkSchedulePath      = "/eth/v1/config/fork_schedule"
	getConfigSpecPath        = "/eth/v1/config/spec"
	getStatePath             = "/eth/v2/debug/beacon/states"
	getNodeVersionPath       = "/eth/v1/node/version"
	changeBLStoExecutionPath = "/eth/v1/beacon/pool/bls_to_execution_changes"
	GetNodeVersionPath       = "/eth/v1/node/version"
	GetWeakSubjectivityPath  = "/prysm/v1/beacon/weak_subjectivity"
)
// StateOrBlockId represents the block_id / state_id parameters that several of the Eth Beacon API methods accept.
@@ -81,8 +80,7 @@ func idTemplate(ts string) func(StateOrBlockId) string {
	return f
}

// RenderGetBlockPath formats a block id into a path for the GetBlock API endpoint.
func RenderGetBlockPath(id StateOrBlockId) string {
func renderGetBlockPath(id StateOrBlockId) string {
	return path.Join(getSignedBlockPath, string(id))
}
@@ -106,7 +104,7 @@ func NewClient(host string, opts ...client.ClientOpt) (*Client, error) {
// for the named identifiers.
// The return value contains the ssz-encoded bytes.
func (c *Client) GetBlock(ctx context.Context, blockId StateOrBlockId) ([]byte, error) {
	blockPath := RenderGetBlockPath(blockId)
	blockPath := renderGetBlockPath(blockId)
	b, err := c.Get(ctx, blockPath, client.WithSSZEncoding())
	if err != nil {
		return nil, errors.Wrapf(err, "error requesting state by id = %s", blockId)
@@ -197,10 +195,6 @@ type NodeVersion struct {
	systemInfo string
}

func (nv *NodeVersion) SetImplementation(impl string) {
	nv.implementation = impl
}

var versionRE = regexp.MustCompile(`^(\w+)/(v\d+\.\d+\.\d+[-a-zA-Z0-9]*)\s*/?(.*)$`)
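
// Illustrative: per versionRE above, a version string such as "Prysm/v2.0.5 (linux amd64)"
// parses into implementation "Prysm", semver "v2.0.5", and systemInfo "(linux amd64)".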
func parseNodeVersion(v string) (*NodeVersion, error) {
@@ -218,7 +212,7 @@ func parseNodeVersion(v string) (*NodeVersion, error) {
// GetNodeVersion requests that the beacon node identify information about its implementation in a format
// similar to a HTTP User-Agent field. ex: Lighthouse/v0.1.5 (Linux x86_64)
func (c *Client) GetNodeVersion(ctx context.Context) (*NodeVersion, error) {
	b, err := c.Get(ctx, GetNodeVersionPath)
	b, err := c.Get(ctx, getNodeVersionPath)
	if err != nil {
		return nil, errors.Wrap(err, "error requesting node version")
	}
@@ -234,8 +228,7 @@ func (c *Client) GetNodeVersion(ctx context.Context) (*NodeVersion, error) {
	return parseNodeVersion(d.Data.Version)
}

// RenderGetStatePath formats a state id into a path for the GetState API endpoint.
func RenderGetStatePath(id StateOrBlockId) string {
func renderGetStatePath(id StateOrBlockId) string {
	return path.Join(getStatePath, string(id))
}
@@ -253,29 +246,13 @@ func (c *Client) GetState(ctx context.Context, stateId StateOrBlockId) ([]byte,
	return b, nil
}

// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + ReadOnlySignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint beacon node flag to be used for validation.
type WeakSubjectivityData struct {
	BlockRoot [32]byte
	StateRoot [32]byte
	Epoch     primitives.Epoch
}

// CheckpointString returns the standard string representation of a Checkpoint.
// The format is a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (wsd *WeakSubjectivityData) CheckpointString() string {
	return fmt.Sprintf("%#x:%d", wsd.BlockRoot, wsd.Epoch)
}

// GetWeakSubjectivity calls a proposed API endpoint that is unique to prysm
// This api method does the following:
// - computes weak subjectivity epoch
// - finds the highest non-skipped block preceding the epoch
// - returns the htr of the found block and returns this + the value of state_root from the block
func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData, error) {
	body, err := c.Get(ctx, GetWeakSubjectivityPath)
	body, err := c.Get(ctx, getWeakSubjectivityPath)
	if err != nil {
		return nil, err
	}

View File

@@ -97,31 +97,31 @@ func TestValidHostname(t *testing.T) {
{
name: "hostname with port",
hostArg: "mydomain.org:3500",
path: GetNodeVersionPath,
path: getNodeVersionPath,
joined: "http://mydomain.org:3500/eth/v1/node/version",
},
{
name: "https scheme, hostname with port",
hostArg: "https://mydomain.org:3500",
path: GetNodeVersionPath,
path: getNodeVersionPath,
joined: "https://mydomain.org:3500/eth/v1/node/version",
},
{
name: "http scheme, hostname without port",
hostArg: "http://mydomain.org",
path: GetNodeVersionPath,
path: getNodeVersionPath,
joined: "http://mydomain.org/eth/v1/node/version",
},
{
name: "http scheme, trailing slash, hostname without port",
hostArg: "http://mydomain.org/",
path: GetNodeVersionPath,
path: getNodeVersionPath,
joined: "http://mydomain.org/eth/v1/node/version",
},
{
name: "http scheme, hostname with basic auth creds and no port",
hostArg: "http://username:pass@mydomain.org/",
path: GetNodeVersionPath,
path: getNodeVersionPath,
joined: "http://username:pass@mydomain.org/eth/v1/node/version",
},
}
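The expected values in this table reduce to joining a normalized base URL with the endpoint path. A stdlib-only sketch of that behavior (an assumption about the mechanism, not the client's actual code; URL.JoinPath requires Go 1.19+):

package main

import (
	"fmt"
	"log"
	"net/url"
)

func main() {
	const getNodeVersionPath = "/eth/v1/node/version"
	for _, host := range []string{
		"http://mydomain.org",
		"http://mydomain.org/",
		"https://mydomain.org:3500",
	} {
		u, err := url.Parse(host)
		if err != nil {
			log.Fatal(err)
		}
		// JoinPath cleans duplicate slashes, so a trailing slash on the
		// host does not produce "//eth/v1/node/version".
		fmt.Println(u.JoinPath(getNodeVersionPath).String())
	}
}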

View File

@@ -33,7 +33,6 @@ go_library(
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -47,7 +46,6 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -25,7 +25,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
const (
@@ -109,7 +108,7 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
return nil, err
}
c := &Client{
hc: &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)},
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
@@ -155,10 +154,6 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
if err != nil {
return
}
if method == http.MethodPost {
req.Header.Set("Content-Type", api.JsonMediaType)
}
req.Header.Set("Accept", api.JsonMediaType)
req.Header.Add("User-Agent", version.BuildData())
for _, o := range opts {
o(req)
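For reference, the pattern these lines concern — always sending Accept: application/json, and Content-Type only on requests that carry a body — in a standalone sketch (the literal media type stands in for api.JsonMediaType):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func newJSONRequest(method, url, body string) (*http.Request, error) {
	req, err := http.NewRequest(method, url, strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	// Only requests that carry a body need a Content-Type.
	if method == http.MethodPost {
		req.Header.Set("Content-Type", "application/json")
	}
	req.Header.Set("Accept", "application/json")
	return req, nil
}

func main() {
	get, _ := newJSONRequest(http.MethodGet, "http://example.invalid/eth/v1/node/version", "")
	post, _ := newJSONRequest(http.MethodPost, "http://example.invalid/eth/v1/builder/validators", `{}`)
	fmt.Println("GET Content-Type:", get.Header.Get("Content-Type"))   // empty
	fmt.Println("POST Content-Type:", post.Header.Get("Content-Type")) // application/json
}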

View File

@@ -12,7 +12,6 @@ import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -90,8 +89,6 @@ func TestClient_RegisterValidator(t *testing.T) {
expectedPath := "/eth/v1/builder/validators"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
body, err := io.ReadAll(r.Body)
defer func() {
require.NoError(t, r.Body.Close())
@@ -367,8 +364,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
@@ -395,8 +392,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "capella", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadCapella)),
@@ -426,8 +423,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
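These tests stub the HTTP transport with a roundtrip helper defined elsewhere in the package. A plausible definition (an assumption, since the helper itself is outside this diff) is a function type satisfying http.RoundTripper:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// roundtrip adapts a plain function into an http.RoundTripper, so tests can
// intercept requests without a real server.
type roundtrip func(*http.Request) (*http.Response, error)

func (fn roundtrip) RoundTrip(r *http.Request) (*http.Response, error) { return fn(r) }

func main() {
	hc := &http.Client{
		Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
			return &http.Response{
				StatusCode: http.StatusOK,
				Body:       io.NopCloser(bytes.NewBufferString(`{"ok":true}`)),
			}, nil
		}),
	}
	resp, err := hc.Get("http://unused.invalid/eth/v1/builder/validators")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(b))
}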

View File

@@ -4,11 +4,9 @@ go_library(
name = "go_default_library",
srcs = [
"block.go",
"block_execution.go",
"conversions.go",
"conversions_blob.go",
"conversions_block.go",
"conversions_block_execution.go",
"conversions_lightclient.go",
"conversions_state.go",
"endpoints_beacon.go",
@@ -30,7 +28,6 @@ go_library(
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
@@ -49,16 +46,10 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"conversions_block_execution_test.go",
"conversions_test.go",
],
srcs = ["conversions_test.go"],
embed = [":go_default_library"],
deps = [
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
],
)

View File

@@ -186,6 +186,40 @@ type BlindedBeaconBlockBodyBellatrix struct {
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header"`
}
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
}
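Note the convention carried through all of these structs: quantities are decimal strings and byte values are 0x-prefixed hex strings in JSON. A trimmed, self-contained decoding sketch (a stand-in struct with only three of the fields):

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// payload mirrors three ExecutionPayload fields to show the wire conventions.
type payload struct {
	BlockNumber  string   `json:"block_number"`
	BlockHash    string   `json:"block_hash"`
	Transactions []string `json:"transactions"`
}

func main() {
	raw := []byte(`{"block_number":"12345","block_hash":"0xabc123","transactions":["0x02f870"]}`)
	var p payload
	if err := json.Unmarshal(raw, &p); err != nil {
		panic(err)
	}
	// Quantities arrive as decimal strings and are parsed on conversion.
	n, err := strconv.ParseUint(p.BlockNumber, 10, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(n, p.BlockHash, len(p.Transactions))
}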
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
@@ -264,6 +298,42 @@ type BlindedBeaconBlockBodyCapella struct {
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
}
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
@@ -356,6 +426,46 @@ type BlindedBeaconBlockBodyDeneb struct {
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
@@ -450,6 +560,14 @@ type BlindedBeaconBlockBodyElectra struct {
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type (
ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
}
)
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------
@@ -461,14 +579,14 @@ type SignedBeaconBlockContentsFulu struct {
}
type BeaconBlockContentsFulu struct {
Block *BeaconBlockElectra `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
Block *BeaconBlockFulu `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type SignedBeaconBlockFulu struct {
Message *BeaconBlockElectra `json:"message"`
Signature string `json:"signature"`
Message *BeaconBlockFulu `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockFulu{}
@@ -481,12 +599,36 @@ func (s *SignedBeaconBlockFulu) SigString() string {
return s.Signature
}
type BeaconBlockFulu struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyFulu `json:"body"`
}
type BeaconBlockBodyFulu struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadDeneb `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type BlindedBeaconBlockFulu struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyElectra `json:"body"`
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyFulu `json:"body"`
}
type SignedBlindedBeaconBlockFulu struct {
@@ -503,3 +645,19 @@ func (s *SignedBlindedBeaconBlockFulu) MessageRawJson() ([]byte, error) {
func (s *SignedBlindedBeaconBlockFulu) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBodyFulu struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
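The var _ SignedMessageJsoner assertion above is the usual compile-time interface check. A minimal sketch of the pattern with a stand-in interface (the real SignedMessageJsoner is defined elsewhere in this package):

package main

import "fmt"

// SignedMessageJsoner is a stand-in for the interface asserted above.
type SignedMessageJsoner interface {
	MessageRawJson() ([]byte, error)
	SigString() string
}

type signedBlock struct{ Signature string }

func (s *signedBlock) MessageRawJson() ([]byte, error) { return []byte(`{}`), nil }
func (s *signedBlock) SigString() string               { return s.Signature }

// The blank assignment fails to compile if *signedBlock ever stops
// satisfying the interface, catching drift at build time.
var _ SignedMessageJsoner = (*signedBlock)(nil)

func main() {
	var j SignedMessageJsoner = &signedBlock{Signature: "0xsig"}
	fmt.Println(j.SigString())
}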

View File

@@ -1,157 +0,0 @@
package structs
// ----------------------------------------------------------------------------
// Bellatrix
// ----------------------------------------------------------------------------
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
type ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
}
type DepositRequest struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Index string `json:"index"`
}
type WithdrawalRequest struct {
SourceAddress string `json:"source_address"`
ValidatorPubkey string `json:"validator_pubkey"`
Amount string `json:"amount"`
}
type ConsolidationRequest struct {
SourceAddress string `json:"source_address"`
SourcePubkey string `json:"source_pubkey"`
TargetPubkey string `json:"target_pubkey"`
}
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------

View File

@@ -9,7 +9,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v5/container/slice"
@@ -52,9 +51,6 @@ func HistoricalSummaryFromConsensus(s *eth.HistoricalSummary) *HistoricalSummary
}
func (s *SignedBLSToExecutionChange) ToConsensus() (*eth.SignedBLSToExecutionChange, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
change, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -106,17 +102,14 @@ func SignedBLSChangeFromConsensus(ch *eth.SignedBLSToExecutionChange) *SignedBLS
func SignedBLSChangesToConsensus(src []*SignedBLSToExecutionChange) ([]*eth.SignedBLSToExecutionChange, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "SignedBLSToExecutionChanges")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, server.NewDecodeError(err, "SignedBLSToExecutionChanges")
return nil, err
}
changes := make([]*eth.SignedBLSToExecutionChange, len(src))
for i, ch := range src {
if ch == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
changes[i], err = ch.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
@@ -162,9 +155,6 @@ func ForkFromConsensus(f *eth.Fork) *Fork {
}
func (s *SignedValidatorRegistration) ToConsensus() (*eth.SignedValidatorRegistrationV1, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -221,9 +211,6 @@ func SignedValidatorRegistrationFromConsensus(vr *eth.SignedValidatorRegistratio
}
func (s *SignedContributionAndProof) ToConsensus() (*eth.SignedContributionAndProof, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -248,9 +235,6 @@ func SignedContributionAndProofFromConsensus(c *eth.SignedContributionAndProof)
}
func (c *ContributionAndProof) ToConsensus() (*eth.ContributionAndProof, error) {
if c.Contribution == nil {
return nil, server.NewDecodeError(errNilValue, "Contribution")
}
contribution, err := c.Contribution.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Contribution")
@@ -259,7 +243,7 @@ func (c *ContributionAndProof) ToConsensus() (*eth.ContributionAndProof, error)
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
selectionProof, err := bytesutil.DecodeHexWithLength(c.SelectionProof, fieldparams.BLSSignatureLength)
selectionProof, err := bytesutil.DecodeHexWithLength(c.SelectionProof, 96)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
@@ -322,9 +306,6 @@ func SyncCommitteeContributionFromConsensus(c *eth.SyncCommitteeContribution) *S
}
func (s *SignedAggregateAttestationAndProof) ToConsensus() (*eth.SignedAggregateAttestationAndProof, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -345,14 +326,11 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
if a.Aggregate == nil {
return nil, server.NewDecodeError(errNilValue, "Aggregate")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
}
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, fieldparams.BLSSignatureLength)
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, 96)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
@@ -364,9 +342,6 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
}
func (s *SignedAggregateAttestationAndProofElectra) ToConsensus() (*eth.SignedAggregateAttestationAndProofElectra, error) {
if s.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
}
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -387,14 +362,11 @@ func (a *AggregateAttestationAndProofElectra) ToConsensus() (*eth.AggregateAttes
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
if a.Aggregate == nil {
return nil, server.NewDecodeError(errNilValue, "Aggregate")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
}
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, fieldparams.BLSSignatureLength)
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, 96)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
@@ -410,9 +382,6 @@ func (a *Attestation) ToConsensus() (*eth.Attestation, error) {
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -442,9 +411,6 @@ func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -466,15 +432,6 @@ func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
}, nil
}
func SingleAttFromConsensus(a *eth.SingleAttestation) *SingleAttestation {
return &SingleAttestation{
CommitteeIndex: fmt.Sprintf("%d", a.CommitteeId),
AttesterIndex: fmt.Sprintf("%d", a.AttesterIndex),
Data: AttDataFromConsensus(a.Data),
Signature: hexutil.Encode(a.Signature),
}
}
func (a *SingleAttestation) ToConsensus() (*eth.SingleAttestation, error) {
ci, err := strconv.ParseUint(a.CommitteeIndex, 10, 64)
if err != nil {
@@ -484,9 +441,6 @@ func (a *SingleAttestation) ToConsensus() (*eth.SingleAttestation, error) {
if err != nil {
return nil, server.NewDecodeError(err, "AttesterIndex")
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -526,16 +480,10 @@ func (a *AttestationData) ToConsensus() (*eth.AttestationData, error) {
if err != nil {
return nil, server.NewDecodeError(err, "BeaconBlockRoot")
}
if a.Source == nil {
return nil, server.NewDecodeError(errNilValue, "Source")
}
source, err := a.Source.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Source")
}
if a.Target == nil {
return nil, server.NewDecodeError(errNilValue, "Target")
}
target, err := a.Target.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Target")
@@ -635,17 +583,15 @@ func (b *BeaconCommitteeSubscription) ToConsensus() (*validator.BeaconCommitteeS
}
func (e *SignedVoluntaryExit) ToConsensus() (*eth.SignedVoluntaryExit, error) {
if e.Message == nil {
return nil, server.NewDecodeError(errNilValue, "Message")
sig, err := bytesutil.DecodeHexWithLength(e.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
exit, err := e.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
sig, err := bytesutil.DecodeHexWithLength(e.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SignedVoluntaryExit{
Exit: exit,
Signature: sig,
@@ -748,16 +694,10 @@ func Eth1DataFromConsensus(e1d *eth.Eth1Data) *Eth1Data {
}
func (s *ProposerSlashing) ToConsensus() (*eth.ProposerSlashing, error) {
if s.SignedHeader1 == nil {
return nil, server.NewDecodeError(errNilValue, "SignedHeader1")
}
h1, err := s.SignedHeader1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "SignedHeader1")
}
if s.SignedHeader2 == nil {
return nil, server.NewDecodeError(errNilValue, "SignedHeader2")
}
h2, err := s.SignedHeader2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "SignedHeader2")
@@ -770,16 +710,10 @@ func (s *ProposerSlashing) ToConsensus() (*eth.ProposerSlashing, error) {
}
func (s *AttesterSlashing) ToConsensus() (*eth.AttesterSlashing, error) {
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation1")
}
att1, err := s.Attestation1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation1")
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation2")
}
att2, err := s.Attestation2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation2")
@@ -788,16 +722,10 @@ func (s *AttesterSlashing) ToConsensus() (*eth.AttesterSlashing, error) {
}
func (s *AttesterSlashingElectra) ToConsensus() (*eth.AttesterSlashingElectra, error) {
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation1")
}
att1, err := s.Attestation1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation1")
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, "Attestation2")
}
att2, err := s.Attestation2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation2")
@@ -806,10 +734,6 @@ func (s *AttesterSlashingElectra) ToConsensus() (*eth.AttesterSlashingElectra, e
}
func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
if err := slice.VerifyMaxLength(a.AttestingIndices, params.BeaconConfig().MaxValidatorsPerCommittee); err != nil {
return nil, err
}
indices := make([]uint64, len(a.AttestingIndices))
var err error
for i, ix := range a.AttestingIndices {
@@ -818,9 +742,6 @@ func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
return nil, server.NewDecodeError(err, fmt.Sprintf("AttestingIndices[%d]", i))
}
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -838,13 +759,6 @@ func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
}
func (a *IndexedAttestationElectra) ToConsensus() (*eth.IndexedAttestationElectra, error) {
if err := slice.VerifyMaxLength(
a.AttestingIndices,
params.BeaconConfig().MaxValidatorsPerCommittee*params.BeaconConfig().MaxCommitteesPerSlot,
); err != nil {
return nil, err
}
indices := make([]uint64, len(a.AttestingIndices))
var err error
for i, ix := range a.AttestingIndices {
@@ -853,9 +767,6 @@ func (a *IndexedAttestationElectra) ToConsensus() (*eth.IndexedAttestationElectr
return nil, server.NewDecodeError(err, fmt.Sprintf("AttestingIndices[%d]", i))
}
}
if a.Data == nil {
return nil, server.NewDecodeError(errNilValue, "Data")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
@@ -889,13 +800,133 @@ func WithdrawalFromConsensus(w *enginev1.Withdrawal) *Withdrawal {
}
}
func WithdrawalRequestsFromConsensus(ws []*enginev1.WithdrawalRequest) []*WithdrawalRequest {
result := make([]*WithdrawalRequest, len(ws))
for i, w := range ws {
result[i] = WithdrawalRequestFromConsensus(w)
}
return result
}
func WithdrawalRequestFromConsensus(w *enginev1.WithdrawalRequest) *WithdrawalRequest {
return &WithdrawalRequest{
SourceAddress: hexutil.Encode(w.SourceAddress),
ValidatorPubkey: hexutil.Encode(w.ValidatorPubkey),
Amount: fmt.Sprintf("%d", w.Amount),
}
}
func (w *WithdrawalRequest) ToConsensus() (*enginev1.WithdrawalRequest, error) {
src, err := bytesutil.DecodeHexWithLength(w.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
pubkey, err := bytesutil.DecodeHexWithLength(w.ValidatorPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "ValidatorPubkey")
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
return &enginev1.WithdrawalRequest{
SourceAddress: src,
ValidatorPubkey: pubkey,
Amount: amount,
}, nil
}
func ConsolidationRequestsFromConsensus(cs []*enginev1.ConsolidationRequest) []*ConsolidationRequest {
result := make([]*ConsolidationRequest, len(cs))
for i, c := range cs {
result[i] = ConsolidationRequestFromConsensus(c)
}
return result
}
func ConsolidationRequestFromConsensus(c *enginev1.ConsolidationRequest) *ConsolidationRequest {
return &ConsolidationRequest{
SourceAddress: hexutil.Encode(c.SourceAddress),
SourcePubkey: hexutil.Encode(c.SourcePubkey),
TargetPubkey: hexutil.Encode(c.TargetPubkey),
}
}
func (c *ConsolidationRequest) ToConsensus() (*enginev1.ConsolidationRequest, error) {
srcAddress, err := bytesutil.DecodeHexWithLength(c.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
srcPubkey, err := bytesutil.DecodeHexWithLength(c.SourcePubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourcePubkey")
}
targetPubkey, err := bytesutil.DecodeHexWithLength(c.TargetPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "TargetPubkey")
}
return &enginev1.ConsolidationRequest{
SourceAddress: srcAddress,
SourcePubkey: srcPubkey,
TargetPubkey: targetPubkey,
}, nil
}
func DepositRequestsFromConsensus(ds []*enginev1.DepositRequest) []*DepositRequest {
result := make([]*DepositRequest, len(ds))
for i, d := range ds {
result[i] = DepositRequestFromConsensus(d)
}
return result
}
func DepositRequestFromConsensus(d *enginev1.DepositRequest) *DepositRequest {
return &DepositRequest{
Pubkey: hexutil.Encode(d.Pubkey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Index: fmt.Sprintf("%d", d.Index),
}
}
func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
pubkey, err := bytesutil.DecodeHexWithLength(d.Pubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "Pubkey")
}
withdrawalCredentials, err := bytesutil.DecodeHexWithLength(d.WithdrawalCredentials, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "WithdrawalCredentials")
}
amount, err := strconv.ParseUint(d.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
sig, err := bytesutil.DecodeHexWithLength(d.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
index, err := strconv.ParseUint(d.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
return &enginev1.DepositRequest{
Pubkey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: sig,
Index: index,
}, nil
}
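These converters rely on bytesutil.DecodeHexWithLength to enforce exact byte lengths (20 for addresses, 48 for BLS pubkeys, 96 for signatures). A stdlib-only approximation of that validation pattern (a sketch, not the helper's actual source):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeHexWithLength decodes a 0x-prefixed hex string and rejects any
// value that is not exactly want bytes long.
func decodeHexWithLength(s string, want int) ([]byte, error) {
	b, err := hex.DecodeString(strings.TrimPrefix(s, "0x"))
	if err != nil {
		return nil, err
	}
	if len(b) != want {
		return nil, fmt.Errorf("expected %d bytes, got %d", want, len(b))
	}
	return b, nil
}

func main() {
	addr, err := decodeHexWithLength("0x"+strings.Repeat("ab", 20), 20)
	fmt.Println(len(addr), err) // 20 <nil>
	_, err = decodeHexWithLength("0xabcd", 20)
	fmt.Println(err) // length error
}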
func ProposerSlashingsToConsensus(src []*ProposerSlashing) ([]*eth.ProposerSlashing, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "ProposerSlashings")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, server.NewDecodeError(err, "ProposerSlashings")
return nil, err
}
proposerSlashings := make([]*eth.ProposerSlashing, len(src))
for i, s := range src {
@@ -1024,11 +1055,11 @@ func ProposerSlashingFromConsensus(src *eth.ProposerSlashing) *ProposerSlashing
func AttesterSlashingsToConsensus(src []*AttesterSlashing) ([]*eth.AttesterSlashing, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "AttesterSlashings")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 2)
if err != nil {
return nil, server.NewDecodeError(err, "AttesterSlashings")
return nil, err
}
attesterSlashings := make([]*eth.AttesterSlashing, len(src))
@@ -1039,19 +1070,10 @@ func AttesterSlashingsToConsensus(src []*AttesterSlashing) ([]*eth.AttesterSlash
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1", i))
}
if s.Attestation1.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1.Data", i))
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2", i))
}
if s.Attestation2.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2.Data", i))
}
a1Sig, err := bytesutil.DecodeHexWithLength(s.Attestation1.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
@@ -1068,7 +1090,6 @@ func AttesterSlashingsToConsensus(src []*AttesterSlashing) ([]*eth.AttesterSlash
}
a1AttestingIndices[j] = attestingIndex
}
a1Data, err := s.Attestation1.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Data", i))
@@ -1166,11 +1187,11 @@ func AttesterSlashingFromConsensus(src *eth.AttesterSlashing) *AttesterSlashing
func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth.AttesterSlashingElectra, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "AttesterSlashingsElectra")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, fieldparams.MaxAttesterSlashingsElectra)
err := slice.VerifyMaxLength(src, 2)
if err != nil {
return nil, server.NewDecodeError(err, "AttesterSlashingsElectra")
return nil, err
}
attesterSlashings := make([]*eth.AttesterSlashingElectra, len(src))
@@ -1178,28 +1199,18 @@ func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth
if s == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1", i))
}
if s.Attestation1.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1.Data", i))
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2", i))
}
if s.Attestation2.Data == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2.Data", i))
}
a1Sig, err := bytesutil.DecodeHexWithLength(s.Attestation1.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation1.AttestingIndices, params.BeaconConfig().MaxValidatorsPerCommittee*params.BeaconConfig().MaxCommitteesPerSlot)
err = slice.VerifyMaxLength(s.Attestation1.AttestingIndices, 2048)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.AttestingIndices", i))
}
@@ -1219,7 +1230,7 @@ func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation2.AttestingIndices, params.BeaconConfig().MaxValidatorsPerCommittee*params.BeaconConfig().MaxCommitteesPerSlot)
err = slice.VerifyMaxLength(s.Attestation2.AttestingIndices, 2048)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.AttestingIndices", i))
}
@@ -1308,18 +1319,15 @@ func AttesterSlashingElectraFromConsensus(src *eth.AttesterSlashingElectra) *Att
func AttsToConsensus(src []*Attestation) ([]*eth.Attestation, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "Attestations")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 128)
if err != nil {
return nil, server.NewDecodeError(err, "Attestations")
return nil, err
}
atts := make([]*eth.Attestation, len(src))
for i, a := range src {
if a == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
atts[i], err = a.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
@@ -1338,18 +1346,15 @@ func AttsFromConsensus(src []*eth.Attestation) []*Attestation {
func AttsElectraToConsensus(src []*AttestationElectra) ([]*eth.AttestationElectra, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "AttestationsElectra")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 8)
if err != nil {
return nil, server.NewDecodeError(err, "AttestationsElectra")
return nil, err
}
atts := make([]*eth.AttestationElectra, len(src))
for i, a := range src {
if a == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
atts[i], err = a.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
@@ -1368,11 +1373,11 @@ func AttsElectraFromConsensus(src []*eth.AttestationElectra) []*AttestationElect
func DepositsToConsensus(src []*Deposit) ([]*eth.Deposit, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "Deposits")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, server.NewDecodeError(err, "Deposits")
return nil, err
}
deposits := make([]*eth.Deposit, len(src))
@@ -1444,18 +1449,15 @@ func DepositsFromConsensus(src []*eth.Deposit) []*Deposit {
func SignedExitsToConsensus(src []*SignedVoluntaryExit) ([]*eth.SignedVoluntaryExit, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "SignedVoluntaryExits")
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 16)
if err != nil {
return nil, server.NewDecodeError(err, "SignedVoluntaryExits")
return nil, err
}
exits := make([]*eth.SignedVoluntaryExit, len(src))
for i, e := range src {
if e == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
exits[i], err = e.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))

File diff suppressed because it is too large

View File

@@ -1,973 +0,0 @@
package structs
import (
"fmt"
"strconv"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/slice"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ----------------------------------------------------------------------------
// Bellatrix
// ----------------------------------------------------------------------------
func ExecutionPayloadFromConsensus(payload *enginev1.ExecutionPayload) (*ExecutionPayload, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
transactions := make([]string, len(payload.Transactions))
for i, tx := range payload.Transactions {
transactions[i] = hexutil.Encode(tx)
}
return &ExecutionPayload{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
Transactions: transactions,
}, nil
}
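sszBytesToUint256String renders the little-endian SSZ uint256 bytes of BaseFeePerGas as a decimal string. A sketch of the likely mechanics (byte reversal plus math/big — an assumption about the helper, which may also validate length):

package main

import (
	"fmt"
	"math/big"
)

// sszBytesToUint256String interprets b as a little-endian unsigned integer
// (the SSZ encoding of uint256) and renders it in decimal.
func sszBytesToUint256String(b []byte) string {
	be := make([]byte, len(b))
	for i, x := range b {
		be[len(b)-1-i] = x // reverse into big-endian for big.Int
	}
	return new(big.Int).SetBytes(be).String()
}

func main() {
	// 1 gwei (1_000_000_000 = 0x3b9aca00) little-endian in a 32-byte word.
	v := make([]byte, 32)
	v[0], v[1], v[2], v[3] = 0x00, 0xca, 0x9a, 0x3b
	fmt.Println(sszBytesToUint256String(v)) // 1000000000
}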
func (e *ExecutionPayload) ToConsensus() (*enginev1.ExecutionPayload, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayload")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(e.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Transactions")
}
payloadTxs := make([][]byte, len(e.Transactions))
for i, tx := range e.Transactions {
payloadTxs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Transactions[%d]", i))
}
}
return &enginev1.ExecutionPayload{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: payloadTxs,
}, nil
}
func ExecutionPayloadHeaderFromConsensus(payload *enginev1.ExecutionPayloadHeader) (*ExecutionPayloadHeader, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
return &ExecutionPayloadHeader{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
}, nil
}
func (e *ExecutionPayloadHeader) ToConsensus() (*enginev1.ExecutionPayloadHeader, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadHeader")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockHash")
}
payloadTxsRoot, err := bytesutil.DecodeHexWithLength(e.TransactionsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.TransactionsRoot")
}
return &enginev1.ExecutionPayloadHeader{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
TransactionsRoot: payloadTxsRoot,
}, nil
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
func ExecutionPayloadCapellaFromConsensus(payload *enginev1.ExecutionPayloadCapella) (*ExecutionPayloadCapella, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
transactions := make([]string, len(payload.Transactions))
for i, tx := range payload.Transactions {
transactions[i] = hexutil.Encode(tx)
}
return &ExecutionPayloadCapella{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
Transactions: transactions,
Withdrawals: WithdrawalsFromConsensus(payload.Withdrawals),
}, nil
}
func (e *ExecutionPayloadCapella) ToConsensus() (*enginev1.ExecutionPayloadCapella, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayload")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(e.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Transactions")
}
payloadTxs := make([][]byte, len(e.Transactions))
for i, tx := range e.Transactions {
payloadTxs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Transactions[%d]", i))
}
}
err = slice.VerifyMaxLength(e.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Withdrawals")
}
withdrawals := make([]*enginev1.Withdrawal, len(e.Withdrawals))
for i, w := range e.Withdrawals {
withdrawalIndex, err := strconv.ParseUint(w.WithdrawalIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].WithdrawalIndex", i))
}
validatorIndex, err := strconv.ParseUint(w.ValidatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ValidatorIndex", i))
}
address, err := bytesutil.DecodeHexWithLength(w.ExecutionAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ExecutionAddress", i))
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].Amount", i))
}
withdrawals[i] = &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(validatorIndex),
Address: address,
Amount: amount,
}
}
return &enginev1.ExecutionPayloadCapella{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: payloadTxs,
Withdrawals: withdrawals,
}, nil
}
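// A minimal round-trip sketch for the Capella pair above, assuming it lives in
// this package; the field values are illustrative only, and strings comes from
// the standard library. ToConsensus validates and decodes every hex/decimal
// string field, and the FromConsensus counterpart re-encodes them:
func exampleCapellaRoundTrip() (*ExecutionPayloadCapella, error) {
	h := func(n int, b string) string { return "0x" + strings.Repeat(b, n) }
	p := &ExecutionPayloadCapella{
		ParentHash:    h(32, "aa"),
		FeeRecipient:  h(20, "bb"),
		StateRoot:     h(32, "cc"),
		ReceiptsRoot:  h(32, "dd"),
		LogsBloom:     h(256, "ee"),
		PrevRandao:    h(32, "ff"),
		BlockNumber:   "12345",
		GasLimit:      "30000000",
		GasUsed:       "21000",
		Timestamp:     "1680000000",
		ExtraData:     "0x",
		BaseFeePerGas: "1000000000",
		BlockHash:     h(32, "11"),
		Transactions:  []string{},
		Withdrawals:   []*Withdrawal{},
	}
	consensus, err := p.ToConsensus() // any failure is a DecodeError naming the bad field
	if err != nil {
		return nil, err
	}
	return ExecutionPayloadCapellaFromConsensus(consensus) // re-encodes to strings
}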
func ExecutionPayloadHeaderCapellaFromConsensus(payload *enginev1.ExecutionPayloadHeaderCapella) (*ExecutionPayloadHeaderCapella, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
return &ExecutionPayloadHeaderCapella{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
WithdrawalsRoot: hexutil.Encode(payload.WithdrawalsRoot),
}, nil
}
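// The BaseFeePerGas helpers used throughout this file (sszBytesToUint256String
// and bytesutil.Uint256ToSSZBytes) convert between a decimal string and the
// 32-byte little-endian SSZ encoding of a uint256. A self-contained sketch of
// the encoding direction, for reference only (assumes math/big and fmt; the
// real helpers live elsewhere in the codebase):
func uint256DecimalToSSZ(dec string) ([]byte, error) {
	v, ok := new(big.Int).SetString(dec, 10)
	if !ok || v.Sign() < 0 {
		return nil, fmt.Errorf("not an unsigned decimal: %q", dec)
	}
	if v.BitLen() > 256 {
		return nil, fmt.Errorf("overflows uint256: %q", dec)
	}
	b := v.FillBytes(make([]byte, 32)) // big-endian, zero-padded to 32 bytes
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i] // SSZ stores fixed-size integers little-endian
	}
	return b, nil
}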
func (e *ExecutionPayloadHeaderCapella) ToConsensus() (*enginev1.ExecutionPayloadHeaderCapella, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadHeader")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockHash")
}
payloadTxsRoot, err := bytesutil.DecodeHexWithLength(e.TransactionsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.TransactionsRoot")
}
payloadWithdrawalsRoot, err := bytesutil.DecodeHexWithLength(e.WithdrawalsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.WithdrawalsRoot")
}
return &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
TransactionsRoot: payloadTxsRoot,
WithdrawalsRoot: payloadWithdrawalsRoot,
}, nil
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
func ExecutionPayloadDenebFromConsensus(payload *enginev1.ExecutionPayloadDeneb) (*ExecutionPayloadDeneb, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
transactions := make([]string, len(payload.Transactions))
for i, tx := range payload.Transactions {
transactions[i] = hexutil.Encode(tx)
}
return &ExecutionPayloadDeneb{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
Transactions: transactions,
Withdrawals: WithdrawalsFromConsensus(payload.Withdrawals),
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
}, nil
}
func (e *ExecutionPayloadDeneb) ToConsensus() (*enginev1.ExecutionPayloadDeneb, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayload")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(e.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Transactions")
}
txs := make([][]byte, len(e.Transactions))
for i, tx := range e.Transactions {
txs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Transactions[%d]", i))
}
}
err = slice.VerifyMaxLength(e.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Withdrawals")
}
withdrawals := make([]*enginev1.Withdrawal, len(e.Withdrawals))
for i, w := range e.Withdrawals {
withdrawalIndex, err := strconv.ParseUint(w.WithdrawalIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].WithdrawalIndex", i))
}
validatorIndex, err := strconv.ParseUint(w.ValidatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ValidatorIndex", i))
}
address, err := bytesutil.DecodeHexWithLength(w.ExecutionAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ExecutionAddress", i))
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].Amount", i))
}
withdrawals[i] = &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(validatorIndex),
Address: address,
Amount: amount,
}
}
payloadBlobGasUsed, err := strconv.ParseUint(e.BlobGasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlobGasUsed")
}
payloadExcessBlobGas, err := strconv.ParseUint(e.ExcessBlobGas, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExcessBlobGas")
}
return &enginev1.ExecutionPayloadDeneb{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: txs,
Withdrawals: withdrawals,
BlobGasUsed: payloadBlobGasUsed,
ExcessBlobGas: payloadExcessBlobGas,
}, nil
}
func ExecutionPayloadHeaderDenebFromConsensus(payload *enginev1.ExecutionPayloadHeaderDeneb) (*ExecutionPayloadHeaderDeneb, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
return &ExecutionPayloadHeaderDeneb{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
WithdrawalsRoot: hexutil.Encode(payload.WithdrawalsRoot),
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
}, nil
}
func (e *ExecutionPayloadHeaderDeneb) ToConsensus() (*enginev1.ExecutionPayloadHeaderDeneb, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadHeader")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockHash")
}
payloadTxsRoot, err := bytesutil.DecodeHexWithLength(e.TransactionsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.TransactionsRoot")
}
payloadWithdrawalsRoot, err := bytesutil.DecodeHexWithLength(e.WithdrawalsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.WithdrawalsRoot")
}
payloadBlobGasUsed, err := strconv.ParseUint(e.BlobGasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlobGasUsed")
}
payloadExcessBlobGas, err := strconv.ParseUint(e.ExcessBlobGas, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExcessBlobGas")
}
return &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
TransactionsRoot: payloadTxsRoot,
WithdrawalsRoot: payloadWithdrawalsRoot,
BlobGasUsed: payloadBlobGasUsed,
ExcessBlobGas: payloadExcessBlobGas,
}, nil
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
var (
ExecutionPayloadElectraFromConsensus = ExecutionPayloadDenebFromConsensus
ExecutionPayloadHeaderElectraFromConsensus = ExecutionPayloadHeaderDenebFromConsensus
)
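// Electra reuses Deneb's execution payload structure, so the JSON types and
// conversions are shared via the aliases above rather than duplicated. Callers
// still use the fork-specific names, e.g. (illustrative only):
//
//	payload, err := ExecutionPayloadElectraFromConsensus(denebShapedPayload)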
func WithdrawalRequestsFromConsensus(ws []*enginev1.WithdrawalRequest) []*WithdrawalRequest {
result := make([]*WithdrawalRequest, len(ws))
for i, w := range ws {
result[i] = WithdrawalRequestFromConsensus(w)
}
return result
}
func WithdrawalRequestFromConsensus(w *enginev1.WithdrawalRequest) *WithdrawalRequest {
return &WithdrawalRequest{
SourceAddress: hexutil.Encode(w.SourceAddress),
ValidatorPubkey: hexutil.Encode(w.ValidatorPubkey),
Amount: fmt.Sprintf("%d", w.Amount),
}
}
func (w *WithdrawalRequest) ToConsensus() (*enginev1.WithdrawalRequest, error) {
if w == nil {
return nil, server.NewDecodeError(errNilValue, "WithdrawalRequest")
}
src, err := bytesutil.DecodeHexWithLength(w.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
pubkey, err := bytesutil.DecodeHexWithLength(w.ValidatorPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "ValidatorPubkey")
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
return &enginev1.WithdrawalRequest{
SourceAddress: src,
ValidatorPubkey: pubkey,
Amount: amount,
}, nil
}
func ConsolidationRequestsFromConsensus(cs []*enginev1.ConsolidationRequest) []*ConsolidationRequest {
result := make([]*ConsolidationRequest, len(cs))
for i, c := range cs {
result[i] = ConsolidationRequestFromConsensus(c)
}
return result
}
func ConsolidationRequestFromConsensus(c *enginev1.ConsolidationRequest) *ConsolidationRequest {
return &ConsolidationRequest{
SourceAddress: hexutil.Encode(c.SourceAddress),
SourcePubkey: hexutil.Encode(c.SourcePubkey),
TargetPubkey: hexutil.Encode(c.TargetPubkey),
}
}
func (c *ConsolidationRequest) ToConsensus() (*enginev1.ConsolidationRequest, error) {
if c == nil {
return nil, server.NewDecodeError(errNilValue, "ConsolidationRequest")
}
srcAddress, err := bytesutil.DecodeHexWithLength(c.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
srcPubkey, err := bytesutil.DecodeHexWithLength(c.SourcePubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourcePubkey")
}
targetPubkey, err := bytesutil.DecodeHexWithLength(c.TargetPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "TargetPubkey")
}
return &enginev1.ConsolidationRequest{
SourceAddress: srcAddress,
SourcePubkey: srcPubkey,
TargetPubkey: targetPubkey,
}, nil
}
func DepositRequestsFromConsensus(ds []*enginev1.DepositRequest) []*DepositRequest {
result := make([]*DepositRequest, len(ds))
for i, d := range ds {
result[i] = DepositRequestFromConsensus(d)
}
return result
}
func DepositRequestFromConsensus(d *enginev1.DepositRequest) *DepositRequest {
return &DepositRequest{
Pubkey: hexutil.Encode(d.Pubkey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Index: fmt.Sprintf("%d", d.Index),
}
}
func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
if d == nil {
return nil, server.NewDecodeError(errNilValue, "DepositRequest")
}
pubkey, err := bytesutil.DecodeHexWithLength(d.Pubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "Pubkey")
}
withdrawalCredentials, err := bytesutil.DecodeHexWithLength(d.WithdrawalCredentials, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "WithdrawalCredentials")
}
amount, err := strconv.ParseUint(d.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
sig, err := bytesutil.DecodeHexWithLength(d.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
index, err := strconv.ParseUint(d.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
return &enginev1.DepositRequest{
Pubkey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: sig,
Index: index,
}, nil
}
func ExecutionRequestsFromConsensus(er *enginev1.ExecutionRequests) *ExecutionRequests {
return &ExecutionRequests{
Deposits: DepositRequestsFromConsensus(er.Deposits),
Withdrawals: WithdrawalRequestsFromConsensus(er.Withdrawals),
Consolidations: ConsolidationRequestsFromConsensus(er.Consolidations),
}
}
func (e *ExecutionRequests) ToConsensus() (*enginev1.ExecutionRequests, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionRequests")
}
var err error
if err = slice.VerifyMaxLength(e.Deposits, params.BeaconConfig().MaxDepositRequestsPerPayload); err != nil {
return nil, err
}
depositRequests := make([]*enginev1.DepositRequest, len(e.Deposits))
for i, d := range e.Deposits {
depositRequests[i], err = d.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionRequests.Deposits[%d]", i))
}
}
if err = slice.VerifyMaxLength(e.Withdrawals, params.BeaconConfig().MaxWithdrawalRequestsPerPayload); err != nil {
return nil, err
}
withdrawalRequests := make([]*enginev1.WithdrawalRequest, len(e.Withdrawals))
for i, w := range e.Withdrawals {
withdrawalRequests[i], err = w.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionRequests.Withdrawals[%d]", i))
}
}
if err = slice.VerifyMaxLength(e.Consolidations, params.BeaconConfig().MaxConsolidationsRequestsPerPayload); err != nil {
return nil, err
}
consolidationRequests := make([]*enginev1.ConsolidationRequest, len(e.Consolidations))
for i, c := range e.Consolidations {
consolidationRequests[i], err = c.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionRequests.Consolidations[%d]", i))
}
}
return &enginev1.ExecutionRequests{
Deposits: depositRequests,
Withdrawals: withdrawalRequests,
Consolidations: consolidationRequests,
}, nil
}
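// A minimal sketch of converting an ExecutionRequests JSON struct, assuming it
// sits in this package; the values are illustrative. Per-list length limits
// come from the beacon config, and any per-element failure surfaces as a
// DecodeError naming the offending index:
func exampleExecutionRequests() (*enginev1.ExecutionRequests, error) {
	reqs := &ExecutionRequests{
		Deposits: []*DepositRequest{{
			Pubkey:                "0x" + strings.Repeat("aa", 48),
			WithdrawalCredentials: "0x" + strings.Repeat("bb", 32),
			Amount:                "32000000000",
			Signature:             "0x" + strings.Repeat("cc", 96),
			Index:                 "0",
		}},
		Withdrawals:    []*WithdrawalRequest{},
		Consolidations: []*ConsolidationRequest{},
	}
	return reqs.ToConsensus()
}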
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------
var (
ExecutionPayloadFuluFromConsensus = ExecutionPayloadDenebFromConsensus
ExecutionPayloadHeaderFuluFromConsensus = ExecutionPayloadHeaderDenebFromConsensus
BeaconBlockFuluFromConsensus = BeaconBlockElectraFromConsensus
)

View File

@@ -1,563 +0,0 @@
package structs
import (
"fmt"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func fillByteSlice(sliceLength int, value byte) []byte {
bytes := make([]byte, sliceLength)
for index := range bytes {
bytes[index] = value
}
return bytes
}
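// For reference, the standard library collapses this helper into a single call
// (bytes.Repeat from the bytes package):
//
//	func fillByteSlice(n int, v byte) []byte { return bytes.Repeat([]byte{v}, n) }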
// TestExecutionPayloadFromConsensus_HappyPath checks the
// ExecutionPayloadFromConsensus function under normal conditions.
func TestExecutionPayloadFromConsensus_HappyPath(t *testing.T) {
consensusPayload := &enginev1.ExecutionPayload{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 12345,
GasLimit: 15000000,
GasUsed: 8000000,
Timestamp: 1680000000,
ExtraData: fillByteSlice(8, 0x11),
BaseFeePerGas: fillByteSlice(32, 0x01),
BlockHash: fillByteSlice(common.HashLength, 0x22),
Transactions: [][]byte{
fillByteSlice(10, 0x33),
fillByteSlice(10, 0x44),
},
}
result, err := ExecutionPayloadFromConsensus(consensusPayload)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, hexutil.Encode(consensusPayload.ParentHash), result.ParentHash)
require.Equal(t, hexutil.Encode(consensusPayload.FeeRecipient), result.FeeRecipient)
require.Equal(t, hexutil.Encode(consensusPayload.StateRoot), result.StateRoot)
require.Equal(t, hexutil.Encode(consensusPayload.ReceiptsRoot), result.ReceiptsRoot)
require.Equal(t, fmt.Sprintf("%d", consensusPayload.BlockNumber), result.BlockNumber)
}
// TestExecutionPayload_ToConsensus_HappyPath checks the
// (*ExecutionPayload).ToConsensus function under normal conditions.
func TestExecutionPayload_ToConsensus_HappyPath(t *testing.T) {
payload := &ExecutionPayload{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "12345",
GasLimit: "15000000",
GasUsed: "8000000",
Timestamp: "1680000000",
ExtraData: "0x11111111",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x22)),
Transactions: []string{
hexutil.Encode(fillByteSlice(10, 0x33)),
hexutil.Encode(fillByteSlice(10, 0x44)),
},
}
result, err := payload.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, result.ParentHash, fillByteSlice(common.HashLength, 0xaa))
require.DeepEqual(t, result.FeeRecipient, fillByteSlice(20, 0xbb))
require.DeepEqual(t, result.StateRoot, fillByteSlice(32, 0xcc))
}
// TestExecutionPayloadHeaderFromConsensus_HappyPath checks the
// ExecutionPayloadHeaderFromConsensus function under normal conditions.
func TestExecutionPayloadHeaderFromConsensus_HappyPath(t *testing.T) {
consensusHeader := &enginev1.ExecutionPayloadHeader{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 9999,
GasLimit: 5000000,
GasUsed: 2500000,
Timestamp: 1111111111,
ExtraData: fillByteSlice(4, 0x12),
BaseFeePerGas: fillByteSlice(32, 0x34),
BlockHash: fillByteSlice(common.HashLength, 0x56),
TransactionsRoot: fillByteSlice(32, 0x78),
}
result, err := ExecutionPayloadHeaderFromConsensus(consensusHeader)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, hexutil.Encode(consensusHeader.ParentHash), result.ParentHash)
require.Equal(t, fmt.Sprintf("%d", consensusHeader.BlockNumber), result.BlockNumber)
}
// TestExecutionPayloadHeader_ToConsensus_HappyPath checks the
// (*ExecutionPayloadHeader).ToConsensus function under normal conditions.
func TestExecutionPayloadHeader_ToConsensus_HappyPath(t *testing.T) {
header := &ExecutionPayloadHeader{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "9999",
GasLimit: "5000000",
GasUsed: "2500000",
Timestamp: "1111111111",
ExtraData: "0x1234abcd",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x56)),
TransactionsRoot: hexutil.Encode(fillByteSlice(32, 0x78)),
}
result, err := header.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), header.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), header.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(result.StateRoot), header.StateRoot)
}
// TestExecutionPayloadCapellaFromConsensus_HappyPath checks the
// ExecutionPayloadCapellaFromConsensus function under normal conditions.
func TestExecutionPayloadCapellaFromConsensus_HappyPath(t *testing.T) {
capellaPayload := &enginev1.ExecutionPayloadCapella{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 123,
GasLimit: 9876543,
GasUsed: 1234567,
Timestamp: 5555555,
ExtraData: fillByteSlice(6, 0x11),
BaseFeePerGas: fillByteSlice(32, 0x22),
BlockHash: fillByteSlice(common.HashLength, 0x33),
Transactions: [][]byte{
fillByteSlice(5, 0x44),
},
Withdrawals: []*enginev1.Withdrawal{
{
Index: 1,
ValidatorIndex: 2,
Address: fillByteSlice(20, 0xaa),
Amount: 100,
},
},
}
result, err := ExecutionPayloadCapellaFromConsensus(capellaPayload)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, hexutil.Encode(capellaPayload.ParentHash), result.ParentHash)
require.Equal(t, len(capellaPayload.Transactions), len(result.Transactions))
require.Equal(t, len(capellaPayload.Withdrawals), len(result.Withdrawals))
}
// TestExecutionPayloadCapella_ToConsensus_HappyPath checks the
// (*ExecutionPayloadCapella).ToConsensus function under normal conditions.
func TestExecutionPayloadCapella_ToConsensus_HappyPath(t *testing.T) {
capella := &ExecutionPayloadCapella{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "123",
GasLimit: "9876543",
GasUsed: "1234567",
Timestamp: "5555555",
ExtraData: hexutil.Encode(fillByteSlice(6, 0x11)),
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x33)),
Transactions: []string{
hexutil.Encode(fillByteSlice(5, 0x44)),
},
Withdrawals: []*Withdrawal{
{
WithdrawalIndex: "1",
ValidatorIndex: "2",
ExecutionAddress: hexutil.Encode(fillByteSlice(20, 0xaa)),
Amount: "100",
},
},
}
result, err := capella.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), capella.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), capella.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(result.StateRoot), capella.StateRoot)
}
// TestExecutionPayloadDenebFromConsensus_HappyPath checks the
// ExecutionPayloadDenebFromConsensus function under normal conditions.
func TestExecutionPayloadDenebFromConsensus_HappyPath(t *testing.T) {
denebPayload := &enginev1.ExecutionPayloadDeneb{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 999,
GasLimit: 2222222,
GasUsed: 1111111,
Timestamp: 666666,
ExtraData: fillByteSlice(6, 0x11),
BaseFeePerGas: fillByteSlice(32, 0x22),
BlockHash: fillByteSlice(common.HashLength, 0x33),
Transactions: [][]byte{
fillByteSlice(5, 0x44),
},
Withdrawals: []*enginev1.Withdrawal{
{
Index: 1,
ValidatorIndex: 2,
Address: fillByteSlice(20, 0xaa),
Amount: 100,
},
},
BlobGasUsed: 1234,
ExcessBlobGas: 5678,
}
result, err := ExecutionPayloadDenebFromConsensus(denebPayload)
require.NoError(t, err)
require.Equal(t, hexutil.Encode(denebPayload.ParentHash), result.ParentHash)
require.Equal(t, len(denebPayload.Transactions), len(result.Transactions))
require.Equal(t, len(denebPayload.Withdrawals), len(result.Withdrawals))
require.Equal(t, "1234", result.BlobGasUsed)
require.Equal(t, fmt.Sprintf("%d", denebPayload.BlockNumber), result.BlockNumber)
}
// TestExecutionPayloadDeneb_ToConsensus_HappyPath checks the
// (*ExecutionPayloadDeneb).ToConsensus function under normal conditions.
func TestExecutionPayloadDeneb_ToConsensus_HappyPath(t *testing.T) {
deneb := &ExecutionPayloadDeneb{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "999",
GasLimit: "2222222",
GasUsed: "1111111",
Timestamp: "666666",
ExtraData: hexutil.Encode(fillByteSlice(6, 0x11)),
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x33)),
Transactions: []string{
hexutil.Encode(fillByteSlice(5, 0x44)),
},
Withdrawals: []*Withdrawal{
{
WithdrawalIndex: "1",
ValidatorIndex: "2",
ExecutionAddress: hexutil.Encode(fillByteSlice(20, 0xaa)),
Amount: "100",
},
},
BlobGasUsed: "1234",
ExcessBlobGas: "5678",
}
result, err := deneb.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), deneb.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), deneb.FeeRecipient)
require.Equal(t, result.BlockNumber, uint64(999))
}
func TestExecutionPayloadHeaderCapellaFromConsensus_HappyPath(t *testing.T) {
capellaHeader := &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 555,
GasLimit: 1111111,
GasUsed: 222222,
Timestamp: 3333333333,
ExtraData: fillByteSlice(4, 0x12),
BaseFeePerGas: fillByteSlice(32, 0x34),
BlockHash: fillByteSlice(common.HashLength, 0x56),
TransactionsRoot: fillByteSlice(32, 0x78),
WithdrawalsRoot: fillByteSlice(32, 0x99),
}
result, err := ExecutionPayloadHeaderCapellaFromConsensus(capellaHeader)
require.NoError(t, err)
require.Equal(t, hexutil.Encode(capellaHeader.ParentHash), result.ParentHash)
require.DeepEqual(t, hexutil.Encode(capellaHeader.WithdrawalsRoot), result.WithdrawalsRoot)
}
func TestExecutionPayloadHeaderCapella_ToConsensus_HappyPath(t *testing.T) {
header := &ExecutionPayloadHeaderCapella{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "555",
GasLimit: "1111111",
GasUsed: "222222",
Timestamp: "3333333333",
ExtraData: "0x1234abcd",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x56)),
TransactionsRoot: hexutil.Encode(fillByteSlice(32, 0x78)),
WithdrawalsRoot: hexutil.Encode(fillByteSlice(32, 0x99)),
}
result, err := header.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), header.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), header.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(result.StateRoot), header.StateRoot)
require.DeepEqual(t, hexutil.Encode(result.ReceiptsRoot), header.ReceiptsRoot)
require.DeepEqual(t, hexutil.Encode(result.WithdrawalsRoot), header.WithdrawalsRoot)
}
func TestExecutionPayloadHeaderDenebFromConsensus_HappyPath(t *testing.T) {
denebHeader := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 999,
GasLimit: 5000000,
GasUsed: 2500000,
Timestamp: 4444444444,
ExtraData: fillByteSlice(4, 0x12),
BaseFeePerGas: fillByteSlice(32, 0x34),
BlockHash: fillByteSlice(common.HashLength, 0x56),
TransactionsRoot: fillByteSlice(32, 0x78),
WithdrawalsRoot: fillByteSlice(32, 0x99),
BlobGasUsed: 1234,
ExcessBlobGas: 5678,
}
result, err := ExecutionPayloadHeaderDenebFromConsensus(denebHeader)
require.NoError(t, err)
require.Equal(t, hexutil.Encode(denebHeader.ParentHash), result.ParentHash)
require.DeepEqual(t, hexutil.Encode(denebHeader.FeeRecipient), result.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(denebHeader.StateRoot), result.StateRoot)
require.DeepEqual(t, fmt.Sprintf("%d", denebHeader.BlobGasUsed), result.BlobGasUsed)
}
func TestExecutionPayloadHeaderDeneb_ToConsensus_HappyPath(t *testing.T) {
header := &ExecutionPayloadHeaderDeneb{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "999",
GasLimit: "5000000",
GasUsed: "2500000",
Timestamp: "4444444444",
ExtraData: "0x1234abcd",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x56)),
TransactionsRoot: hexutil.Encode(fillByteSlice(32, 0x78)),
WithdrawalsRoot: hexutil.Encode(fillByteSlice(32, 0x99)),
BlobGasUsed: "1234",
ExcessBlobGas: "5678",
}
result, err := header.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), header.ParentHash)
require.DeepEqual(t, result.BlobGasUsed, uint64(1234))
require.DeepEqual(t, result.ExcessBlobGas, uint64(5678))
require.DeepEqual(t, result.BlockNumber, uint64(999))
}
func TestWithdrawalRequestsFromConsensus_HappyPath(t *testing.T) {
consensusRequests := []*enginev1.WithdrawalRequest{
{
SourceAddress: fillByteSlice(20, 0xbb),
ValidatorPubkey: fillByteSlice(48, 0xbb),
Amount: 12345,
},
{
SourceAddress: fillByteSlice(20, 0xcc),
ValidatorPubkey: fillByteSlice(48, 0xcc),
Amount: 54321,
},
}
result := WithdrawalRequestsFromConsensus(consensusRequests)
require.DeepEqual(t, len(result), len(consensusRequests))
require.DeepEqual(t, result[0].Amount, fmt.Sprintf("%d", consensusRequests[0].Amount))
}
func TestWithdrawalRequestFromConsensus_HappyPath(t *testing.T) {
req := &enginev1.WithdrawalRequest{
SourceAddress: fillByteSlice(20, 0xbb),
ValidatorPubkey: fillByteSlice(48, 0xbb),
Amount: 42,
}
result := WithdrawalRequestFromConsensus(req)
require.NotNil(t, result)
require.DeepEqual(t, result.SourceAddress, hexutil.Encode(fillByteSlice(20, 0xbb)))
}
func TestWithdrawalRequest_ToConsensus_HappyPath(t *testing.T) {
withdrawalReq := &WithdrawalRequest{
SourceAddress: hexutil.Encode(fillByteSlice(20, 111)),
ValidatorPubkey: hexutil.Encode(fillByteSlice(48, 123)),
Amount: "12345",
}
result, err := withdrawalReq.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, result.Amount, uint64(12345))
}
func TestConsolidationRequestsFromConsensus_HappyPath(t *testing.T) {
consensusRequests := []*enginev1.ConsolidationRequest{
{
SourceAddress: fillByteSlice(20, 111),
SourcePubkey: fillByteSlice(48, 112),
TargetPubkey: fillByteSlice(48, 113),
},
}
result := ConsolidationRequestsFromConsensus(consensusRequests)
require.DeepEqual(t, len(result), len(consensusRequests))
require.DeepEqual(t, result[0].SourceAddress, "0x6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f")
}
func TestDepositRequestsFromConsensus_HappyPath(t *testing.T) {
ds := []*enginev1.DepositRequest{
{
Pubkey: fillByteSlice(48, 0xbb),
WithdrawalCredentials: fillByteSlice(32, 0xdd),
Amount: 98765,
Signature: fillByteSlice(96, 0xff),
Index: 111,
},
}
result := DepositRequestsFromConsensus(ds)
require.DeepEqual(t, len(result), len(ds))
require.DeepEqual(t, result[0].Amount, "98765")
}
func TestDepositRequest_ToConsensus_HappyPath(t *testing.T) {
req := &DepositRequest{
Pubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
WithdrawalCredentials: hexutil.Encode(fillByteSlice(32, 0xaa)),
Amount: "123",
Signature: hexutil.Encode(fillByteSlice(96, 0xdd)),
Index: "456",
}
result, err := req.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, result.Amount, uint64(123))
require.DeepEqual(t, result.Signature, fillByteSlice(96, 0xdd))
}
func TestExecutionRequestsFromConsensus_HappyPath(t *testing.T) {
er := &enginev1.ExecutionRequests{
Deposits: []*enginev1.DepositRequest{
{
Pubkey: fillByteSlice(48, 0xba),
WithdrawalCredentials: fillByteSlice(32, 0xaa),
Amount: 33,
Signature: fillByteSlice(96, 0xff),
Index: 44,
},
},
Withdrawals: []*enginev1.WithdrawalRequest{
{
SourceAddress: fillByteSlice(20, 0xaa),
ValidatorPubkey: fillByteSlice(48, 0xba),
Amount: 555,
},
},
Consolidations: []*enginev1.ConsolidationRequest{
{
SourceAddress: fillByteSlice(20, 0xdd),
SourcePubkey: fillByteSlice(48, 0xdd),
TargetPubkey: fillByteSlice(48, 0xcc),
},
},
}
result := ExecutionRequestsFromConsensus(er)
require.NotNil(t, result)
require.Equal(t, 1, len(result.Deposits))
require.Equal(t, "33", result.Deposits[0].Amount)
require.Equal(t, 1, len(result.Withdrawals))
require.Equal(t, "555", result.Withdrawals[0].Amount)
require.Equal(t, 1, len(result.Consolidations))
require.Equal(t, "0xcccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc", result.Consolidations[0].TargetPubkey)
}
func TestExecutionRequests_ToConsensus_HappyPath(t *testing.T) {
execReq := &ExecutionRequests{
Deposits: []*DepositRequest{
{
Pubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
WithdrawalCredentials: hexutil.Encode(fillByteSlice(32, 0xaa)),
Amount: "33",
Signature: hexutil.Encode(fillByteSlice(96, 0xff)),
Index: "44",
},
},
Withdrawals: []*WithdrawalRequest{
{
SourceAddress: hexutil.Encode(fillByteSlice(20, 0xdd)),
ValidatorPubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
Amount: "555",
},
},
Consolidations: []*ConsolidationRequest{
{
SourceAddress: hexutil.Encode(fillByteSlice(20, 0xcc)),
SourcePubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
TargetPubkey: hexutil.Encode(fillByteSlice(48, 0xcc)),
},
},
}
result, err := execReq.ToConsensus()
require.NoError(t, err)
require.Equal(t, 1, len(result.Deposits))
require.Equal(t, uint64(33), result.Deposits[0].Amount)
require.Equal(t, 1, len(result.Withdrawals))
require.Equal(t, uint64(555), result.Withdrawals[0].Amount)
require.Equal(t, 1, len(result.Consolidations))
require.DeepEqual(t, fillByteSlice(48, 0xcc), result.Consolidations[0].TargetPubkey)
}

View File

@@ -24,96 +24,3 @@ func TestDepositSnapshotFromConsensus(t *testing.T) {
require.Equal(t, "0x1234", res.ExecutionBlockHash)
require.Equal(t, "67890", res.ExecutionBlockHeight)
}
func TestSignedBLSToExecutionChange_ToConsensus(t *testing.T) {
s := &SignedBLSToExecutionChange{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedValidatorRegistration_ToConsensus(t *testing.T) {
s := &SignedValidatorRegistration{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedContributionAndProof_ToConsensus(t *testing.T) {
s := &SignedContributionAndProof{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestContributionAndProof_ToConsensus(t *testing.T) {
c := &ContributionAndProof{
Contribution: nil,
AggregatorIndex: "invalid",
SelectionProof: "",
}
_, err := c.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedAggregateAttestationAndProof_ToConsensus(t *testing.T) {
s := &SignedAggregateAttestationAndProof{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAggregateAttestationAndProof_ToConsensus(t *testing.T) {
a := &AggregateAttestationAndProof{
AggregatorIndex: "1",
Aggregate: nil,
SelectionProof: "",
}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAttestation_ToConsensus(t *testing.T) {
a := &Attestation{
AggregationBits: "0x10",
Data: nil,
Signature: "",
}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSingleAttestation_ToConsensus(t *testing.T) {
s := &SingleAttestation{
CommitteeIndex: "1",
AttesterIndex: "1",
Data: nil,
Signature: "",
}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestSignedVoluntaryExit_ToConsensus(t *testing.T) {
s := &SignedVoluntaryExit{Message: nil, Signature: ""}
_, err := s.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestProposerSlashing_ToConsensus(t *testing.T) {
p := &ProposerSlashing{SignedHeader1: nil, SignedHeader2: nil}
_, err := p.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAttesterSlashing_ToConsensus(t *testing.T) {
a := &AttesterSlashing{Attestation1: nil, Attestation2: nil}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestIndexedAttestation_ToConsensus(t *testing.T) {
a := &IndexedAttestation{
AttestingIndices: []string{"1"},
Data: nil,
Signature: "invalid",
}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}

View File

@@ -250,17 +250,3 @@ type ChainHead struct {
PreviousJustifiedBlockRoot string `json:"previous_justified_block_root"`
OptimisticStatus bool `json:"optimistic_status"`
}
type GetPendingDepositsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*PendingDeposit `json:"data"`
}
type GetPendingPartialWithdrawalsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*PendingPartialWithdrawal `json:"data"`
}

View File

@@ -244,6 +244,26 @@ type Withdrawal struct {
Amount string `json:"amount"`
}
type DepositRequest struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Index string `json:"index"`
}
type WithdrawalRequest struct {
SourceAddress string `json:"source_address"`
ValidatorPubkey string `json:"validator_pubkey"`
Amount string `json:"amount"`
}
type ConsolidationRequest struct {
SourceAddress string `json:"source_address"`
SourcePubkey string `json:"source_pubkey"`
TargetPubkey string `json:"target_pubkey"`
}
type PendingDeposit struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`

View File

@@ -125,7 +125,7 @@ func getChan(key string) chan byte {
// Return a new slice with unique elements.
func unique(arr []string) []string {
if len(arr) <= 1 {
if arr == nil || len(arr) <= 1 {
return arr
}
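// Note: len of a nil slice is 0 in Go, so the added nil guard does not change
// behavior; both forms return the (possibly nil) input unchanged:
//
//	var arr []string  // nil
//	_ = len(arr) <= 1 // true, no panic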

View File

@@ -43,7 +43,6 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
@@ -69,6 +68,7 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync/rlnc:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -3,10 +3,13 @@ package blockchain
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// CachedHeadRoot returns the corresponding value from Forkchoice
@@ -100,3 +103,26 @@ func (s *Service) ParentRoot(root [32]byte) ([32]byte, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ParentRoot(root)
}
// hashForGenesisBlock returns the execution payload block hash of the genesis
// block, or nil when the genesis state pre-dates Bellatrix. It returns
// errNotGenesisRoot if the given root is not the genesis block root.
func (s *Service) hashForGenesisBlock(ctx context.Context, root [32]byte) ([]byte, error) {
genRoot, err := s.cfg.BeaconDB.GenesisBlockRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get genesis block root")
}
if root != genRoot {
return nil, errNotGenesisRoot
}
st, err := s.cfg.BeaconDB.GenesisState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get genesis state")
}
if st.Version() < version.Bellatrix {
return nil, nil
}
header, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, errors.Wrap(err, "could not get latest execution payload header")
}
return bytesutil.SafeCopyBytes(header.BlockHash()), nil
}

View File

@@ -612,3 +612,20 @@ func TestService_IsFinalized(t *testing.T) {
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}
func Test_hashForGenesisRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := setupBeaconChain(t, beaconDB)
st, _ := util.DeterministicGenesisStateElectra(t, 10)
require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))
root, err := beaconDB.GenesisBlockRoot(ctx)
require.NoError(t, err)
genRoot, err := c.hashForGenesisBlock(ctx, [32]byte{'a'})
require.ErrorIs(t, err, errNotGenesisRoot)
require.IsNil(t, genRoot)
genRoot, err = c.hashForGenesisBlock(ctx, root)
require.NoError(t, err)
require.Equal(t, [32]byte{}, [32]byte(genRoot))
}

View File

@@ -30,6 +30,8 @@ var (
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
// errNotGenesisRoot is returned when the root is not the genesis block root.
errNotGenesisRoot = errors.New("root is not the genesis block root")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")

View File

@@ -69,6 +69,18 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
if len(fcs.HeadBlockHash) != 32 || [32]byte(fcs.HeadBlockHash) == [32]byte{} {
// check if we are sending FCU at genesis
hash, err := s.hashForGenesisBlock(ctx, arg.headRoot)
if errors.Is(err, errNotGenesisRoot) {
log.Error("Sending nil head block hash to execution engine")
return nil, nil
}
if err != nil {
return nil, errors.Wrap(err, "could not get head block hash")
}
fcs.HeadBlockHash = hash
}
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}

View File

@@ -84,7 +84,7 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
service.cfg.PayloadIDCache.Set(1, [32]byte{}, [8]byte{})
got, err := service.notifyForkchoiceUpdate(ctx, arg)
require.NoError(t, err)
require.DeepEqual(t, got, pid) // We still get a payload ID even though the state is bad. This means it returns until the end.
require.IsNil(t, got)
}
func Test_NotifyForkchoiceUpdate(t *testing.T) {
@@ -113,6 +113,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
badHash := [32]byte{'h'}
tests := []struct {
name string
@@ -210,7 +211,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
ExecutionPayload: &v1.ExecutionPayload{BlockHash: badHash[:]},
},
}})
require.NoError(t, err)

View File

@@ -69,22 +69,6 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
if b.Version() >= version.Electra {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
} else {
if len(eReqs.Deposits) > 0 {
log = log.WithField("depositRequestCount", len(eReqs.Deposits))
}
if len(eReqs.Consolidations) > 0 {
log = log.WithField("consolidationRequestCount", len(eReqs.Consolidations))
}
if len(eReqs.Withdrawals) > 0 {
log = log.WithField("withdrawalRequestCount", len(eReqs.Withdrawals))
}
}
}
log.Info("Finished applying state transition")
return nil
}

View File

@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/rlnc"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -69,6 +70,14 @@ func WithDepositCache(c cache.DepositCache) Option {
}
}
// WithChunkCommitter sets the RLNC committer used to compute chunk commitments.
func WithChunkCommitter(c *rlnc.Committer) Option {
return func(s *Service) error {
s.cfg.ChunkCommitter = c
return nil
}
}
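// A sketch of wiring the committer in through the functional-options pattern
// used by this service; the rlnc constructor below is hypothetical, and only
// the options themselves come from this file:
//
//	committer := rlnc.NewCommitter(trustedSetup) // hypothetical constructor
//	svc, err := blockchain.NewService(ctx,
//		blockchain.WithChunkCommitter(committer),
//		blockchain.WithDepositCache(depositCache),
//	)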
// WithPayloadIDCache for payload ID cache.
func WithPayloadIDCache(c *cache.PayloadIDCache) Option {
return func(s *Service) error {

View File

@@ -512,11 +512,17 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
if len(expected) > maxBlobsPerBlock {
return nil, errMaxBlobsExceeded
}
indices := bs.Summary(root)
indices, err := bs.Indices(root, slot)
if err != nil {
return nil, err
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
missing[uint64(i)] = struct{}{}
ui := uint64(i)
if len(expected[i]) > 0 {
if !indices[i] {
missing[ui] = struct{}{}
}
}
}
return missing, nil

View File

@@ -7,7 +7,6 @@ import (
"strings"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/ethereum/go-ethereum/common"
@@ -553,8 +552,7 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
// inserts finalized deposits into our finalized deposit trie, needs to be
// called in the background
// Post-Electra: prunes all proofs and pending deposits in the cache
func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32]byte) {
func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) {
ctx, span := trace.StartSpan(ctx, "blockChain.insertFinalizedDeposits")
defer span.End()
startTime := time.Now()
@@ -565,16 +563,6 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
log.WithError(err).Error("could not fetch finalized state")
return
}
// Check if we should prune all pending deposits.
// In post-Electra(after the legacy deposit mechanism is deprecated),
// we can prune all pending deposits in the deposit cache.
// See: https://eips.ethereum.org/EIPS/eip-6110#eth1data-poll-deprecation
if helpers.DepositRequestsStarted(finalizedState) {
s.pruneAllPendingDepositsAndProofs(ctx)
return
}
// We update the cache up to the last deposit index in the finalized block's state.
// We can be confident that these deposits will be included in some block
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
@@ -603,12 +591,6 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
log.WithField("duration", time.Since(startTime).String()).Debugf("Finalized deposit insertion completed at index %d", finalizedEth1DepIdx)
}
// pruneAllPendingDepositsAndProofs prunes all proofs and pending deposits in the cache.
func (s *Service) pruneAllPendingDepositsAndProofs(ctx context.Context) {
s.cfg.DepositCache.PruneAllPendingDeposits(ctx)
s.cfg.DepositCache.PruneAllProofs(ctx)
}
// This ensures that the input root defaults to using genesis root instead of zero hashes. This is needed for handling
// fork choice justification routine.
func (s *Service) ensureRootNotZeros(root [32]byte) [32]byte {

View File

@@ -651,24 +651,6 @@ func TestOnBlock_NilBlock(t *testing.T) {
require.Equal(t, true, IsInvalidBlock(err))
}
func TestOnBlock_InvalidSignature(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
blk, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
blk.Signature = []byte{'a'} // Mutate the signature.
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
_, err = service.validateStateTransition(ctx, preState, wsb)
require.Equal(t, true, IsInvalidBlock(err))
}
func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
@@ -723,7 +705,7 @@ func TestInsertFinalizedDeposits(t *testing.T) {
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
}
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k'})
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits, err := depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -759,7 +741,7 @@ func TestInsertFinalizedDeposits_PrunePendingDeposits(t *testing.T) {
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root))
}
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k'})
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits, err := depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -799,7 +781,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
}
// Insert 3 deposits beforehand.
require.NoError(t, depositCache.InsertFinalizedDeposits(ctx, 2, [32]byte{}, 0))
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k'})
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'})
fDeposits, err := depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 5, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -810,7 +792,7 @@ func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
}
// Insert a new finalized state with a higher deposit count.
service.insertFinalizedDepositsAndPrune(ctx, [32]byte{'m', 'o', 'c', 'k', '2'})
service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'})
fDeposits, err = depositCache.FinalizedDeposits(ctx)
require.NoError(t, err)
assert.Equal(t, 12, int(fDeposits.MerkleTrieIndex()), "Finalized deposits not inserted correctly")
@@ -1076,48 +1058,6 @@ func TestService_insertSlashingsToForkChoiceStore(t *testing.T) {
service.InsertSlashingsToForkChoiceStore(ctx, wb.Block().Body().AttesterSlashings())
}
func TestService_insertSlashingsToForkChoiceStoreElectra(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
beaconState, privKeys := util.DeterministicGenesisStateElectra(t, 100)
att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashingElectra{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
b := util.NewBeaconBlockElectra()
b.Block.Body.AttesterSlashings = slashings
wb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
service.InsertSlashingsToForkChoiceStore(ctx, wb.Block().Body().AttesterSlashings())
}
func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
@@ -2297,7 +2237,7 @@ func TestMissingIndices(t *testing.T) {
for _, c := range cases {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
require.NoError(t, bm.CreateFakeIndices(c.root, c.present...))
missing, err := missingIndices(bs, c.root, c.expected, 0)
if c.err != nil {
require.ErrorIs(t, err, c.err)

View File

@@ -7,7 +7,6 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
@@ -70,6 +69,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
return nil
}
if s.BlockBeingSynced(blockRoot) {
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already syncing block")
return nil
}
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
defer s.blockBeingSynced.unset(blockRoot)
@@ -279,10 +282,9 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
go func() {
s.sendNewFinalizedEvent(ctx, finalizedState)
}()
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDepositsAndPrune(depCtx, finalized.Root)
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
}
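
For context, the deposit insertion above is deliberately bounded: it runs on a fresh context derived from Background (so it survives the caller returning) with a hard deadline, and cancel releases the timer as soon as the work finishes. A minimal, self-contained sketch of that pattern, with an assumed depositDeadline value and a stand-in prune function:

package main

import (
	"context"
	"fmt"
	"time"
)

const depositDeadline = 20 * time.Second // assumed value, for illustration only

func prune(ctx context.Context) {
	select {
	case <-ctx.Done():
		fmt.Println("deadline hit:", ctx.Err())
	case <-time.After(10 * time.Millisecond):
		fmt.Println("finished before the deadline")
	}
}

func main() {
	depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
	go func() {
		prune(depCtx)
		cancel() // release the deadline timer once the task completes
	}()
	time.Sleep(50 * time.Millisecond) // give the goroutine time to run
}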
@@ -470,9 +472,6 @@ func (s *Service) validateStateTransition(ctx context.Context, preState state.Be
stateTransitionStartTime := time.Now()
postState, err := transition.ExecuteStateTransition(ctx, preState, signed)
if err != nil {
if ctx.Err() != nil || electra.IsExecutionRequestError(err) {
return nil, err
}
return nil, invalidBlock{error: err}
}
stateTransitionProcessingTime.Observe(float64(time.Since(stateTransitionStartTime).Milliseconds()))
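
The error handling above separates transient failures from consensus failures: a cancelled context or an execution-request error is returned as-is, while any other state-transition error is wrapped as an invalid block so callers can mark the block bad. A hedged, stand-alone sketch of that classification; invalidBlock and the sentinel error here are stand-ins, not the real Prysm types:

package main

import (
	"context"
	"errors"
	"fmt"
)

type invalidBlock struct{ error } // stand-in for the package's invalidBlock wrapper

var errExecutionRequest = errors.New("execution request failed") // assumed sentinel

func classify(ctx context.Context, err error) error {
	if err == nil {
		return nil
	}
	// Transient: the caller timed out or the execution layer failed;
	// the block itself has not been proven invalid.
	if ctx.Err() != nil || errors.Is(err, errExecutionRequest) {
		return err
	}
	// Everything else means the state transition rejected the block.
	return invalidBlock{error: err}
}

func main() {
	err := classify(context.Background(), errors.New("bad state root"))
	var ib invalidBlock
	fmt.Println(errors.As(err, &ib)) // true: wrapped as an invalid block
}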

View File

@@ -455,81 +455,41 @@ func Test_executePostFinalizationTasks(t *testing.T) {
Root: headRoot[:],
}))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
t.Run("pre deposit request", func(t *testing.T) {
require.NoError(t, headState.SetEth1DepositIndex(1))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
time.Sleep(1 * time.Second) // sleep for a second because the event is emitted in a separate goroutine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
time.Sleep(1 * time.Second) // sleep for a second because the event is emitted in a separate goroutine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
// check deposit
require.LogsContain(t, logHook, "Finalized deposit insertion completed at index")
})
t.Run("deposit requests started", func(t *testing.T) {
require.NoError(t, headState.SetEth1DepositIndex(1))
require.NoError(t, headState.SetDepositRequestsStartIndex(1))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because the event is emitted in a separate goroutine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
})
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
// check deposit
require.LogsContain(t, logHook, "Finalized deposit insertion completed at index")
}

View File

@@ -32,6 +32,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/rlnc"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -92,6 +93,7 @@ type config struct {
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
ChunkCommitter *rlnc.Committer
}
// Checker is an interface used to determine if a node is in initial sync

View File

@@ -50,6 +50,11 @@ func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
func (mb *mockBroadcaster) BroadcastBlockChunks(_ context.Context, _ []*ethpb.BeaconBlockChunk) error {
mb.broadcastCalled = true
return nil
}
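
A self-contained sketch of how this mock records a broadcast, with a stand-in chunk type since *ethpb.BeaconBlockChunk is defined elsewhere:

package main

import (
	"context"
	"fmt"
)

type chunk struct{} // stand-in for *ethpb.BeaconBlockChunk

type mockBroadcaster struct{ broadcastCalled bool }

func (mb *mockBroadcaster) BroadcastBlockChunks(_ context.Context, _ []*chunk) error {
	mb.broadcastCalled = true // record that the broadcast path was exercised
	return nil
}

func main() {
	mb := &mockBroadcaster{}
	_ = mb.BroadcastBlockChunks(context.Background(), nil)
	fmt.Println(mb.broadcastCalled) // true
}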
func (mb *mockBroadcaster) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
mb.broadcastCalled = true
return nil

View File

@@ -84,7 +84,6 @@ go_test(
"sync_committee_head_state_test.go",
"sync_committee_test.go",
"sync_subnet_ids_test.go",
"tracked_validators_test.go",
],
embed = [":go_default_library"],
deps = [

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"deposit_fetcher.go",
"deposit_inserter.go",
"deposit_pruner.go",
"deposit_tree.go",
"deposit_tree_snapshot.go",
"merkle_tree.go",
@@ -36,7 +35,6 @@ go_test(
srcs = [
"deposit_cache_test.go",
"deposit_fetcher_test.go",
"deposit_pruner_test.go",
"deposit_tree_snapshot_test.go",
"merkle_tree_test.go",
"spec_test.go",

View File

@@ -903,6 +903,189 @@ func TestMin(t *testing.T) {
}
func TestPruneProofs_Ok(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 1))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.NotNil(t, dc.deposits[2].Deposit.Proof)
assert.NotNil(t, dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}}, index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 2))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 99))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 4))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestDepositMap_WorksCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)

View File

@@ -178,6 +178,52 @@ func (c *Cache) NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int
return deposits
}
// PruneProofs removes proofs from all deposits whose index is equal to or less than untilDepositIndex.
func (c *Cache) PruneProofs(ctx context.Context, untilDepositIndex int64) error {
_, span := trace.StartSpan(ctx, "Cache.PruneProofs")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
if untilDepositIndex >= int64(len(c.deposits)) {
untilDepositIndex = int64(len(c.deposits) - 1)
}
for i := untilDepositIndex; i >= 0; i-- {
// Finding a nil proof means that all proofs up to this deposit have already been pruned.
if c.deposits[i].Deposit.Proof == nil {
break
}
c.deposits[i].Deposit.Proof = nil
}
return nil
}
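
The backward scan above leans on an invariant: proofs are only ever pruned from the low indices upward, so the first nil proof seen while walking down means everything below it is already pruned and the loop can stop early. A stand-alone sketch of the same loop over a plain slice, with assumed data:

package main

import "fmt"

func main() {
	// proofs[i] == nil marks an already-pruned deposit proof.
	proofs := [][]byte{nil, nil, {0x01}, {0x02}, {0x03}}
	until := 3 // prune proofs at indices <= 3

	if until >= len(proofs) {
		until = len(proofs) - 1 // clamp, as PruneProofs does
	}
	for i := until; i >= 0; i-- {
		if proofs[i] == nil {
			break // everything below this index is already pruned
		}
		proofs[i] = nil
	}
	fmt.Println(proofs) // [[] [] [] [] [3]]
}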
// PrunePendingDeposits removes any pending deposit whose merkle tree index is below the given index.
func (c *Cache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
_, span := trace.StartSpan(ctx, "Cache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
log.Debug("Ignoring 0 deposit removal")
return
}
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
cleanDeposits := make([]*ethpb.DepositContainer, 0, len(c.pendingDeposits))
for _, dp := range c.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
}
}
c.pendingDeposits = cleanDeposits
pendingDepositsCount.Set(float64(len(c.pendingDeposits)))
}
// InsertPendingDeposit into the database. If the deposit or block number is nil,
// then this method does nothing.
func (c *Cache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {

View File

@@ -44,3 +44,67 @@ func TestPendingDeposits_OK(t *testing.T) {
all := dc.PendingDeposits(context.Background(), nil)
assert.Equal(t, len(dc.pendingDeposits), len(all), "PendingDeposits(ctx, nil) did not return all deposits")
}
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}

View File

@@ -1,88 +0,0 @@
package depositsnapshot
import (
"context"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// PruneProofs removes proofs from all deposits whose index is equal to or less than untilDepositIndex.
func (c *Cache) PruneProofs(ctx context.Context, untilDepositIndex int64) error {
_, span := trace.StartSpan(ctx, "Cache.PruneProofs")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
if untilDepositIndex >= int64(len(c.deposits)) {
untilDepositIndex = int64(len(c.deposits) - 1)
}
for i := untilDepositIndex; i >= 0; i-- {
// Finding a nil proof means that all proofs up to this deposit have already been pruned.
if c.deposits[i].Deposit.Proof == nil {
break
}
c.deposits[i].Deposit.Proof = nil
}
return nil
}
// PruneAllProofs removes proofs from all deposits.
// Once EIP-6110 applies and the legacy deposit mechanism is deprecated,
// proofs in the deposit snapshot are no longer needed.
// See: https://eips.ethereum.org/EIPS/eip-6110#eth1data-poll-deprecation
func (c *Cache) PruneAllProofs(ctx context.Context) {
_, span := trace.StartSpan(ctx, "Cache.PruneAllProofs")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
for i := len(c.deposits) - 1; i >= 0; i-- {
if c.deposits[i].Deposit.Proof == nil {
break
}
c.deposits[i].Deposit.Proof = nil
}
}
// PrunePendingDeposits removes any pending deposit whose merkle tree index is below the given index.
func (c *Cache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
_, span := trace.StartSpan(ctx, "Cache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
log.Debug("Ignoring 0 deposit removal")
return
}
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
cleanDeposits := make([]*ethpb.DepositContainer, 0, len(c.pendingDeposits))
for _, dp := range c.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
}
}
c.pendingDeposits = cleanDeposits
pendingDepositsCount.Set(float64(len(c.pendingDeposits)))
}
// PruneAllPendingDeposits removes all pending deposits from the cache.
// Once EIP-6110 applies and the legacy deposit mechanism is deprecated,
// pending deposits in the deposit snapshot are no longer needed.
// See: https://eips.ethereum.org/EIPS/eip-6110#eth1data-poll-deprecation
func (c *Cache) PruneAllPendingDeposits(ctx context.Context) {
_, span := trace.StartSpan(ctx, "Cache.PruneAllPendingDeposits")
defer span.End()
c.depositsLock.Lock()
defer c.depositsLock.Unlock()
c.pendingDeposits = make([]*ethpb.DepositContainer, 0)
pendingDepositsCount.Set(float64(0))
}

View File

@@ -1,323 +0,0 @@
package depositsnapshot
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPrunePendingDeposits_OK(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPruneAllPendingDeposits(t *testing.T) {
dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
}
dc.PruneAllPendingDeposits(context.Background())
expected := []*ethpb.DepositContainer{}
assert.DeepEqual(t, expected, dc.pendingDeposits)
}
func TestPruneProofs_Ok(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 1))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.NotNil(t, dc.deposits[2].Deposit.Proof)
assert.NotNil(t, dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: nil, Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}}, index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 2))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 99))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 4))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}
func TestPruneAllProofs(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deposits := []struct {
blkNum uint64
deposit *ethpb.Deposit
index int64
}{
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk0"), 48)}},
index: 0,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk1"), 48)}},
index: 1,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk2"), 48)}},
index: 2,
},
{
blkNum: 0,
deposit: &ethpb.Deposit{Proof: makeDepositProof(),
Data: &ethpb.Deposit_Data{PublicKey: bytesutil.PadTo([]byte("pk3"), 48)}},
index: 3,
},
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
dc.PruneAllProofs(context.Background())
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[3].Deposit.Proof)
}

View File

@@ -12,7 +12,6 @@ import (
type DepositCache interface {
DepositFetcher
DepositInserter
DepositPruner
}
// DepositFetcher defines a struct which can retrieve deposit information from a store.
@@ -24,6 +23,8 @@ type DepositFetcher interface {
InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte)
PendingDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit
PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer
PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64)
PruneProofs(ctx context.Context, untilDepositIndex int64) error
FinalizedFetcher
}
@@ -41,14 +42,6 @@ type FinalizedFetcher interface {
NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int64, untilBlk *big.Int) []*ethpb.Deposit
}
// DepositPruner is an interface for pruning deposits and proofs.
type DepositPruner interface {
PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64)
PruneAllPendingDeposits(ctx context.Context)
PruneProofs(ctx context.Context, untilDepositIndex int64) error
PruneAllProofs(ctx context.Context)
}
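
With DepositPruner folded into DepositFetcher, pruning is now reachable through the fetcher interface alone. One way to keep such a consolidation honest is a compile-time assertion; a hedged sketch with stand-in types, since the concrete Cache lives in the depositsnapshot package:

package main

import "context"

// DepositFetcher is trimmed down to the pruning methods for illustration.
type DepositFetcher interface {
	PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64)
	PruneProofs(ctx context.Context, untilDepositIndex int64) error
}

type cache struct{} // stand-in for depositsnapshot.Cache

func (c *cache) PrunePendingDeposits(context.Context, int64) {}

func (c *cache) PruneProofs(context.Context, int64) error { return nil }

// This fails to compile if cache ever stops satisfying the interface.
var _ DepositFetcher = (*cache)(nil)

func main() {}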
// FinalizedDeposits defines a method to access a merkle tree containing deposits and their indexes.
type FinalizedDeposits interface {
Deposits() MerkleTree

View File

@@ -1,139 +1,49 @@
package cache
import (
"strconv"
"time"
"sync"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/sirupsen/logrus"
)
const (
defaultExpiration = 1 * time.Hour
cleanupInterval = 15 * time.Minute
)
type TrackedValidator struct {
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
}
type (
TrackedValidator struct {
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
}
type TrackedValidatorsCache struct {
sync.Mutex
trackedValidators map[primitives.ValidatorIndex]TrackedValidator
}
TrackedValidatorsCache struct {
trackedValidators cache.Cache
}
)
var (
// Metrics.
trackedValidatorsCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "tracked_validators_cache_miss",
Help: "The number of tracked validators requests that are not present in the cache.",
})
trackedValidatorsCacheTotal = promauto.NewCounter(prometheus.CounterOpts{
Name: "tracked_validators_cache_total",
Help: "The total number of tracked validators requests in the cache.",
})
trackedValidatorsCacheCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "tracked_validators_cache_count",
Help: "The number of tracked validators in the cache.",
})
)
// NewTrackedValidatorsCache creates a new cache for tracking validators.
func NewTrackedValidatorsCache() *TrackedValidatorsCache {
return &TrackedValidatorsCache{
trackedValidators: *cache.New(defaultExpiration, cleanupInterval),
trackedValidators: make(map[primitives.ValidatorIndex]TrackedValidator),
}
}
// Validator retrieves a tracked validator from the cache (if present).
func (t *TrackedValidatorsCache) Validator(index primitives.ValidatorIndex) (TrackedValidator, bool) {
trackedValidatorsCacheTotal.Inc()
key := toCacheKey(index)
item, ok := t.trackedValidators.Get(key)
if !ok {
trackedValidatorsCacheMiss.Inc()
return TrackedValidator{}, false
}
val, ok := item.(TrackedValidator)
if !ok {
logrus.Errorf("Failed to cast tracked validator from cache, got unexpected item type %T", item)
return TrackedValidator{}, false
}
return val, true
t.Lock()
defer t.Unlock()
val, ok := t.trackedValidators[index]
return val, ok
}
// Set adds a tracked validator to the cache.
func (t *TrackedValidatorsCache) Set(val TrackedValidator) {
key := toCacheKey(val.Index)
t.trackedValidators.Set(key, val, cache.DefaultExpiration)
t.Lock()
defer t.Unlock()
t.trackedValidators[val.Index] = val
}
// Prune removes all tracked validators from the cache.
func (t *TrackedValidatorsCache) Prune() {
t.trackedValidators.Flush()
trackedValidatorsCacheCount.Set(0)
t.Lock()
defer t.Unlock()
t.trackedValidators = make(map[primitives.ValidatorIndex]TrackedValidator)
}
// Validating returns true if there is at least one tracked validator in the cache.
func (t *TrackedValidatorsCache) Validating() bool {
count := t.trackedValidators.ItemCount()
trackedValidatorsCacheCount.Set(float64(count))
return count > 0
}
// ItemCount returns the number of tracked validators in the cache.
func (t *TrackedValidatorsCache) ItemCount() int {
count := t.trackedValidators.ItemCount()
trackedValidatorsCacheCount.Set(float64(count))
return count
}
// Indices returns a map of validator indices that are being tracked.
func (t *TrackedValidatorsCache) Indices() map[primitives.ValidatorIndex]bool {
items := t.trackedValidators.Items()
count := len(items)
trackedValidatorsCacheCount.Set(float64(count))
indices := make(map[primitives.ValidatorIndex]bool, count)
for cacheKey := range items {
index, err := fromCacheKey(cacheKey)
if err != nil {
logrus.WithError(err).Error("Failed to get validator index from cache key")
continue
}
indices[index] = true
}
return indices
}
// toCacheKey creates a cache key from the validator index.
func toCacheKey(validatorIndex primitives.ValidatorIndex) string {
return strconv.FormatUint(uint64(validatorIndex), 10)
}
// fromCacheKey gets the validator index from the cache key.
func fromCacheKey(key string) (primitives.ValidatorIndex, error) {
validatorIndex, err := strconv.ParseUint(key, 10, 64)
if err != nil {
return 0, errors.Wrapf(err, "parse Uint: %s", key)
}
return primitives.ValidatorIndex(validatorIndex), nil
t.Lock()
defer t.Unlock()
return len(t.trackedValidators) > 0
}
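
After this rewrite the cache is a plain mutex-guarded map, so the surviving surface is just Set, Validator, Prune, and Validating. A self-contained sketch of the same shape, with a stand-in index type in place of primitives.ValidatorIndex:

package main

import (
	"fmt"
	"sync"
)

type ValidatorIndex uint64 // stand-in for primitives.ValidatorIndex

type TrackedValidator struct {
	Active bool
	Index  ValidatorIndex
}

type TrackedValidatorsCache struct {
	sync.Mutex
	trackedValidators map[ValidatorIndex]TrackedValidator
}

func (t *TrackedValidatorsCache) Set(val TrackedValidator) {
	t.Lock()
	defer t.Unlock()
	t.trackedValidators[val.Index] = val
}

func (t *TrackedValidatorsCache) Validator(i ValidatorIndex) (TrackedValidator, bool) {
	t.Lock()
	defer t.Unlock()
	val, ok := t.trackedValidators[i]
	return val, ok
}

func main() {
	c := &TrackedValidatorsCache{trackedValidators: make(map[ValidatorIndex]TrackedValidator)}
	c.Set(TrackedValidator{Active: true, Index: 42})
	val, ok := c.Validator(42)
	fmt.Println(val.Active, ok) // true true
}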

View File

@@ -1,79 +0,0 @@
package cache
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func mapEqual(a, b map[primitives.ValidatorIndex]bool) bool {
if len(a) != len(b) {
return false
}
for k, v := range a {
if b[k] != v {
return false
}
}
return true
}
func TestTrackedValidatorsCache(t *testing.T) {
vc := NewTrackedValidatorsCache()
// No validators in cache.
require.Equal(t, 0, vc.ItemCount())
require.Equal(t, false, vc.Validating())
require.Equal(t, 0, len(vc.Indices()))
_, ok := vc.Validator(41)
require.Equal(t, false, ok)
// Add some validators (one twice).
v42Expected := TrackedValidator{Active: true, FeeRecipient: [20]byte{1}, Index: 42}
v43Expected := TrackedValidator{Active: false, FeeRecipient: [20]byte{2}, Index: 43}
vc.Set(v42Expected)
vc.Set(v43Expected)
vc.Set(v42Expected)
// Check if they are in the cache.
v42Actual, ok := vc.Validator(42)
require.Equal(t, true, ok)
require.Equal(t, v42Expected, v42Actual)
v43Actual, ok := vc.Validator(43)
require.Equal(t, true, ok)
require.Equal(t, v43Expected, v43Actual)
expected := map[primitives.ValidatorIndex]bool{42: true, 43: true}
actual := vc.Indices()
require.Equal(t, true, mapEqual(expected, actual))
// Check the item count and if the cache is validating.
require.Equal(t, 2, vc.ItemCount())
require.Equal(t, true, vc.Validating())
// Check if a non-existing validator is in the cache.
_, ok = vc.Validator(41)
require.Equal(t, false, ok)
// Prune the cache and test it.
vc.Prune()
_, ok = vc.Validator(41)
require.Equal(t, false, ok)
_, ok = vc.Validator(42)
require.Equal(t, false, ok)
_, ok = vc.Validator(43)
require.Equal(t, false, ok)
require.Equal(t, 0, vc.ItemCount())
require.Equal(t, false, vc.Validating())
require.Equal(t, 0, len(vc.Indices()))
}

View File

@@ -7,14 +7,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -107,162 +105,293 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
}
func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
statePhase0, keysPhase0 := util.DeterministicGenesisState(t, 100)
stateAltair, keysAltair := util.DeterministicGenesisStateAltair(t, 100)
stateBellatrix, keysBellatrix := util.DeterministicGenesisStateBellatrix(t, 100)
stateCapella, keysCapella := util.DeterministicGenesisStateCapella(t, 100)
stateDeneb, keysDeneb := util.DeterministicGenesisStateDeneb(t, 100)
stateElectra, keysElectra := util.DeterministicGenesisStateElectra(t, 100)
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1Phase0 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
att2Phase0 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
att1Electra := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
att2Electra := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashingPhase0 := &ethpb.AttesterSlashing{
Attestation_1: att1Phase0,
Attestation_2: att2Phase0,
}
slashingElectra := &ethpb.AttesterSlashingElectra{
Attestation_1: att1Electra,
Attestation_2: att2Electra,
}
type testCase struct {
name string
st state.BeaconState
keys []bls.SecretKey
att1 ethpb.IndexedAtt
att2 ethpb.IndexedAtt
slashing ethpb.AttSlashing
slashedBalance uint64
}
testCases := []testCase{
slashings := []*ethpb.AttesterSlashing{
{
name: "phase0",
st: statePhase0,
keys: keysPhase0,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31750000000,
},
{
name: "altair",
st: stateAltair,
keys: keysAltair,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31500000000,
},
{
name: "bellatrix",
st: stateBellatrix,
keys: keysBellatrix,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31000000000,
},
{
name: "capella",
st: stateCapella,
keys: keysCapella,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31000000000,
},
{
name: "deneb",
st: stateDeneb,
keys: keysDeneb,
att1: att1Phase0,
att2: att2Phase0,
slashing: slashingPhase0,
slashedBalance: 31000000000,
},
{
name: "electra",
st: stateElectra,
keys: keysElectra,
att1: att1Electra,
att2: att2Electra,
slashing: slashingElectra,
slashedBalance: 31992187500,
Attestation_1: att1,
Attestation_2: att2,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
for _, vv := range tc.st.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
domain, err := signing.Domain(tc.st.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, tc.st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(tc.att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := tc.keys[0].Sign(signingRoot[:])
sig1 := tc.keys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
if tc.att1.Version() >= version.Electra {
tc.att1.(*ethpb.IndexedAttestationElectra).Signature = aggregateSig.Marshal()
} else {
tc.att1.(*ethpb.IndexedAttestation).Signature = aggregateSig.Marshal()
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
signingRoot, err = signing.ComputeSigningRoot(tc.att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = tc.keys[0].Sign(signingRoot[:])
sig1 = tc.keys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
if tc.att2.Version() >= version.Electra {
tc.att2.(*ethpb.IndexedAttestationElectra).Signature = aggregateSig.Marshal()
} else {
tc.att2.(*ethpb.IndexedAttestation).Signature = aggregateSig.Marshal()
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(context.Background(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != tc.st.Validators()[1].ExitEpoch {
t.Errorf(
`
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
tc.st.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, tc.slashedBalance, newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
})
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31750000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31500000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateBellatrix(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}
func TestProcessAttesterSlashings_AppliesCorrectStatusCapella(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateCapella(t, 100)
for _, vv := range beaconState.Validators() {
vv.WithdrawableEpoch = primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)
}
att1 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
domain, err := signing.Domain(beaconState.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, beaconState.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(att1.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := privKeys[0].Sign(signingRoot[:])
sig1 := privKeys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att1.Signature = aggregateSig.Marshal()
att2 := util.HydrateIndexedAttestation(&ethpb.IndexedAttestation{
AttestingIndices: []uint64{0, 1},
})
signingRoot, err = signing.ComputeSigningRoot(att2.Data, domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = privKeys[0].Sign(signingRoot[:])
sig1 = privKeys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
att2.Signature = aggregateSig.Marshal()
slashings := []*ethpb.AttesterSlashing{
{
Attestation_1: att1,
Attestation_2: att2,
},
}
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, beaconState.SetSlot(currentSlot))
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
// Given the intersection of slashable indices is [1], only validator
// at index 1 should be slashed and exited. We confirm this below.
if newRegistry[1].ExitEpoch != beaconState.Validators()[1].ExitEpoch {
t.Errorf(
`
Expected validator at index 1's exit epoch to match
%d, received %d instead
`,
beaconState.Validators()[1].ExitEpoch,
newRegistry[1].ExitEpoch,
)
}
require.Equal(t, uint64(31000000000), newState.Balances()[1])
require.Equal(t, uint64(32000000000), newState.Balances()[2])
}

View File

@@ -291,52 +291,3 @@ func TestProcessBlockHeader_OK(t *testing.T) {
}
assert.Equal(t, true, proto.Equal(nsh, expected), "Expected %v, received %v", expected, nsh)
}
func TestBlockSignatureSet_OK(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
Slashed: true,
}
}
state, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(10))
require.NoError(t, state.SetLatestBlockHeader(util.HydrateBeaconHeader(&ethpb.BeaconBlockHeader{
Slot: 9,
ProposerIndex: 0,
})))
latestBlockSignedRoot, err := state.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
block.Block.ProposerIndex = pID
block.Block.Body.RandaoReveal = bytesutil.PadTo([]byte{'A', 'B', 'C'}, 96)
block.Block.ParentRoot = latestBlockSignedRoot[:]
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
err = state.UpdateValidatorAtIndex(proposerIdx, validators[proposerIdx])
require.NoError(t, err)
set, err := blocks.BlockSignatureBatch(state, block.Block.ProposerIndex, block.Signature, block.Block.HashTreeRoot)
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Block signature set returned a set which was unable to be verified")
}

View File

@@ -120,24 +120,6 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
})
}
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.
func BlockSignatureBatch(beaconState state.ReadOnlyBeaconState,
proposerIndex primitives.ValidatorIndex,
sig []byte,
rootFunc func() ([32]byte, error)) (*bls.SignatureBatch, error) {
currentEpoch := slots.ToEpoch(beaconState.Slot())
domain, err := signing.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return nil, err
}
proposer, err := beaconState.ValidatorAtIndex(proposerIndex)
if err != nil {
return nil, err
}
proposerPubKey := proposer.PublicKey
return signing.BlockSignatureBatch(proposerPubKey, sig, domain, rootFunc)
}
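// For context (mirroring the transition change later in this diff): the
// returned batch is typically joined with the randao and attestation batches
// into a single bls.Set and verified in one pass, e.g.
//
//     set := bls.NewSet()
//     set.Join(bSet).Join(rSet).Join(aSet)
//     verified, err := set.Verify()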
// RandaoSignatureBatch retrieves the relevant randao specific signature batch object
// from a block and its corresponding state.
func RandaoSignatureBatch(

View File

@@ -0,0 +1,17 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["signature.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/chunks",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network/forks:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -0,0 +1,53 @@
package chunks
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/network/forks"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// VerifyChunkSignature verifies the proposer signature of a beacon block chunk.
func VerifyChunkSignature(beaconState state.ReadOnlyBeaconState,
proposerIndex primitives.ValidatorIndex,
sig []byte,
rootFunc func() ([32]byte, error)) error {
currentEpoch := slots.ToEpoch(beaconState.Slot())
domain, err := signing.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(proposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockSigningRoot(proposerPubKey, sig, domain, rootFunc)
}
// VerifyChunkSignatureUsingCurrentFork verifies the proposer signature of a beacon block chunk. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.
func VerifyChunkSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState, chunk interfaces.ReadOnlyBeaconBlockChunk) error {
currentEpoch := slots.ToEpoch(chunk.Slot())
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(chunk.ProposerIndex())
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
sig := chunk.Signature()
return signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
return chunk.HeaderRoot(), nil
})
}
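// Hypothetical usage sketch (not part of this diff; names are illustrative):
// a sync-layer handler could gate chunk acceptance on this check before
// admitting the chunk into the cache.
//
//     func validateChunk(st state.ReadOnlyBeaconState, c interfaces.ReadOnlyBeaconBlockChunk) error {
//         if err := VerifyChunkSignatureUsingCurrentFork(st, c); err != nil {
//             return err // reject: bad proposer signature for this chunk
//         }
//         return nil // safe to cache and consider for decoding
//     }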

View File

@@ -8,7 +8,6 @@ go_library(
"consolidations.go",
"deposits.go",
"effective_balance_updates.go",
"error.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
@@ -56,16 +55,13 @@ go_test(
"deposit_fuzz_test.go",
"deposits_test.go",
"effective_balance_updates_test.go",
"error_test.go",
"export_test.go",
"registry_updates_test.go",
"transition_no_verify_sig_test.go",
"transition_test.go",
"upgrade_test.go",
"validator_test.go",
"withdrawals_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
@@ -90,7 +86,6 @@ go_test(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],

View File

@@ -6,7 +6,6 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -130,57 +129,6 @@ func TestComputeConsolidationEpochAndUpdateChurn(t *testing.T) {
expectedEpoch: 16, // Flows into another epoch.
expectedConsolidationBalanceToConsume: 200000000000, // 200 ETH
},
{
name: "balance to consume is zero, consolidation balance at limit",
state: func(t *testing.T) state.BeaconState {
activeBal := 32000000000000000 // 32M ETH
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 16,
ConsolidationBalanceToConsume: 0,
Validators: createValidatorsWithTotalActiveBalance(primitives.Gwei(activeBal)),
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000),
expectedEpoch: 17, // Flows into another epoch.
expectedConsolidationBalanceToConsume: 0,
},
{
name: "consolidation balance equals consolidation balance to consume",
state: func(t *testing.T) state.BeaconState {
activeBal := 32000000000000000 // 32M ETH
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 16,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(32000000000000000),
Validators: createValidatorsWithTotalActiveBalance(primitives.Gwei(activeBal)),
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000),
expectedEpoch: 16,
expectedConsolidationBalanceToConsume: 0,
},
{
name: "consolidation balance exceeds limit by one",
state: func(t *testing.T) state.BeaconState {
activeBal := 32000000000000000 // 32M ETH
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 16,
ConsolidationBalanceToConsume: 0,
Validators: createValidatorsWithTotalActiveBalance(primitives.Gwei(activeBal)),
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: helpers.ConsolidationChurnLimit(32000000000000000) + 1,
expectedEpoch: 18, // Flows into another epoch.
expectedConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(32000000000000000) - 1,
},
}
for _, tt := range tests {

View File

@@ -185,9 +185,6 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
pcLimit := params.BeaconConfig().PendingConsolidationsLimit
for _, cr := range reqs {
if cr == nil {
return errors.New("nil consolidation request")
}
if ctx.Err() != nil {
return fmt.Errorf("cannot process consolidation requests: %w", ctx.Err())
}
@@ -265,7 +262,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if !helpers.IsActiveValidator(srcV, curEpoch) || !helpers.IsActiveValidatorUsingTrie(tgtV, curEpoch) {
continue
}
// Neither validator is exiting.
if srcV.ExitEpoch != ffe || tgtV.ExitEpoch() != ffe {
continue
}

View File

@@ -209,22 +209,7 @@ func TestProcessConsolidationRequests(t *testing.T) {
state state.BeaconState
reqs []*enginev1.ConsolidationRequest
validate func(*testing.T, state.BeaconState)
wantErr bool
}{
{
name: "nil request",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{}
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
}(),
reqs: []*enginev1.ConsolidationRequest{nil},
validate: func(t *testing.T, st state.BeaconState) {
require.DeepEqual(t, st, st)
},
wantErr: true,
},
{
name: "one valid request",
state: func() state.BeaconState {
@@ -420,13 +405,7 @@ func TestProcessConsolidationRequests(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := electra.ProcessConsolidationRequests(context.TODO(), tt.state, tt.reqs)
if (err != nil) != tt.wantErr {
t.Errorf("ProcessWithdrawalRequests() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !tt.wantErr {
require.NoError(t, err)
}
require.NoError(t, err)
if tt.validate != nil {
tt.validate(t, tt.state)
}

View File

@@ -385,8 +385,14 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
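// pubKeyMap tracks public keys already seen earlier in this batch, so a
// repeat deposit for the same new validator tops up its balance below
// instead of appending a second entry to the registry.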
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -404,8 +410,7 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
_, has := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if has {
if found {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
@@ -587,10 +592,10 @@ func processDepositRequest(beaconState state.BeaconState, request *enginev1.Depo
if err != nil {
return nil, errors.Wrap(err, "could not get deposit requests start index")
}
if request == nil {
return nil, errors.New("nil deposit request")
}
if requestsStartIndex == params.BeaconConfig().UnsetDepositRequestsStartIndex {
if request == nil {
return nil, errors.New("nil deposit request")
}
if err := beaconState.SetDepositRequestsStartIndex(request.Index); err != nil {
return nil, errors.Wrap(err, "could not set deposit requests start index")
}

View File

@@ -333,7 +333,6 @@ func TestProcessDepositRequests(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
sk, err := bls.RandKey()
require.NoError(t, err)
require.NoError(t, st.SetDepositRequestsStartIndex(1))
t.Run("empty requests continues", func(t *testing.T) {
newSt, err := electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{})

View File

@@ -1,16 +0,0 @@
package electra
import "github.com/pkg/errors"
type execReqErr struct {
error
}
// IsExecutionRequestError returns true if the error is or wraps an `execReqErr`.
func IsExecutionRequestError(e error) bool {
if e == nil {
return false
}
var d execReqErr
return errors.As(e, &d)
}

View File

@@ -1,45 +0,0 @@
package electra
import (
"testing"
"github.com/pkg/errors"
)
func TestIsExecutionRequestError(t *testing.T) {
tests := []struct {
name string
err error
want bool
}{
{
name: "nil error",
err: nil,
want: false,
},
{
name: "random error",
err: errors.New("some error"),
want: false,
},
{
name: "execution request error",
err: execReqErr{errors.New("execution request failed")},
want: true,
},
{
name: "wrapped execution request error",
err: errors.Wrap(execReqErr{errors.New("execution request failed")}, "wrapped"),
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := IsExecutionRequestError(tt.err)
if got != tt.want {
t.Errorf("IsExecutionRequestError(%v) = %v, want %v", tt.err, got, tt.want)
}
})
}
}

View File

@@ -2,6 +2,7 @@ package electra
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
@@ -81,31 +82,16 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not get execution requests")
}
for _, d := range requests.Deposits {
if d == nil {
return nil, errors.New("nil deposit request")
}
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process deposit requests")}
}
for _, w := range requests.Withdrawals {
if w == nil {
return nil, errors.New("nil withdrawal request")
}
return nil, errors.Wrap(err, "could not process deposit requests")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process withdrawal requests")}
}
for _, c := range requests.Consolidations {
if c == nil {
return nil, errors.New("nil consolidation request")
}
return nil, errors.Wrap(err, "could not process withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process consolidation requests")}
return nil, fmt.Errorf("could not process consolidation requests: %w", err)
}
return st, nil
}

View File

@@ -1,61 +0,0 @@
package electra_test
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessOperationsWithNilRequests(t *testing.T) {
tests := []struct {
name string
modifyBlk func(blockElectra *ethpb.SignedBeaconBlockElectra)
errMsg string
}{
{
name: "Nil deposit request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Deposits = []*enginev1.DepositRequest{nil}
},
errMsg: "nil deposit request",
},
{
name: "Nil withdrawal request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Withdrawals = []*enginev1.WithdrawalRequest{nil}
},
errMsg: "nil withdrawal request",
},
{
name: "Nil consolidation request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Consolidations = []*enginev1.ConsolidationRequest{nil}
},
errMsg: "nil consolidation request",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
st, ks := util.DeterministicGenesisStateElectra(t, 128)
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tc.modifyBlk(blk)
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
_, err = electra.ProcessOperations(context.Background(), st, b.Block())
require.ErrorContains(t, tc.errMsg, err)
})
}
}

View File

@@ -38,17 +38,6 @@ func TestProcessWithdrawRequests(t *testing.T) {
wantFn func(t *testing.T, got state.BeaconState)
wantErr bool
}{
{
name: "nil request",
args: args{
st: func() state.BeaconState { return st }(),
wrs: []*enginev1.WithdrawalRequest{nil},
},
wantErr: true,
wantFn: func(t *testing.T, got state.BeaconState) {
require.DeepEqual(t, got, nil)
},
},
{
name: "happy path exit and withdrawal only",
args: args{

View File

@@ -32,9 +32,6 @@ const (
// AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
AttesterSlashingReceived = 8
// SingleAttReceived is sent after a single attestation object is received from gossip or rpc
SingleAttReceived = 9
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -46,7 +43,7 @@ type UnAggregatedAttReceivedData struct {
// AggregatedAttReceivedData is the data sent with AggregatedAttReceived events.
type AggregatedAttReceivedData struct {
// Attestation is the aggregated attestation object.
Attestation ethpb.AggregateAttAndProof
Attestation *ethpb.AggregateAttestationAndProof
}
// ExitReceivedData is the data sent with ExitReceived events.
@@ -80,8 +77,3 @@ type ProposerSlashingReceivedData struct {
type AttesterSlashingReceivedData struct {
AttesterSlashing ethpb.AttSlashing
}
// SingleAttReceivedData is the data sent with SingleAttReceived events.
type SingleAttReceivedData struct {
Attestation ethpb.Att
}

View File

@@ -102,7 +102,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
return nil, err
}
s := &ethpb.BeaconStateElectra{
s := &ethpb.BeaconStateFulu{
GenesisTime: beaconState.GenesisTime(),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),

View File

@@ -7,7 +7,6 @@ go_library(
"beacon_committee.go",
"block.go",
"genesis.go",
"legacy.go",
"metrics.go",
"randao.go",
"rewards_penalties.go",
@@ -53,7 +52,6 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"legacy_test.go",
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
@@ -88,6 +86,5 @@ go_test(
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
],
)

View File

@@ -1,20 +0,0 @@
package helpers
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// DepositRequestsStarted determines if the deposit requests have started.
func DepositRequestsStarted(beaconState state.BeaconState) bool {
if beaconState.Version() < version.Electra {
return false
}
requestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return false
}
return beaconState.Eth1DepositIndex() == requestsStartIndex
}

View File

@@ -1,33 +0,0 @@
package helpers_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/stretchr/testify/require"
)
// TestDepositRequestHaveStarted contains several test cases for DepositRequestsStarted.
func TestDepositRequestHaveStarted(t *testing.T) {
t.Run("Version below Electra returns false", func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
result := helpers.DepositRequestsStarted(st)
require.False(t, result)
})
t.Run("Version is Electra or higher, no error, but Eth1DepositIndex != requestsStartIndex returns false", func(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
require.NoError(t, st.SetEth1DepositIndex(1))
result := helpers.DepositRequestsStarted(st)
require.False(t, result)
})
t.Run("Version is Electra or higher, no error, and Eth1DepositIndex == requestsStartIndex returns true", func(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
require.NoError(t, st.SetEth1DepositIndex(33))
require.NoError(t, st.SetDepositRequestsStartIndex(33))
result := helpers.DepositRequestsStarted(st)
require.True(t, result)
})
}

View File

@@ -22,7 +22,7 @@ import (
func BalanceChurnLimit(activeBalance primitives.Gwei) primitives.Gwei {
churn := max(
params.BeaconConfig().MinPerEpochChurnLimitElectra,
uint64(activeBalance)/params.BeaconConfig().ChurnLimitQuotient,
(uint64(activeBalance) / params.BeaconConfig().ChurnLimitQuotient),
)
return primitives.Gwei(churn - churn%params.BeaconConfig().EffectiveBalanceIncrement)
}
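// Worked example (mainnet-style parameters, assumed for illustration:
// MinPerEpochChurnLimitElectra = 128 ETH, ChurnLimitQuotient = 65536,
// EffectiveBalanceIncrement = 1 ETH): with 32,000,000 ETH of active balance,
// 32,000,000,000,000,000 Gwei / 65536 = 488,281,250,000 Gwei, which exceeds
// the 128 ETH floor and rounds down to the nearest increment, yielding a
// churn limit of 488,000,000,000 Gwei (488 ETH) per epoch.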

View File

@@ -393,9 +393,9 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
effectiveBal := v.EffectiveBalance()
if bState.Version() >= version.Electra {
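// Each 32-byte hash output is consumed as sixteen little-endian 2-byte
// values: the seed buffer is refreshed once per 16 candidates (i/16), and
// candidate i reads bytes [2*(i%16), 2*(i%16)+1]. A candidate is then
// accepted with probability effective_balance / MAX_EFFECTIVE_BALANCE_ELECTRA.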
binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/16)
randomBytes := hashFunc(seedBuffer)
randomByte := hashFunc(seedBuffer)
offset := (i % 16) * 2
randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8
randomValue := uint64(randomByte[offset]) | uint64(randomByte[offset+1])<<8
if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
return candidateIndex, nil
@@ -579,7 +579,7 @@ func IsPartiallyWithdrawableValidator(val state.ReadOnlyValidator, balance uint6
// """
// Check if ``validator`` is partially withdrawable.
// """
// max_effective_balance = get_max_effective_balance(validator)
// max_effective_balance = get_validator_max_effective_balance(validator)
// has_max_effective_balance = validator.effective_balance == max_effective_balance # [Modified in Electra:EIP7251]
// has_excess_balance = balance > max_effective_balance # [Modified in Electra:EIP7251]
// return (
@@ -619,7 +619,7 @@ func isPartiallyWithdrawableValidatorCapella(val state.ReadOnlyValidator, balanc
//
// Spec definition:
//
// def get_max_effective_balance(validator: Validator) -> Gwei:
// def get_validator_max_effective_balance(validator: Validator) -> Gwei:
// """
// Get max effective balance for ``validator``.
// """

View File

@@ -22,15 +22,6 @@ import (
)
func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
cfg.BellatrixForkEpoch = 2
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(cfg)
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
@@ -68,31 +59,9 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
})
t.Run("Electra", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestElectra(false)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
})
}
func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 1
cfg.BellatrixForkEpoch = 2
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(cfg)
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
@@ -387,157 +356,6 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
require.DeepSSZEqual(t, execution, updateExecution.Proto(), "Finalized Block Execution is not equal")
})
})
t.Run("Electra", func(t *testing.T) {
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestElectra(false)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
//zeroHash := params.BeaconConfig().ZeroHash[:]
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
fb, err := update.FinalityBranchElectra()
require.NoError(t, err)
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
// Check Execution BlockHash
payloadInterface, err := l.FinalizedBlock.Block().Body().Execution()
require.NoError(t, err)
transactionsRoot, err := payloadInterface.TransactionsRoot()
if errors.Is(err, consensustypes.ErrUnsupportedField) {
transactions, err := payloadInterface.Transactions()
require.NoError(t, err)
transactionsRootArray, err := ssz.TransactionsRoot(transactions)
require.NoError(t, err)
transactionsRoot = transactionsRootArray[:]
} else {
require.NoError(t, err)
}
withdrawalsRoot, err := payloadInterface.WithdrawalsRoot()
if errors.Is(err, consensustypes.ErrUnsupportedField) {
withdrawals, err := payloadInterface.Withdrawals()
require.NoError(t, err)
withdrawalsRootArray, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
withdrawalsRoot = withdrawalsRootArray[:]
} else {
require.NoError(t, err)
}
execution := &v11.ExecutionPayloadHeaderDeneb{
ParentHash: payloadInterface.ParentHash(),
FeeRecipient: payloadInterface.FeeRecipient(),
StateRoot: payloadInterface.StateRoot(),
ReceiptsRoot: payloadInterface.ReceiptsRoot(),
LogsBloom: payloadInterface.LogsBloom(),
PrevRandao: payloadInterface.PrevRandao(),
BlockNumber: payloadInterface.BlockNumber(),
GasLimit: payloadInterface.GasLimit(),
GasUsed: payloadInterface.GasUsed(),
Timestamp: payloadInterface.Timestamp(),
ExtraData: payloadInterface.ExtraData(),
BaseFeePerGas: payloadInterface.BaseFeePerGas(),
BlockHash: payloadInterface.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
updateExecution, err := update.FinalizedHeader().Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, execution, updateExecution.Proto(), "Finalized Block Execution is not equal")
})
t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestElectraFinalizedBlockDeneb(false)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
fb, err := update.FinalityBranchElectra()
require.NoError(t, err)
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
// Check Execution BlockHash
payloadInterface, err := l.FinalizedBlock.Block().Body().Execution()
require.NoError(t, err)
transactionsRoot, err := payloadInterface.TransactionsRoot()
if errors.Is(err, consensustypes.ErrUnsupportedField) {
transactions, err := payloadInterface.Transactions()
require.NoError(t, err)
transactionsRootArray, err := ssz.TransactionsRoot(transactions)
require.NoError(t, err)
transactionsRoot = transactionsRootArray[:]
} else {
require.NoError(t, err)
}
withdrawalsRoot, err := payloadInterface.WithdrawalsRoot()
if errors.Is(err, consensustypes.ErrUnsupportedField) {
withdrawals, err := payloadInterface.Withdrawals()
require.NoError(t, err)
withdrawalsRootArray, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
withdrawalsRoot = withdrawalsRootArray[:]
} else {
require.NoError(t, err)
}
execution := &v11.ExecutionPayloadHeaderDeneb{
ParentHash: payloadInterface.ParentHash(),
FeeRecipient: payloadInterface.FeeRecipient(),
StateRoot: payloadInterface.StateRoot(),
ReceiptsRoot: payloadInterface.ReceiptsRoot(),
LogsBloom: payloadInterface.LogsBloom(),
PrevRandao: payloadInterface.PrevRandao(),
BlockNumber: payloadInterface.BlockNumber(),
GasLimit: payloadInterface.GasLimit(),
GasUsed: payloadInterface.GasUsed(),
Timestamp: payloadInterface.Timestamp(),
ExtraData: payloadInterface.ExtraData(),
BaseFeePerGas: payloadInterface.BaseFeePerGas(),
BlockHash: payloadInterface.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
updateExecution, err := update.FinalizedHeader().Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, execution, updateExecution.Proto(), "Finalized Block Execution is not equal")
})
})
}
func TestLightClient_BlockToLightClientHeader(t *testing.T) {

View File

@@ -403,15 +403,11 @@ func VerifyOperationLengths(_ context.Context, state state.BeaconState, b interf
)
}
maxSlashings := params.BeaconConfig().MaxAttesterSlashings
if body.Version() >= version.Electra {
maxSlashings = params.BeaconConfig().MaxAttesterSlashingsElectra
}
if uint64(len(body.AttesterSlashings())) > maxSlashings {
if uint64(len(body.AttesterSlashings())) > params.BeaconConfig().MaxAttesterSlashings {
return nil, fmt.Errorf(
"number of attester slashings (%d) in block body exceeds allowed threshold of %d",
len(body.AttesterSlashings()),
maxSlashings,
params.BeaconConfig().MaxAttesterSlashings,
)
}

View File

@@ -180,12 +180,6 @@ func ProcessBlockNoVerifyAnySig(
return nil, nil, err
}
sig := signed.Signature()
bSet, err := b.BlockSignatureBatch(st, blk.ProposerIndex(), sig[:], blk.HashTreeRoot)
if err != nil {
tracing.AnnotateError(span, err)
return nil, nil, errors.Wrap(err, "could not retrieve block signature set")
}
randaoReveal := signed.Block().Body().RandaoReveal()
rSet, err := b.RandaoSignatureBatch(ctx, st, randaoReveal[:])
if err != nil {
@@ -199,7 +193,7 @@ func ProcessBlockNoVerifyAnySig(
// Merge beacon block, randao and attestations signatures into a set.
set := bls.NewSet()
set.Join(bSet).Join(rSet).Join(aSet)
set.Join(rSet).Join(aSet)
if blk.Version() >= version.Capella {
changes, err := signed.Block().Body().BLSToExecutionChanges()

View File

@@ -158,9 +158,8 @@ func TestProcessBlockNoVerify_SigSetContainsDescriptions(t *testing.T) {
set, _, err := transition.ProcessBlockNoVerifyAnySig(context.Background(), beaconState, wsb)
require.NoError(t, err)
assert.Equal(t, len(set.Signatures), len(set.Descriptions), "Signatures and descriptions do not match up")
assert.Equal(t, "block signature", set.Descriptions[0])
assert.Equal(t, "randao signature", set.Descriptions[1])
assert.Equal(t, "attestation signature", set.Descriptions[2])
assert.Equal(t, "randao signature", set.Descriptions[0])
assert.Equal(t, "attestation signature", set.Descriptions[1])
}
func TestProcessOperationsNoVerifyAttsSigs_OK(t *testing.T) {

View File

@@ -437,25 +437,6 @@ func TestProcessBlock_OverMaxAttesterSlashings(t *testing.T) {
assert.ErrorContains(t, want, err)
}
func TestProcessBlock_OverMaxAttesterSlashingsElectra(t *testing.T) {
maxSlashings := params.BeaconConfig().MaxAttesterSlashingsElectra
b := &ethpb.SignedBeaconBlockElectra{
Block: &ethpb.BeaconBlockElectra{
Body: &ethpb.BeaconBlockBodyElectra{
AttesterSlashings: make([]*ethpb.AttesterSlashingElectra, maxSlashings+1),
},
},
}
want := fmt.Sprintf("number of attester slashings (%d) in block body exceeds allowed threshold of %d",
len(b.Block.Body.AttesterSlashings), params.BeaconConfig().MaxAttesterSlashingsElectra)
s, err := state_native.InitializeFromProtoUnsafeElectra(&ethpb.BeaconStateElectra{})
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = transition.VerifyOperationLengths(context.Background(), s, wsb.Block())
assert.ErrorContains(t, want, err)
}
func TestProcessBlock_OverMaxAttestations(t *testing.T) {
b := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{

View File

@@ -75,7 +75,7 @@ func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primiti
// Compute exit queue epoch.
if s.Version() < version.Electra {
// Relevant spec code from phase0:
// Relevant spec code from deneb:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])

View File

@@ -94,7 +94,14 @@ func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current pri
entry := s.cache.ensure(key)
defer s.cache.delete(key)
root := b.Root()
entry.setDiskSummary(s.store.Summary(root))
sumz, err := s.store.WaitForSummarizer(ctx)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", b.Root())).
WithError(err).
Debug("Failed to receive BlobStorageSummarizer within IsDataAvailable")
} else {
entry.setDiskSummary(sumz.Summary(root))
}
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather

View File

@@ -5,10 +5,6 @@ go_library(
srcs = [
"blob.go",
"cache.go",
"iteration.go",
"layout.go",
"layout_by_epoch.go",
"layout_flat.go",
"log.go",
"metrics.go",
"mock.go",
@@ -17,7 +13,6 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -25,6 +20,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//io/file:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/logging:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
@@ -41,14 +37,10 @@ go_test(
srcs = [
"blob_test.go",
"cache_test.go",
"iteration_test.go",
"layout_test.go",
"migration_test.go",
"pruner_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -56,7 +48,6 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
],

View File

@@ -1,31 +1,42 @@
package filesystem
import (
"context"
"fmt"
"math"
"os"
"path"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/io/file"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/logging"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
func directoryPermissions() os.FileMode {
return params.BeaconIoConfig().ReadWriteExecutePermissions
}
var (
errIndexOutOfBounds = errors.New("blob index in file name >= MAX_BLOBS_PER_BLOCK")
errIndexOutOfBounds = errors.New("blob index in file name >= DeprecatedMaxBlobsPerBlock")
errEmptyBlobWritten = errors.New("zero bytes written to disk when saving blob sidecar")
errSidecarEmptySSZData = errors.New("sidecar marshalled to an empty ssz byte slice")
errNoBasePath = errors.New("BlobStorage base path not specified in init")
errInvalidRootString = errors.New("Could not parse hex string as a [32]byte")
)
const (
sszExt = "ssz"
partExt = "part"
directoryPermissions = 0700
)
// BlobStorageOption is a functional option for configuring a BlobStorage.
@@ -55,23 +66,6 @@ func WithSaveFsync(fsync bool) BlobStorageOption {
}
}
// WithFs allows the afero.Fs implementation to be customized. Used by tests
// to substitute an in-memory filesystem.
func WithFs(fs afero.Fs) BlobStorageOption {
return func(b *BlobStorage) error {
b.fs = fs
return nil
}
}
// WithLayout enables the user to specify which layout scheme to use, dictating how blob files are stored on disk.
func WithLayout(name string) BlobStorageOption {
return func(b *BlobStorage) error {
b.layoutName = name
return nil
}
}
// NewBlobStorage creates a new instance of the BlobStorage object. Note that the implementation of BlobStorage may
// attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
// initialized once per beacon node.
@@ -82,27 +76,19 @@ func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
return nil, errors.Wrap(err, "failed to create blob storage")
}
}
// Allow tests to set up a different fs using WithFs.
if b.fs == nil {
if b.base == "" {
return nil, errNoBasePath
}
b.base = path.Clean(b.base)
if err := file.MkdirAll(b.base); err != nil {
return nil, errors.Wrapf(err, "failed to create blob storage at %s", b.base)
}
b.fs = afero.NewBasePathFs(afero.NewOsFs(), b.base)
if b.base == "" {
return nil, errNoBasePath
}
b.cache = newBlobStorageCache()
pruner := newBlobPruner(b.retentionEpochs)
if b.layoutName == "" {
b.layoutName = LayoutNameFlat
b.base = path.Clean(b.base)
if err := file.MkdirAll(b.base); err != nil {
return nil, errors.Wrapf(err, "failed to create blob storage at %s", b.base)
}
layout, err := newLayout(b.layoutName, b.fs, b.cache, pruner)
b.fs = afero.NewBasePathFs(afero.NewOsFs(), b.base)
pruner, err := newBlobPruner(b.fs, b.retentionEpochs)
if err != nil {
return nil, err
}
b.layout = layout
b.pruner = pruner
return b, nil
}
@@ -110,103 +96,47 @@ func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
type BlobStorage struct {
base string
retentionEpochs primitives.Epoch
layoutName string
fsync bool
fs afero.Fs
layout fsLayout
cache *blobStorageSummaryCache
pruner *blobPruner
}
// WarmCache runs the prune routine with an expiration slot of 0, so nothing will be pruned, but the pruner's cache
// will be populated at node startup, avoiding a costly cold prune (~4s in syscalls) during syncing.
func (bs *BlobStorage) WarmCache() {
start := time.Now()
if bs.layoutName == LayoutNameFlat {
if bs.pruner == nil {
return
}
go func() {
start := time.Now()
log.Info("Blob filesystem cache warm-up started. This may take a few minutes.")
} else {
log.Info("Blob filesystem cache warm-up started.")
}
if err := warmCache(bs.layout, bs.cache); err != nil {
log.WithError(err).Error("Error encountered while warming up blob filesystem cache.")
}
if err := bs.migrateLayouts(); err != nil {
log.WithError(err).Error("Error encountered while migrating blob storage.")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
}
// If any blob storage directories are found for layouts besides the configured layout, migrate them.
func (bs *BlobStorage) migrateLayouts() error {
for _, name := range LayoutNames {
if name == bs.layoutName {
continue
}
from, err := newLayout(name, bs.fs, bs.cache, nil)
if err != nil {
return err
}
if err := migrateLayout(bs.fs, from, bs.layout, bs.cache); err != nil {
if errors.Is(err, errLayoutNotDetected) {
continue
}
return errors.Wrapf(err, "failed to migrate layout from %s to %s", name, bs.layoutName)
}
}
return nil
}
func (bs *BlobStorage) writePart(sidecar blocks.VerifiedROBlob) (ppath string, err error) {
ident := identForSidecar(sidecar)
sidecarData, err := sidecar.MarshalSSZ()
if err != nil {
return "", errors.Wrap(err, "failed to serialize sidecar data")
}
if len(sidecarData) == 0 {
return "", errSidecarEmptySSZData
}
if err := bs.fs.MkdirAll(bs.layout.dir(ident), directoryPermissions()); err != nil {
return "", err
}
ppath = bs.layout.partPath(ident, fmt.Sprintf("%p", sidecarData))
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(ppath)
if err != nil {
return "", errors.Wrap(err, "failed to create partial file")
}
defer func() {
cerr := partialFile.Close()
// The close error is probably less important than any existing error, so only overwrite nil err.
if cerr != nil && err == nil {
err = cerr
if err := bs.pruner.warmCache(); err != nil {
log.WithError(err).Error("Error encountered while warming up blob pruner cache")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete")
}()
}
n, err := partialFile.Write(sidecarData)
if err != nil {
return ppath, errors.Wrap(err, "failed to write to partial file")
}
if bs.fsync {
if err := partialFile.Sync(); err != nil {
return ppath, err
}
}
// ErrBlobStorageSummarizerUnavailable is a sentinel error returned when there is no pruner/cache available.
// This should be used by code that optionally uses the summarizer to optimize rpc requests. Being able to
// fall back when there is no summarizer allows client code to avoid test complexity where the summarizer doesn't matter.
var ErrBlobStorageSummarizerUnavailable = errors.New("BlobStorage not initialized with a pruner or cache")
if n != len(sidecarData) {
return ppath, fmt.Errorf("failed to write the full bytes of sidecarData, wrote only %d of %d bytes", n, len(sidecarData))
// WaitForSummarizer blocks until the BlobStorageSummarizer is ready to use.
// BlobStorageSummarizer is not ready immediately on node startup because it needs to sample the blob filesystem to
// determine which blobs are available.
func (bs *BlobStorage) WaitForSummarizer(ctx context.Context) (BlobStorageSummarizer, error) {
if bs == nil || bs.pruner == nil {
return nil, ErrBlobStorageSummarizerUnavailable
}
return ppath, nil
return bs.pruner.waitForCache(ctx)
}
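// Hypothetical caller sketch: code that only uses the summarizer as an
// optimization should treat the sentinel error as a soft failure, e.g.
//
//     sumz, err := bs.WaitForSummarizer(ctx)
//     if err != nil {
//         // No pruner/cache configured; fall back to bs.Indices or skip the optimization.
//     } else {
//         summary := sumz.Summary(root)
//         _ = summary.HasIndex(0)
//     }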
// Save saves blobs given a list of sidecars.
func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
startTime := time.Now()
ident := identForSidecar(sidecar)
sszPath := bs.layout.sszPath(ident)
fname := namerForSidecar(sidecar)
sszPath := fname.path()
exists, err := afero.Exists(bs.fs, sszPath)
if err != nil {
return err
@@ -215,36 +145,78 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
log.WithFields(logging.BlobFields(sidecar.ROBlob)).Debug("Ignoring a duplicate blob sidecar save attempt")
return nil
}
if bs.pruner != nil {
if err := bs.pruner.notify(sidecar.BlockRoot(), sidecar.Slot(), sidecar.Index); err != nil {
return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", sidecar.BlockRoot())
}
}
// Serialize the ethpb.BlobSidecar to binary data using SSZ.
sidecarData, err := sidecar.MarshalSSZ()
if err != nil {
return errors.Wrap(err, "failed to serialize sidecar data")
} else if len(sidecarData) == 0 {
return errSidecarEmptySSZData
}
if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
return err
}
partPath := fname.partPath(fmt.Sprintf("%p", sidecarData))
partialMoved := false
partPath, err := bs.writePart(sidecar)
// Ensure the partial file is deleted.
defer func() {
if partialMoved || partPath == "" {
if partialMoved {
return
}
// It's expected to error if the save is successful.
err := bs.fs.Remove(partPath)
err = bs.fs.Remove(partPath)
if err == nil {
log.WithFields(logrus.Fields{
"partPath": partPath,
}).Debugf("Removed partial file")
}
}()
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(partPath)
if err != nil {
return errors.Wrap(err, "failed to create partial file")
}
n, err := partialFile.Write(sidecarData)
if err != nil {
closeErr := partialFile.Close()
if closeErr != nil {
return closeErr
}
return errors.Wrap(err, "failed to write to partial file")
}
if bs.fsync {
if err := partialFile.Sync(); err != nil {
return err
}
}
if err := partialFile.Close(); err != nil {
return err
}
if n != len(sidecarData) {
return fmt.Errorf("failed to write the full bytes of sidecarData, wrote only %d of %d bytes", n, len(sidecarData))
}
if n == 0 {
return errEmptyBlobWritten
}
// Atomically rename the partial file to its final name.
err = bs.fs.Rename(partPath, sszPath)
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
partialMoved = true
if err := bs.layout.notify(ident); err != nil {
return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", sidecar.BlockRoot())
}
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
@@ -256,30 +228,70 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
// value is always a VerifiedROBlob.
func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, error) {
startTime := time.Now()
ident, err := bs.layout.ident(root, idx)
expected := blobNamer{root: root, index: idx}
encoded, err := afero.ReadFile(bs.fs, expected.path())
var v blocks.VerifiedROBlob
if err != nil {
return verification.VerifiedROBlobError(err)
return v, err
}
s := &ethpb.BlobSidecar{}
if err := s.UnmarshalSSZ(encoded); err != nil {
return v, err
}
ro, err := blocks.NewROBlobWithRoot(s, root)
if err != nil {
return blocks.VerifiedROBlob{}, err
}
defer func() {
blobFetchLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
return verification.VerifiedROBlobFromDisk(bs.fs, root, bs.layout.sszPath(ident))
return verification.BlobSidecarNoop(ro)
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
dirIdent, err := bs.layout.dirIdent(root)
if err != nil {
return err
}
_, err = bs.layout.remove(dirIdent)
return err
rootDir := blobNamer{root: root}.dir()
return bs.fs.RemoveAll(rootDir)
}
// Summary returns the BlobStorageSummary from the layout.
// Internally, this is a cached representation of the directory listing for the given root.
func (bs *BlobStorage) Summary(root [32]byte) BlobStorageSummary {
return bs.layout.summary(root)
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.
func (bs *BlobStorage) Indices(root [32]byte, s primitives.Slot) ([]bool, error) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s)
mask := make([]bool, maxBlobsPerBlock)
rootDir := blobNamer{root: root}.dir()
entries, err := afero.ReadDir(bs.fs, rootDir)
if err != nil {
if os.IsNotExist(err) {
return mask, nil
}
return mask, err
}
for i := range entries {
if entries[i].IsDir() {
continue
}
name := entries[i].Name()
if !strings.HasSuffix(name, sszExt) {
continue
}
parts := strings.Split(name, ".")
if len(parts) != 2 {
continue
}
u, err := strconv.ParseUint(parts[0], 10, 64)
if err != nil {
return mask, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
}
if u >= uint64(maxBlobsPerBlock) {
return mask, errIndexOutOfBounds
}
mask[u] = true
}
return mask, nil
}
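// Usage sketch (hypothetical caller, mirroring the doc comment above;
// commitmentCount stands in for the number of commitments in the block):
//
//     mask, err := bs.Indices(root, slot)
//     if err != nil {
//         return err
//     }
//     for i := 0; i < commitmentCount; i++ {
//         if !mask[i] {
//             // BlobSidecar i is missing and must be fetched from peers.
//         }
//     }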
// Clear deletes all files on the filesystem.
@@ -304,3 +316,36 @@ func (bs *BlobStorage) WithinRetentionPeriod(requested, current primitives.Epoch
}
return requested+bs.retentionEpochs >= current
}
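// Worked example (assuming retentionEpochs = 4096, the spec's
// MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS default): a blob from epoch 1000
// remains within the retention period while current <= 5096, since
// 1000 + 4096 >= current must hold.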
type blobNamer struct {
root [32]byte
index uint64
}
func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
return blobNamer{root: sc.BlockRoot(), index: sc.Index}
}
func (p blobNamer) dir() string {
return rootString(p.root)
}
func (p blobNamer) partPath(entropy string) string {
return path.Join(p.dir(), fmt.Sprintf("%s-%d.%s", entropy, p.index, partExt))
}
func (p blobNamer) path() string {
return path.Join(p.dir(), fmt.Sprintf("%d.%s", p.index, sszExt))
}
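// Illustrative paths (hypothetical root value): for root 0xab..cd and index 2,
// dir() is "0xab..cd" and path() is "0xab..cd/2.ssz"; Save writes through a
// part file first, e.g. "0xab..cd/0xc000123456-2.part", then renames it.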
func rootString(root [32]byte) string {
return fmt.Sprintf("%#x", root)
}
func stringToRoot(str string) ([32]byte, error) {
slice, err := hexutil.Decode(str)
if err != nil {
return [32]byte{}, errors.Wrapf(errInvalidRootString, "input=%s", str)
}
return bytesutil.ToBytes32(slice), nil
}

View File

@@ -9,26 +9,26 @@ import (
"testing"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
func TestBlobStorage_SaveBlobData(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, params.BeaconConfig().MaxBlobsPerBlock(1))
testSidecars := verification.FakeVerifySliceForTest(t, sidecars)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
t.Run("no error for duplicate", func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
fs, bs := NewEphemeralBlobStorageWithFs(t)
existingSidecar := testSidecars[0]
blobPath := bs.layout.sszPath(identForSidecar(existingSidecar))
blobPath := namerForSidecar(existingSidecar).path()
// Serialize the existing BlobSidecar to binary data.
existingSidecarData, err := ssz.MarshalSSZ(existingSidecar)
require.NoError(t, err)
@@ -56,8 +56,8 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, bs.Save(sc))
actualSc, err := bs.Get(sc.BlockRoot(), sc.Index)
require.NoError(t, err)
expectedIdx := blobIndexMask{false, false, true, false, false, false}
actualIdx := bs.Summary(actualSc.BlockRoot()).mask
expectedIdx := []bool{false, false, true, false, false, false}
actualIdx, err := bs.Indices(actualSc.BlockRoot(), 100)
require.NoError(t, err)
require.DeepEqual(t, expectedIdx, actualIdx)
})
@@ -85,7 +85,7 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, bs.Remove(expected.BlockRoot()))
_, err = bs.Get(expected.BlockRoot(), expected.Index)
require.Equal(t, true, db.IsNotFound(err))
require.ErrorContains(t, "file does not exist", err)
})
t.Run("clear", func(t *testing.T) {
@@ -126,14 +126,16 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
})
}
// pollUntil polls a condition function until it returns true or a timeout is reached.
func TestBlobIndicesBounds(t *testing.T) {
fs := afero.NewMemMapFs()
fs, bs := NewEphemeralBlobStorageWithFs(t)
root := [32]byte{}
okIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0)) - 1
writeFakeSSZ(t, fs, root, 0, okIdx)
bs := NewWarmedEphemeralBlobStorageUsingFs(t, fs, WithLayout(LayoutNameByEpoch))
indices := bs.Summary(root).mask
writeFakeSSZ(t, fs, root, okIdx)
indices, err := bs.Indices(root, 100)
require.NoError(t, err)
expected := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
expected[okIdx] = true
for i := range expected {
@@ -141,23 +143,102 @@ func TestBlobIndicesBounds(t *testing.T) {
}
oobIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
writeFakeSSZ(t, fs, root, 0, oobIdx)
// This now fails at cache warmup time.
require.ErrorIs(t, warmCache(bs.layout, bs.cache), errIndexOutOfBounds)
writeFakeSSZ(t, fs, root, oobIdx)
_, err = bs.Indices(root, 100)
require.ErrorIs(t, err, errIndexOutOfBounds)
}
func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, slot primitives.Slot, idx uint64) {
epoch := slots.ToEpoch(slot)
namer := newBlobIdent(root, epoch, idx)
layout := periodicEpochLayout{}
require.NoError(t, fs.MkdirAll(layout.dir(namer), 0700))
fh, err := fs.Create(layout.sszPath(namer))
func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, idx uint64) {
namer := blobNamer{root: root, index: idx}
require.NoError(t, fs.MkdirAll(namer.dir(), 0700))
fh, err := fs.Create(namer.path())
require.NoError(t, err)
_, err = fh.Write([]byte("derp"))
require.NoError(t, err)
require.NoError(t, fh.Close())
}
func TestBlobStoragePrune(t *testing.T) {
currentSlot := primitives.Slot(200000)
fs, bs := NewEphemeralBlobStorageWithFs(t)
t.Run("PruneOne", func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 300, params.BeaconConfig().MaxBlobsPerBlock(0))
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
for _, sidecar := range testSidecars {
require.NoError(t, bs.Save(sidecar))
}
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
require.Equal(t, 0, len(remainingFolders))
})
t.Run("Prune dangling blob", func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 299, params.BeaconConfig().MaxBlobsPerBlock(0))
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
for _, sidecar := range testSidecars[4:] {
require.NoError(t, bs.Save(sidecar))
}
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
require.Equal(t, 0, len(remainingFolders))
})
t.Run("PruneMany", func(t *testing.T) {
blockQty := 10
slot := primitives.Slot(1)
for j := 0; j <= blockQty; j++ {
root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, params.BeaconConfig().MaxBlobsPerBlock(0))
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(testSidecars[0]))
slot += 10000
}
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
require.Equal(t, 4, len(remainingFolders))
})
}
func BenchmarkPruning(b *testing.B) {
var t *testing.T
_, bs := NewEphemeralBlobStorageWithFs(t)
blockQty := 10000
currentSlot := primitives.Slot(150000)
slot := primitives.Slot(0)
for j := 0; j <= blockQty; j++ {
root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, params.BeaconConfig().MaxBlobsPerBlock(0))
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(testSidecars[0]))
slot += 100
}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := bs.pruner.prune(currentSlot)
require.NoError(b, err)
}
}
func TestNewBlobStorage(t *testing.T) {
_, err := NewBlobStorage()
require.ErrorIs(t, err, errNoBasePath)
@@ -211,13 +292,3 @@ func TestConfig_WithinRetentionPeriod(t *testing.T) {
require.Equal(t, true, storage.WithinRetentionPeriod(1, 1))
})
}
func TestLayoutNames(t *testing.T) {
badLayoutName := "bad"
for _, name := range LayoutNames {
_, err := newLayout(name, nil, nil, nil)
require.NoError(t, err)
}
_, err := newLayout(badLayoutName, nil, nil, nil)
require.ErrorIs(t, err, errInvalidLayoutName)
}

View File

@@ -1,10 +1,8 @@
package filesystem
import (
"fmt"
"sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -15,12 +13,17 @@ type blobIndexMask []bool
// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
epoch primitives.Epoch
mask blobIndexMask
slot primitives.Slot
mask blobIndexMask
}
// HasIndex returns true if the BlobSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s.slot)
if idx >= uint64(maxBlobsPerBlock) {
return false
}
if idx >= uint64(len(s.mask)) {
return false
}
@@ -29,6 +32,10 @@ func (s BlobStorageSummary) HasIndex(idx uint64) bool {
// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s.slot)
if count > maxBlobsPerBlock {
return false
}
if count > len(s.mask) {
return false
}
@@ -40,121 +47,83 @@ func (s BlobStorageSummary) AllAvailable(count int) bool {
return true
}
func (s BlobStorageSummary) MaxBlobsForEpoch() uint64 {
return uint64(params.BeaconConfig().MaxBlobsPerBlockAtEpoch(s.epoch))
}
// NewBlobStorageSummary creates a new BlobStorageSummary for a given epoch and mask.
func NewBlobStorageSummary(epoch primitives.Epoch, mask []bool) (BlobStorageSummary, error) {
c := params.BeaconConfig().MaxBlobsPerBlockAtEpoch(epoch)
if len(mask) != c {
return BlobStorageSummary{}, fmt.Errorf("mask length %d does not match expected %d for epoch %d", len(mask), c, epoch)
}
return BlobStorageSummary{
epoch: epoch,
mask: mask,
}, nil
}
// BlobStorageSummarizer can be used to receive a summary of metadata about blobs on disk for a given root.
// The BlobStorageSummary can be used to check which indices (if any) are available for a given block by root.
type BlobStorageSummarizer interface {
Summary(root [32]byte) BlobStorageSummary
}
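// Usage sketch (illustrative; assumes a BlobStorageSummarizer value `sc` and a
// block root `root`):
//
//	sum := sc.Summary(root)
//	if sum.AllAvailable(2) {
//		// sidecars 0 and 1 are both on disk for this root
//	}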
type blobStorageSummaryCache struct {
type blobStorageCache struct {
mu sync.RWMutex
nBlobs float64
cache map[[32]byte]BlobStorageSummary
}
var _ BlobStorageSummarizer = &blobStorageSummaryCache{}
var _ BlobStorageSummarizer = &blobStorageCache{}
func newBlobStorageCache() *blobStorageSummaryCache {
return &blobStorageSummaryCache{
cache: make(map[[32]byte]BlobStorageSummary),
func newBlobStorageCache() *blobStorageCache {
return &blobStorageCache{
cache: make(map[[32]byte]BlobStorageSummary, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest*fieldparams.SlotsPerEpoch),
}
}
// Summary returns the BlobStorageSummary for `root`. The BlobStorageSummary can be used to check for the presence of
// BlobSidecars based on Index.
func (s *blobStorageSummaryCache) Summary(root [32]byte) BlobStorageSummary {
func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
s.mu.RLock()
defer s.mu.RUnlock()
return s.cache[root]
}
func (s *blobStorageSummaryCache) ensure(ident blobIdent) error {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlockAtEpoch(ident.epoch)
if ident.index >= uint64(maxBlobsPerBlock) {
func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64) error {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if idx >= uint64(maxBlobsPerBlock) {
return errIndexOutOfBounds
}
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[ident.root]
v.epoch = ident.epoch
v := s.cache[key]
v.slot = slot
if v.mask == nil {
v.mask = make(blobIndexMask, maxBlobsPerBlock)
}
if !v.mask[ident.index] {
if !v.mask[idx] {
s.updateMetrics(1)
}
v.mask[ident.index] = true
s.cache[ident.root] = v
v.mask[idx] = true
s.cache[key] = v
return nil
}
func (s *blobStorageSummaryCache) get(key [32]byte) (BlobStorageSummary, bool) {
func (s *blobStorageCache) slot(key [32]byte) (primitives.Slot, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
v, ok := s.cache[key]
return v, ok
}
func (s *blobStorageSummaryCache) identForIdx(key [32]byte, idx uint64) (blobIdent, error) {
v, ok := s.get(key)
if !ok || !v.HasIndex(idx) {
return blobIdent{}, db.ErrNotFound
}
return blobIdent{
root: key,
index: idx,
epoch: v.epoch,
}, nil
}
func (s *blobStorageSummaryCache) identForRoot(key [32]byte) (blobIdent, error) {
v, ok := s.get(key)
if !ok {
return blobIdent{}, db.ErrNotFound
return 0, false
}
return blobIdent{
root: key,
epoch: v.epoch,
}, nil
return v.slot, ok
}
func (s *blobStorageSummaryCache) evict(key [32]byte) int {
deleted := 0
func (s *blobStorageCache) evict(key [32]byte) {
var deleted float64
s.mu.Lock()
defer s.mu.Unlock()
v, ok := s.cache[key]
if !ok {
return 0
}
for i := range v.mask {
if v.mask[i] {
deleted += 1
if ok {
for i := range v.mask {
if v.mask[i] {
deleted += 1
}
}
}
delete(s.cache, key)
s.mu.Unlock()
if deleted > 0 {
s.updateMetrics(-float64(deleted))
s.updateMetrics(-deleted)
}
return deleted
}
func (s *blobStorageSummaryCache) updateMetrics(delta float64) {
func (s *blobStorageCache) updateMetrics(delta float64) {
s.nBlobs += delta
blobDiskCount.Set(s.nBlobs)
blobDiskSize.Set(s.nBlobs * fieldparams.BlobSidecarSize)

View File

@@ -53,7 +53,7 @@ func TestSlotByRoot_Summary(t *testing.T) {
for _, c := range cases {
if c.expected != nil {
key := bytesutil.ToBytes32([]byte(c.name))
sc.cache[key] = BlobStorageSummary{epoch: 0, mask: c.expected}
sc.cache[key] = BlobStorageSummary{slot: 0, mask: c.expected}
}
}
for _, c := range cases {

View File

@@ -1,237 +0,0 @@
package filesystem
import (
"fmt"
"io"
"path/filepath"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
var errIdentFailure = errors.New("failed to determine blob metadata, ignoring all sub-paths.")
type identificationError struct {
err error
path string
ident blobIdent
}
func (ide *identificationError) Error() string {
return fmt.Sprintf("%s path=%s, err=%s", errIdentFailure.Error(), ide.path, ide.err.Error())
}
func (ide *identificationError) Unwrap() error {
return ide.err
}
func (*identificationError) Is(err error) bool {
return err == errIdentFailure
}
func (ide *identificationError) LogFields() logrus.Fields {
fields := ide.ident.logFields()
fields["path"] = ide.path
return fields
}
func newIdentificationError(path string, ident blobIdent, err error) *identificationError {
return &identificationError{path: path, ident: ident, err: err}
}
func listDir(fs afero.Fs, dir string) ([]string, error) {
top, err := fs.Open(dir)
if err != nil {
return nil, errors.Wrap(err, "failed to open directory descriptor")
}
defer func() {
if err := top.Close(); err != nil {
log.WithError(err).Errorf("Could not close file %s", dir)
}
}()
// re the -1 param: "If n <= 0, Readdirnames returns all the names from the directory in a single slice"
dirs, err := top.Readdirnames(-1)
if err != nil {
return nil, errors.Wrap(err, "failed to read directory listing")
}
return dirs, nil
}
// identPopulator is a function that sets values in the blobIdent for a given layer of the filesystem layout.
type identPopulator func(blobIdent, string) (blobIdent, error)
// layoutLayer represents a layer of the nested directory scheme. Each layer is defined by a filter function that
// ensures any entries at that layer of the scheme are named in a valid way, and a populateIdent function that
// parses the directory name into a blobIdent object, used for iterating across the layout in a layout-independent way.
type layoutLayer struct {
populateIdent identPopulator
filter func(string) bool
}
// identIterator moves through the filesystem in order to yield blobIdents.
// layoutLayers (in the 'layers' field) allows a filesystem layout to control how the
// layout is traversed. A layoutLayer can filter out entries from the directory listing
// via the filter function, and populate fields in the blobIdent via the populateIdent function.
// The blobIdent is populated from an empty value at the root, accumulating values for its fields at each layer.
// The fully populated blobIdent is returned when the iterator reaches the leaf layer.
type identIterator struct {
fs afero.Fs
path string
child *identIterator
ident blobIdent
// layoutLayers are the heart of how the layout defines the nesting of the components of the path.
// Each layer of the layout represents a different layer of the directory layout hierarchy,
// from the relative root at the zero index to the blob files at the end.
layers []layoutLayer
entries []string
offset int
eof bool
}
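// Usage sketch (illustrative; assumes a value `l` implementing fsLayout, and
// mirrors the loop in warmCache):
//
//	iter, err := l.iterateIdents(0)
//	if err != nil {
//		return err
//	}
//	for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
//		if err != nil {
//			continue // malformed entry; skip it
//		}
//		fmt.Printf("root=%#x index=%d epoch=%d\n", ident.root, ident.index, ident.epoch)
//	}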
// atEOF can be used to peek at the iterator to see if it's already finished. This is useful for the migration code to check
// if there are any entries in the directory indicated by the migration.
func (iter *identIterator) atEOF() bool {
return iter.eof
}
// next is the only method that a user of the identIterator needs to call.
// identIterator will yield blobIdents in a breadth-first fashion,
// returning an empty blobIdent and io.EOF once all branches have been traversed.
func (iter *identIterator) next() (blobIdent, error) {
if iter.eof {
return blobIdent{}, io.EOF
}
if iter.child != nil {
next, err := iter.child.next()
if err == nil {
return next, nil
}
if !errors.Is(err, io.EOF) {
return blobIdent{}, err
}
}
return iter.advanceChild()
}
// advanceChild is used to move to the next directory at each layer of the tree, either when
// the nodes are first being initialized at a layer, or when a sub-branch has been exhausted.
func (iter *identIterator) advanceChild() (blobIdent, error) {
defer func() {
iter.offset += 1
}()
for i := iter.offset; i < len(iter.entries); i++ {
iter.offset = i
nextPath := filepath.Join(iter.path, iter.entries[iter.offset])
nextLayer := iter.layers[0]
if !nextLayer.filter(nextPath) {
continue
}
ident, err := nextLayer.populateIdent(iter.ident, nextPath)
if err != nil {
return ident, newIdentificationError(nextPath, ident, err)
}
// If we're at the leaf layer, we can return the updated ident.
if len(iter.layers) == 1 {
return ident, nil
}
entries, err := listDir(iter.fs, nextPath)
if err != nil {
return blobIdent{}, err
}
if len(entries) == 0 {
continue
}
iter.child = &identIterator{
fs: iter.fs,
path: nextPath,
ident: ident,
layers: iter.layers[1:],
entries: entries,
}
return iter.child.next()
}
return blobIdent{}, io.EOF
}
func populateNoop(namer blobIdent, _ string) (blobIdent, error) {
return namer, nil
}
func populateRoot(namer blobIdent, dir string) (blobIdent, error) {
root, err := rootFromPath(dir)
if err != nil {
return namer, err
}
namer.root = root
return namer, nil
}
func populateIndex(namer blobIdent, fname string) (blobIdent, error) {
idx, err := idxFromPath(fname)
if err != nil {
return namer, err
}
namer.index = idx
return namer, nil
}
func rootFromPath(p string) ([32]byte, error) {
subdir := filepath.Base(p)
root, err := stringToRoot(subdir)
if err != nil {
return root, errors.Wrapf(err, "invalid directory, could not parse subdir as root %s", p)
}
return root, nil
}
func idxFromPath(p string) (uint64, error) {
p = filepath.Base(p)
if !isSszFile(p) {
return 0, errors.Wrap(errNotBlobSSZ, "does not have .ssz extension")
}
parts := strings.Split(p, ".")
if len(parts) != 2 {
return 0, errors.Wrap(errNotBlobSSZ, "unexpected filename structure (want <index>.ssz)")
}
idx, err := strconv.ParseUint(parts[0], 10, 64)
if err != nil {
return 0, err
}
return idx, nil
}
func filterNoop(_ string) bool {
return true
}
func isRootDir(p string) bool {
dir := filepath.Base(p)
return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}
func isSszFile(s string) bool {
return filepath.Ext(s) == "."+sszExt
}
func rootToString(root [32]byte) string {
return fmt.Sprintf("%#x", root)
}
func stringToRoot(str string) ([32]byte, error) {
if len(str) != rootStringLen {
return [32]byte{}, errors.Wrapf(errInvalidRootString, "incorrect len for input=%s", str)
}
slice, err := hexutil.Decode(str)
if err != nil {
return [32]byte{}, errors.Wrapf(errInvalidRootString, "input=%s", str)
}
return bytesutil.ToBytes32(slice), nil
}
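// Round-trip sketch (illustrative): rootToString yields the 66-character,
// 0x-prefixed form that stringToRoot expects, and idxFromPath inverts the
// "<index>.ssz" naming scheme.
//
//	s := rootToString(root)        // len(s) == rootStringLen
//	back, _ := stringToRoot(s)     // back == root
//	idx, _ := idxFromPath("5.ssz") // idx == 5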

View File

@@ -1,304 +0,0 @@
package filesystem
import (
"bytes"
"fmt"
"io"
"math"
"os"
"path"
"sort"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/spf13/afero"
)
func TestRootFromDir(t *testing.T) {
cases := []struct {
name string
dir string
err error
root [32]byte
}{
{
name: "happy path",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cb",
root: [32]byte{255, 255, 135, 94, 29, 152, 92, 92, 203, 33, 72, 148, 152, 63, 36, 40,
237, 178, 113, 240, 248, 123, 104, 186, 112, 16, 228, 169, 157, 243, 181, 203},
},
{
name: "too short",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5c",
err: errInvalidRootString,
},
{
name: "too log",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cbb",
err: errInvalidRootString,
},
{
name: "missing prefix",
dir: "ffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cb",
err: errInvalidRootString,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
root, err := stringToRoot(c.dir)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.root, root)
})
}
}
func TestSlotFromFile(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := verification.FakeVerifyForTest(t, sidecars[0])
require.NoError(t, bs.Save(sc))
namer := identForSidecar(sc)
sszPath := bs.layout.sszPath(namer)
slot, err := slotFromFile(sszPath, fs)
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
type dirFiles struct {
name string
isDir bool
children []dirFiles
}
func (df dirFiles) reify(t *testing.T, fs afero.Fs, base string) {
fullPath := path.Join(base, df.name)
if df.isDir {
if df.name != "" {
require.NoError(t, fs.Mkdir(fullPath, directoryPermissions()))
}
for _, c := range df.children {
c.reify(t, fs, fullPath)
}
} else {
fp, err := fs.Create(fullPath)
require.NoError(t, err)
_, err = fp.WriteString("derp")
require.NoError(t, err)
}
}
func (df dirFiles) childNames() []string {
cn := make([]string, len(df.children))
for i := range df.children {
cn[i] = df.children[i].name
}
return cn
}
func TestListDir(t *testing.T) {
fs := afero.NewMemMapFs()
rootStrs := []string{
"0x0023dc5d063c7c1b37016bb54963c6ff4bfe5dfdf6dac29e7ceeb2b8fa81ed7a",
"0xff30526cd634a5af3a09cc9bff67f33a621fc5b975750bb4432f74df077554b4",
"0x23f5f795aaeb78c01fadaf3d06da2e99bd4b3622ae4dfea61b05b7d9adb119c2",
}
// parent directory
tree := dirFiles{isDir: true}
// break out each subdir for easier assertions
notABlob := dirFiles{name: "notABlob", isDir: true}
childlessBlob := dirFiles{name: rootStrs[0], isDir: true}
blobWithSsz := dirFiles{name: rootStrs[1], isDir: true,
children: []dirFiles{{name: "1.ssz"}, {name: "2.ssz"}},
}
blobWithSszAndTmp := dirFiles{name: rootStrs[2], isDir: true,
children: []dirFiles{{name: "5.ssz"}, {name: "0.part"}}}
tree.children = append(tree.children,
notABlob, childlessBlob, blobWithSsz, blobWithSszAndTmp)
topChildren := make([]string, len(tree.children))
for i := range tree.children {
topChildren[i] = tree.children[i].name
}
var filter = func(entries []string, filt func(string) bool) []string {
filtered := make([]string, 0, len(entries))
for i := range entries {
if filt(entries[i]) {
filtered = append(filtered, entries[i])
}
}
return filtered
}
tree.reify(t, fs, "")
cases := []struct {
name string
dirPath string
expected []string
filter func(string) bool
err error
}{
{
name: "non-existent",
dirPath: "derp",
expected: []string{},
err: os.ErrNotExist,
},
{
name: "empty",
dirPath: childlessBlob.name,
expected: []string{},
},
{
name: "top",
dirPath: ".",
expected: topChildren,
},
{
name: "custom filter: only notABlob",
dirPath: ".",
expected: []string{notABlob.name},
filter: func(s string) bool {
return s == notABlob.name
},
},
{
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: isRootDir,
},
{
name: "ssz filter",
dirPath: blobWithSsz.name,
expected: blobWithSsz.childNames(),
filter: isSszFile,
},
{
name: "ssz mixed filter",
dirPath: blobWithSszAndTmp.name,
expected: []string{"5.ssz"},
filter: isSszFile,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
result, err := listDir(fs, c.dirPath)
if c.filter != nil {
result = filter(result, c.filter)
}
if c.err != nil {
require.ErrorIs(t, err, c.err)
require.Equal(t, 0, len(result))
} else {
require.NoError(t, err)
sort.Strings(c.expected)
sort.Strings(result)
require.DeepEqual(t, c.expected, result)
}
})
}
}
func TestSlotFromBlob(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := sidecars[0]
enc, err := sc.MarshalSSZ()
require.NoError(t, err)
slot, err := slotFromBlob(bytes.NewReader(enc))
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
func TestIterationComplete(t *testing.T) {
targets := []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
path: "by-epoch/0/1234/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
slotOffset: 31,
path: "by-epoch/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
slotOffset: 31,
path: "by-epoch/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
slotOffset: 16,
path: "by-epoch/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 1),
slotOffset: 16,
path: "by-epoch/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/1.ssz",
},
{
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", 16777217, 0),
slotOffset: 16,
path: "by-epoch/4096/16777217/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
},
{
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", 11235, 1),
path: "by-epoch/2/11235/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
},
}
fs := afero.NewMemMapFs()
cache := newBlobStorageCache()
byEpoch, err := newLayout(LayoutNameByEpoch, fs, cache, nil)
require.NoError(t, err)
for _, tar := range targets {
setupTestBlobFile(t, tar.ident, tar.slotOffset, fs, byEpoch)
}
iter, err := byEpoch.iterateIdents(0)
require.NoError(t, err)
nIdents := 0
for ident, err := iter.next(); err != io.EOF; ident, err = iter.next() {
require.NoError(t, err)
nIdents++
require.NoError(t, cache.ensure(ident))
}
require.Equal(t, len(targets), nIdents)
for _, tar := range targets {
entry, ok := cache.get(tar.ident.root)
require.Equal(t, true, ok)
require.Equal(t, tar.ident.epoch, entry.epoch)
require.Equal(t, true, entry.HasIndex(tar.ident.index))
require.Equal(t, tar.path, byEpoch.sszPath(tar.ident))
}
}

View File

@@ -1,291 +0,0 @@
package filesystem
import (
"fmt"
"io"
"path/filepath"
"strings"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
const (
// Full root in directory will be 66 chars, eg:
// >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
rootStringLen = 66
sszExt = "ssz"
partExt = "part"
periodicEpochBaseDir = "by-epoch"
)
const (
LayoutNameFlat = "flat"
LayoutNameByEpoch = "by-epoch"
)
var LayoutNames = []string{LayoutNameFlat, LayoutNameByEpoch}
var (
errMigrationFailure = errors.New("unable to migrate blob directory between old and new layout")
errCacheWarmFailed = errors.New("failed to warm blob filesystem cache")
errPruneFailed = errors.New("failed to prune root")
errInvalidRootString = errors.New("Could not parse hex string as a [32]byte")
errInvalidDirectoryLayout = errors.New("Could not parse blob directory path")
errInvalidLayoutName = errors.New("unknown layout name")
errLayoutNotDetected = errors.New("given layout not observed in the blob filesystem tree")
)
type blobIdent struct {
root [32]byte
epoch primitives.Epoch
index uint64
}
func newBlobIdent(root [32]byte, epoch primitives.Epoch, index uint64) blobIdent {
return blobIdent{root: root, epoch: epoch, index: index}
}
func identForSidecar(sc blocks.VerifiedROBlob) blobIdent {
return newBlobIdent(sc.BlockRoot(), slots.ToEpoch(sc.Slot()), sc.Index)
}
func (n blobIdent) sszFname() string {
return fmt.Sprintf("%d.%s", n.index, sszExt)
}
func (n blobIdent) partFname(entropy string) string {
return fmt.Sprintf("%s-%d.%s", entropy, n.index, partExt)
}
func (n blobIdent) logFields() logrus.Fields {
return logrus.Fields{
"root": fmt.Sprintf("%#x", n.root),
"epoch": n.epoch,
"index": n.index,
}
}
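// For example, an ident with index 2 yields sszFname() == "2.ssz" and
// partFname("a1b2") == "a1b2-2.part".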
type fsLayout interface {
name() string
dir(n blobIdent) string
sszPath(n blobIdent) string
partPath(n blobIdent, entropy string) string
iterateIdents(before primitives.Epoch) (*identIterator, error)
ident(root [32]byte, idx uint64) (blobIdent, error)
dirIdent(root [32]byte) (blobIdent, error)
summary(root [32]byte) BlobStorageSummary
notify(ident blobIdent) error
pruneBefore(before primitives.Epoch) (*pruneSummary, error)
remove(ident blobIdent) (int, error)
blockParentDirs(ident blobIdent) []string
}
func newLayout(name string, fs afero.Fs, cache *blobStorageSummaryCache, pruner *blobPruner) (fsLayout, error) {
switch name {
case LayoutNameFlat:
return newFlatLayout(fs, cache, pruner), nil
case LayoutNameByEpoch:
return newPeriodicEpochLayout(fs, cache, pruner), nil
default:
return nil, errors.Wrapf(errInvalidLayoutName, "name=%s", name)
}
}
func warmCache(l fsLayout, cache *blobStorageSummaryCache) error {
iter, err := l.iterateIdents(0)
if err != nil {
return errors.Wrap(errCacheWarmFailed, err.Error())
}
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
if errors.Is(err, errIdentFailure) {
idf := &identificationError{}
if errors.As(err, &idf) {
log.WithFields(idf.LogFields()).WithError(err).Error("Failed to cache blob data for path")
}
continue
}
if err != nil {
return fmt.Errorf("%w: failed to populate blob data cache: %w", errCacheWarmFailed, err)
}
if err := cache.ensure(ident); err != nil {
return fmt.Errorf("%w: failed to write cache entry for %s: %w", errCacheWarmFailed, l.sszPath(ident), err)
}
}
return nil
}
func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCache) error {
start := time.Now()
iter, err := from.iterateIdents(0)
if err != nil {
return errors.Wrapf(errMigrationFailure, "failed to iterate legacy structure while migrating blobs, err=%s", err.Error())
}
if iter.atEOF() {
return errLayoutNotDetected
}
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
lastMoved := ""
parentDirs := make(map[string]bool) // this map should have < 65k keys by design
moved := 0
dc := newDirCleaner()
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
if err != nil {
if errors.Is(err, errIdentFailure) {
idf := &identificationError{}
if errors.As(err, &idf) {
log.WithFields(idf.LogFields()).WithError(err).Error("Failed to migrate blob path")
}
continue
}
return errors.Wrapf(errMigrationFailure, "failed to iterate previous layout structure while migrating blobs, err=%s", err.Error())
}
src := from.dir(ident)
target := to.dir(ident)
if src != lastMoved {
targetParent := filepath.Dir(target)
if targetParent != "" && targetParent != "." && !parentDirs[targetParent] {
if err := fs.MkdirAll(targetParent, directoryPermissions()); err != nil {
return errors.Wrapf(errMigrationFailure, "failed to make enclosing path before moving %s to %s", src, target)
}
parentDirs[targetParent] = true
}
if err := fs.Rename(src, target); err != nil {
return errors.Wrapf(errMigrationFailure, "could not rename %s to %s", src, target)
}
moved += 1
lastMoved = src
for _, dir := range from.blockParentDirs(ident) {
dc.add(dir)
}
}
if err := cache.ensure(ident); err != nil {
return errors.Wrapf(errMigrationFailure, "could not cache path %s, err=%s", to.sszPath(ident), err.Error())
}
}
dc.clean(fs)
if moved > 0 {
log.WithField("dirsMoved", moved).WithField("elapsed", time.Since(start)).
Info("Blob filesystem migration complete.")
}
return nil
}
type dirCleaner struct {
maxDepth int
layers map[int]map[string]struct{}
}
func newDirCleaner() *dirCleaner {
return &dirCleaner{layers: make(map[int]map[string]struct{})}
}
func (d *dirCleaner) add(dir string) {
nLayers := len(strings.Split(dir, string(filepath.Separator)))
_, ok := d.layers[nLayers]
if !ok {
d.layers[nLayers] = make(map[string]struct{})
}
d.layers[nLayers][dir] = struct{}{}
if nLayers > d.maxDepth {
d.maxDepth = nLayers
}
}
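// clean removes the recorded directories deepest layer first, so parent
// directories are already empty (and therefore removable) by the time their
// layer is processed.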
func (d *dirCleaner) clean(fs afero.Fs) {
for i := d.maxDepth; i >= 0; i-- {
d.cleanLayer(fs, i)
}
}
func (d *dirCleaner) cleanLayer(fs afero.Fs, layer int) {
dirs, ok := d.layers[layer]
if !ok {
return
}
for dir := range dirs {
// Use Remove rather than RemoveAll to make sure we're only removing empty directories
if err := fs.Remove(dir); err != nil {
log.WithField("dir", dir).WithError(err).Error("Failed to remove blob directory, please remove it manually if desired.")
contents, err := listDir(fs, dir)
if err != nil {
log.WithField("dir", dir).WithError(err).Error("Could not list blob directory contents to find reason for removal failure.")
continue
}
for _, c := range contents {
log.WithField("file", c).WithField("dir", dir).Debug("Unexpected file blocking migrated blob directory cleanup.")
}
}
}
}
type pruneSummary struct {
blobsPruned int
failedRemovals []string
}
func (s pruneSummary) LogFields() logrus.Fields {
return logrus.Fields{
"blobsPruned": s.blobsPruned,
"failedRemovals": len(s.failedRemovals),
}
}
func pruneBefore(before primitives.Epoch, l fsLayout) (map[primitives.Epoch]*pruneSummary, error) {
sums := make(map[primitives.Epoch]*pruneSummary)
iter, err := l.iterateIdents(before)
if err != nil {
return nil, errors.Wrap(err, "failed to iterate blob paths for pruning")
}
// We will get an ident for each index, but want to prune all indexes for the given root together.
var lastIdent blobIdent
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
if err != nil {
if errors.Is(err, errIdentFailure) {
idf := &identificationError{}
if errors.As(err, &idf) {
log.WithFields(idf.LogFields()).WithError(err).Error("Failed to prune blob path due to identification errors")
}
continue
}
log.WithError(err).Error("encountered unhandled error during pruning")
return nil, errors.Wrap(errPruneFailed, err.Error())
}
if ident.epoch >= before {
continue
}
if lastIdent.root != ident.root {
pruneOne(lastIdent, l, sums)
lastIdent = ident
}
}
// handle the final ident
pruneOne(lastIdent, l, sums)
return sums, nil
}
func pruneOne(ident blobIdent, l fsLayout, sums map[primitives.Epoch]*pruneSummary) {
// Skip pruning the n-1 ident if we're on the first real ident (lastIdent will be zero value).
if ident.root == params.BeaconConfig().ZeroHash {
return
}
_, ok := sums[ident.epoch]
if !ok {
sums[ident.epoch] = &pruneSummary{}
}
s := sums[ident.epoch]
removed, err := l.remove(ident)
if err != nil {
s.failedRemovals = append(s.failedRemovals, l.dir(ident))
log.WithField("root", fmt.Sprintf("%#x", ident.root)).Error("Failed to delete blob directory for root")
}
s.blobsPruned += removed
}

View File

@@ -1,212 +0,0 @@
package filesystem
import (
"fmt"
"os"
"path"
"path/filepath"
"strconv"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/spf13/afero"
)
const epochsPerDirectory = 4096
type periodicEpochLayout struct {
fs afero.Fs
cache *blobStorageSummaryCache
pruner *blobPruner
}
var _ fsLayout = &periodicEpochLayout{}
func newPeriodicEpochLayout(fs afero.Fs, cache *blobStorageSummaryCache, pruner *blobPruner) fsLayout {
l := &periodicEpochLayout{fs: fs, cache: cache, pruner: pruner}
return l
}
func (l *periodicEpochLayout) name() string {
return LayoutNameByEpoch
}
func (l *periodicEpochLayout) blockParentDirs(ident blobIdent) []string {
return []string{
periodicEpochBaseDir,
l.periodDir(ident.epoch),
l.epochDir(ident.epoch),
}
}
func (l *periodicEpochLayout) notify(ident blobIdent) error {
if err := l.cache.ensure(ident); err != nil {
return err
}
l.pruner.notify(ident.epoch, l)
return nil
}
// If before == 0, it won't be used as a filter and all idents will be returned.
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
_, err := l.fs.Stat(periodicEpochBaseDir)
if err != nil {
if os.IsNotExist(err) {
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
}
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
}
// iterate root, which should have directories named by "period"
entries, err := listDir(l.fs, periodicEpochBaseDir)
if err != nil {
return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
}
return &identIterator{
fs: l.fs,
path: periodicEpochBaseDir,
// Please see comments on the `layers` field in `identIterator` if the role of the layers is unclear.
layers: []layoutLayer{
{populateIdent: populateNoop, filter: isBeforePeriod(before)},
{populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
{populateIdent: populateRoot, filter: isRootDir}, // extract root from path
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
},
entries: entries,
}, nil
}
func (l *periodicEpochLayout) ident(root [32]byte, idx uint64) (blobIdent, error) {
return l.cache.identForIdx(root, idx)
}
func (l *periodicEpochLayout) dirIdent(root [32]byte) (blobIdent, error) {
return l.cache.identForRoot(root)
}
func (l *periodicEpochLayout) summary(root [32]byte) BlobStorageSummary {
return l.cache.Summary(root)
}
func (l *periodicEpochLayout) dir(n blobIdent) string {
return filepath.Join(l.epochDir(n.epoch), rootToString(n.root))
}
func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
return filepath.Join(l.periodDir(epoch), fmt.Sprintf("%d", epoch))
}
func (l *periodicEpochLayout) periodDir(epoch primitives.Epoch) string {
return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
}
func (l *periodicEpochLayout) sszPath(n blobIdent) string {
return filepath.Join(l.dir(n), n.sszFname())
}
func (l *periodicEpochLayout) partPath(n blobIdent, entropy string) string {
return path.Join(l.dir(n), n.partFname(entropy))
}
func (l *periodicEpochLayout) pruneBefore(before primitives.Epoch) (*pruneSummary, error) {
sums, err := pruneBefore(before, l)
if err != nil {
return nil, err
}
// Roll up summaries and clean up per-epoch directories.
rollup := &pruneSummary{}
for epoch, sum := range sums {
rollup.blobsPruned += sum.blobsPruned
rollup.failedRemovals = append(rollup.failedRemovals, sum.failedRemovals...)
rmdir := l.epochDir(epoch)
if len(sum.failedRemovals) == 0 {
if err := l.fs.Remove(rmdir); err != nil {
log.WithField("dir", rmdir).WithError(err).Error("Failed to remove epoch directory while pruning")
}
} else {
log.WithField("dir", rmdir).WithField("numFailed", len(sum.failedRemovals)).WithError(err).Error("Unable to remove epoch directory due to pruning failures")
}
}
return rollup, nil
}
func (l *periodicEpochLayout) remove(ident blobIdent) (int, error) {
removed := l.cache.evict(ident.root)
// Skip the syscall if there are no blobs to remove.
if removed == 0 {
return 0, nil
}
if err := l.fs.RemoveAll(l.dir(ident)); err != nil {
return removed, err
}
return removed, nil
}
func periodForEpoch(epoch primitives.Epoch) primitives.Epoch {
return epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
}
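// For example, with the mainnet value MinEpochsForBlobsSidecarsRequest == 4096,
// epoch 16777216 maps to period 16777216/4096 == 4096, producing paths like
// "by-epoch/4096/16777216/<root>/<index>.ssz" (see TestIterationComplete).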
// Funcs below this line are iteration support methods that are specific to the epoch layout.
func isBeforePeriod(before primitives.Epoch) func(string) bool {
if before == 0 {
return filterNoop
}
beforePeriod := periodForEpoch(before)
if before%epochsPerDirectory != 0 {
// Add one because we need to include the period the epoch is in, unless it is the first epoch in the period,
// in which case we can just look at any previous period.
beforePeriod += 1
}
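// e.g. with 4096-epoch periods: before == 4096 leaves beforePeriod at 1, since
// period 0 already holds every epoch below it, while before == 4097 bumps
// beforePeriod to 2 so that period 1, which contains epochs below 4097, is
// still visited.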
return func(p string) bool {
period, err := periodFromPath(p)
if err != nil {
return false
}
return primitives.Epoch(period) < beforePeriod
}
}
func isBeforeEpoch(before primitives.Epoch) func(string) bool {
if before == 0 {
return filterNoop
}
return func(p string) bool {
epoch, err := epochFromPath(p)
if err != nil {
return false
}
return epoch < before
}
}
func epochFromPath(p string) (primitives.Epoch, error) {
subdir := filepath.Base(p)
epoch, err := strconv.ParseUint(subdir, 10, 64)
if err != nil {
return 0, errors.Wrapf(errInvalidDirectoryLayout,
"failed to decode epoch as uint, err=%s, dir=%s", err.Error(), p)
}
return primitives.Epoch(epoch), nil
}
func periodFromPath(p string) (uint64, error) {
subdir := filepath.Base(p)
period, err := strconv.ParseUint(subdir, 10, 64)
if err != nil {
return 0, errors.Wrapf(errInvalidDirectoryLayout,
"failed to decode period from path as uint, err=%s, dir=%s", err.Error(), p)
}
return period, nil
}
func populateEpoch(namer blobIdent, dir string) (blobIdent, error) {
epoch, err := epochFromPath(dir)
if err != nil {
return namer, err
}
namer.epoch = epoch
return namer, nil
}

View File

@@ -1,219 +0,0 @@
package filesystem
import (
"encoding/binary"
"io"
"os"
"path"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
type flatLayout struct {
fs afero.Fs
cache *blobStorageSummaryCache
pruner *blobPruner
}
var _ fsLayout = &flatLayout{}
func newFlatLayout(fs afero.Fs, cache *blobStorageSummaryCache, pruner *blobPruner) fsLayout {
l := &flatLayout{fs: fs, cache: cache, pruner: pruner}
return l
}
func (l *flatLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
if _, err := l.fs.Stat("."); err != nil {
if os.IsNotExist(err) {
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
}
return nil, errors.Wrap(err, "error reading blob storage root directory")
}
entries, err := listDir(l.fs, ".")
if err != nil {
return nil, errors.Wrapf(err, "could not list root directory")
}
slotAndIndex := &flatSlotReader{fs: l.fs, cache: l.cache, before: before}
return &identIterator{
fs: l.fs,
// Please see comments on the `layers` field in `identIterator` if the role of the layers is unclear.
layers: []layoutLayer{
{populateIdent: populateRoot, filter: isFlatCachedAndBefore(l.cache, before)},
{populateIdent: slotAndIndex.populateEpoch, filter: slotAndIndex.isSSZAndBefore}},
entries: entries,
}, nil
}
func (*flatLayout) name() string {
return LayoutNameFlat
}
func (l *flatLayout) blockParentDirs(ident blobIdent) []string {
return []string{}
}
func (*flatLayout) dir(n blobIdent) string {
return rootToString(n.root)
}
func (l *flatLayout) sszPath(n blobIdent) string {
return path.Join(l.dir(n), n.sszFname())
}
func (l *flatLayout) partPath(n blobIdent, entropy string) string {
return path.Join(l.dir(n), n.partFname(entropy))
}
func (l *flatLayout) ident(root [32]byte, idx uint64) (blobIdent, error) {
return l.cache.identForIdx(root, idx)
}
func (l *flatLayout) dirIdent(root [32]byte) (blobIdent, error) {
return l.cache.identForRoot(root)
}
func (l *flatLayout) summary(root [32]byte) BlobStorageSummary {
return l.cache.Summary(root)
}
func (l *flatLayout) remove(ident blobIdent) (int, error) {
removed := l.cache.evict(ident.root)
if err := l.fs.RemoveAll(l.dir(ident)); err != nil {
return removed, err
}
return removed, nil
}
func (l *flatLayout) notify(ident blobIdent) error {
if err := l.cache.ensure(ident); err != nil {
return err
}
l.pruner.notify(ident.epoch, l)
return nil
}
func (l *flatLayout) pruneBefore(before primitives.Epoch) (*pruneSummary, error) {
sums, err := pruneBefore(before, l)
if err != nil {
return nil, err
}
// Roll up summaries and clean up per-epoch directories.
rollup := &pruneSummary{}
for _, sum := range sums {
rollup.blobsPruned += sum.blobsPruned
rollup.failedRemovals = append(rollup.failedRemovals, sum.failedRemovals...)
}
return rollup, nil
}
// Below this line are iteration support funcs and types that are specific to the flat layout.
// Read slot from marshaled BlobSidecar data in the given file. See slotFromBlob for details.
func slotFromFile(name string, fs afero.Fs) (primitives.Slot, error) {
f, err := fs.Open(name)
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
return slotFromBlob(f)
}
// slotFromBlob reads the ssz data of a file at a fixed offset (8 + 131072 + 48 + 48 = 131176 bytes):
// the combined size of the index, blob, kzg_commitment and kzg_proof fields that precede the slot,
// which is the first field of the embedded SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
type flatSlotReader struct {
before primitives.Epoch
fs afero.Fs
cache *blobStorageSummaryCache
}
func (l *flatSlotReader) populateEpoch(ident blobIdent, fname string) (blobIdent, error) {
ident, err := populateIndex(ident, fname)
if err != nil {
return ident, err
}
sum, ok := l.cache.get(ident.root)
if ok {
ident.epoch = sum.epoch
// Return early if the index is already known to the cache.
if sum.HasIndex(ident.index) {
return ident, nil
}
} else {
// If the root is not in the cache, we need to read the slot from the file.
slot, err := slotFromFile(fname, l.fs)
if err != nil {
return ident, err
}
ident.epoch = slots.ToEpoch(slot)
}
return ident, l.cache.ensure(ident)
}
func (l *flatSlotReader) isSSZAndBefore(fname string) bool {
if !isSszFile(fname) {
return false
}
// When 'before' != 0, isSSZAndBefore is used as a filter on the same layer as populateEpoch and will
// typically call populateEpoch before the iteration code does. The cache still gets populated in
// either case: if a path is filtered out here, it was either malformed (its root can't be determined,
// so the iterator could not have populated it anyway), or it was valid and populateEpoch cached it
// before the epoch comparison.
if l.before == 0 {
return true
}
ident, err := populateRoot(blobIdent{}, path.Dir(fname))
// Filter out the path if we can't determine its root.
if err != nil {
return false
}
ident, err = l.populateEpoch(ident, fname)
// Filter out the path if we can't determine its epoch or properly cache it.
if err != nil {
return false
}
return ident.epoch < l.before
}
// isFlatCachedAndBefore returns a filter callback function to exclude roots that are known to be after the given epoch
// based on the cache. It's an opportunistic filter; if the cache is not populated, it will not attempt to populate it.
// isSSZAndBefore on the other hand, is a strict filter that will only return true if the file is an SSZ file and
// the epoch can be determined.
func isFlatCachedAndBefore(cache *blobStorageSummaryCache, before primitives.Epoch) func(string) bool {
if before == 0 {
return isRootDir
}
return func(p string) bool {
if !isRootDir(p) {
return false
}
root, err := rootFromPath(p)
if err != nil {
return false
}
sum, ok := cache.get(root)
// If we don't know the epoch by looking at the root, don't try to filter it.
if !ok {
return true
}
return sum.epoch < before
}
}

View File

@@ -1,75 +0,0 @@
package filesystem
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
type mockLayout struct {
pruneBeforeFunc func(primitives.Epoch) (*pruneSummary, error)
}
var _ fsLayout = &mockLayout{}
func (m *mockLayout) name() string {
return "mock"
}
func (*mockLayout) dir(_ blobIdent) string {
return ""
}
func (*mockLayout) blockParentDirs(id blobIdent) []string {
return []string{}
}
func (*mockLayout) sszPath(_ blobIdent) string {
return ""
}
func (*mockLayout) partPath(_ blobIdent, _ string) string {
return ""
}
func (*mockLayout) iterateIdents(_ primitives.Epoch) (*identIterator, error) {
return nil, nil
}
func (*mockLayout) ident(_ [32]byte, _ uint64) (blobIdent, error) {
return blobIdent{}, nil
}
func (*mockLayout) dirIdent(_ [32]byte) (blobIdent, error) {
return blobIdent{}, nil
}
func (*mockLayout) summary(_ [32]byte) BlobStorageSummary {
return BlobStorageSummary{}
}
func (*mockLayout) notify(blobIdent) error {
return nil
}
func (m *mockLayout) pruneBefore(before primitives.Epoch) (*pruneSummary, error) {
return m.pruneBeforeFunc(before)
}
func (*mockLayout) remove(ident blobIdent) (int, error) {
return 0, nil
}
func TestCleaner(t *testing.T) {
l := &periodicEpochLayout{}
p := l.periodDir(11235813)
e := l.epochDir(11235813)
dc := newDirCleaner()
dc.add(p)
require.Equal(t, 2, dc.maxDepth)
dc.add(e)
require.Equal(t, 3, dc.maxDepth)
}

View File

@@ -1,180 +0,0 @@
package filesystem
import (
"os"
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
func ezIdent(t *testing.T, rootStr string, epoch primitives.Epoch, index uint64) blobIdent {
r, err := stringToRoot(rootStr)
require.NoError(t, err)
return blobIdent{root: r, epoch: epoch, index: index}
}
func setupTestBlobFile(t *testing.T, ident blobIdent, offset primitives.Slot, fs afero.Fs, l fsLayout) {
slot, err := slots.EpochStart(ident.epoch)
require.NoError(t, err)
slot += offset
_, sc := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
scb, err := sc[0].MarshalSSZ()
require.NoError(t, err)
dir := l.dir(ident)
require.NoError(t, fs.MkdirAll(dir, directoryPermissions()))
p := l.sszPath(ident)
require.NoError(t, afero.WriteFile(fs, p, scb, 0666))
_, err = fs.Stat(p)
require.NoError(t, err)
}
type migrationTestTarget struct {
ident blobIdent
slotOffset primitives.Slot
migrated bool
path string
}
func testAssertFsMigrated(t *testing.T, fs afero.Fs, ident blobIdent, before, after fsLayout) {
// Assert the pre-migration path is gone.
_, err := fs.Stat(before.sszPath(ident))
require.ErrorIs(t, err, os.ErrNotExist)
dir := before.dir(ident)
_, err = listDir(fs, dir)
require.ErrorIs(t, err, os.ErrNotExist)
// Assert the post-migration path present.
_, err = fs.Stat(after.sszPath(ident))
require.NoError(t, err)
}
func TestMigrations(t *testing.T) {
cases := []struct {
name string
forwardLayout string
backwardLayout string
targets []migrationTestTarget
}{
{
name: "all need migration",
backwardLayout: LayoutNameFlat,
forwardLayout: LayoutNameByEpoch,
targets: []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
slotOffset: 16,
},
},
},
{
name: "mix old and new",
backwardLayout: LayoutNameFlat,
forwardLayout: LayoutNameByEpoch,
targets: []migrationTestTarget{
{
ident: ezIdent(t, "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b", 1234, 0),
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 0),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86", 5330, 1),
slotOffset: 31,
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 0),
slotOffset: 16,
migrated: true,
},
{
ident: ezIdent(t, "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c", 16777216, 1),
slotOffset: 16,
migrated: true,
},
{
ident: ezIdent(t, "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba", 16777217, 0),
slotOffset: 16,
migrated: true,
},
{
ident: ezIdent(t, "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d", 11235, 1),
migrated: true,
},
},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
t.Run("forward", func(t *testing.T) {
testMigration(t, c.forwardLayout, c.backwardLayout, c.targets)
})
// Run the same test in reverse to cover both directions while keeping the test table smaller.
t.Run("backward", func(t *testing.T) {
testMigration(t, c.forwardLayout, c.backwardLayout, c.targets)
})
})
}
}
func testMigration(t *testing.T, forwardName, backwardName string, targets []migrationTestTarget) {
fs := afero.NewMemMapFs()
cache := newBlobStorageCache()
forward, err := newLayout(forwardName, fs, cache, nil)
require.NoError(t, err)
backward, err := newLayout(backwardName, fs, cache, nil)
require.NoError(t, err)
for _, tar := range targets {
if tar.migrated {
setupTestBlobFile(t, tar.ident, tar.slotOffset, fs, forward)
} else {
setupTestBlobFile(t, tar.ident, tar.slotOffset, fs, backward)
}
}
require.NoError(t, migrateLayout(fs, backward, forward, cache))
for _, tar := range targets {
// Make sure the file wound up in the right spot, according to the forward layout
// and that the old file is gone, according to the backward layout.
testAssertFsMigrated(t, fs, tar.ident, backward, forward)
entry, ok := cache.get(tar.ident.root)
// we only expect cache to be populated here by files that needed to be moved.
if !tar.migrated {
require.Equal(t, true, ok)
require.Equal(t, true, entry.HasIndex(tar.ident.index))
require.Equal(t, tar.ident.epoch, entry.epoch)
}
}
// Run migration in reverse - testing "undo"
cache = newBlobStorageCache()
forward, err = newLayout(forwardName, fs, cache, nil)
require.NoError(t, err)
backward, err = newLayout(backwardName, fs, cache, nil)
require.NoError(t, err)
// forward and backward are flipped compared to the above
require.NoError(t, migrateLayout(fs, forward, backward, cache))
for _, tar := range targets {
// just like the above, but forward and backward are flipped
testAssertFsMigrated(t, fs, tar.ident, forward, backward)
entry, ok := cache.get(tar.ident.root)
require.Equal(t, true, ok)
require.Equal(t, true, entry.HasIndex(tar.ident.index))
require.Equal(t, tar.ident.epoch, entry.epoch)
}
}

View File

@@ -4,41 +4,30 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(t testing.TB, opts ...BlobStorageOption) *BlobStorage {
return NewWarmedEphemeralBlobStorageUsingFs(t, afero.NewMemMapFs(), opts...)
}
// NewEphemeralBlobStorageAndFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage api.
func NewEphemeralBlobStorageAndFs(t testing.TB, opts ...BlobStorageOption) (afero.Fs, *BlobStorage) {
func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
fs := afero.NewMemMapFs()
bs := NewWarmedEphemeralBlobStorageUsingFs(t, fs, opts...)
return fs, bs
}
func NewEphemeralBlobStorageUsingFs(t testing.TB, fs afero.Fs, opts ...BlobStorageOption) *BlobStorage {
opts = append(opts,
WithBlobRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithFs(fs))
bs, err := NewBlobStorage(opts...)
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest, withWarmedCache())
if err != nil {
t.Fatalf("error initializing test BlobStorage, err=%s", err.Error())
t.Fatal("test setup issue", err)
}
return bs
return &BlobStorage{fs: fs, pruner: pruner}
}
func NewWarmedEphemeralBlobStorageUsingFs(t testing.TB, fs afero.Fs, opts ...BlobStorageOption) *BlobStorage {
bs := NewEphemeralBlobStorageUsingFs(t, fs, opts...)
bs.WarmCache()
return bs
// NewEphemeralBlobStorageWithFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage api.
func NewEphemeralBlobStorageWithFs(t testing.TB) (afero.Fs, *BlobStorage) {
fs := afero.NewMemMapFs()
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest, withWarmedCache())
if err != nil {
t.Fatal("test setup issue", err)
}
return fs, &BlobStorage{fs: fs, pruner: pruner}
}
type BlobMocker struct {
@@ -48,9 +37,17 @@ type BlobMocker struct {
// CreateFakeIndices creates empty blob sidecar files at the expected path for the given
// root and indices to influence the result of Indices().
func (bm *BlobMocker) CreateFakeIndices(root [32]byte, slot primitives.Slot, indices ...uint64) error {
func (bm *BlobMocker) CreateFakeIndices(root [32]byte, indices ...uint64) error {
for i := range indices {
if err := bm.bs.layout.notify(newBlobIdent(root, slots.ToEpoch(slot), indices[i])); err != nil {
n := blobNamer{root: root, index: indices[i]}
if err := bm.fs.MkdirAll(n.dir(), directoryPermissions); err != nil {
return err
}
f, err := bm.fs.Create(n.path())
if err != nil {
return err
}
if err := f.Close(); err != nil {
return err
}
}
@@ -59,8 +56,9 @@ func (bm *BlobMocker) CreateFakeIndices(root [32]byte, slot primitives.Slot, ind
// NewEphemeralBlobStorageWithMocker returns a *BlobMocker value in addition to the BlobStorage value.
// BlobMocker encapsulates blob path construction to avoid leaking implementation details.
func NewEphemeralBlobStorageWithMocker(t testing.TB) (*BlobMocker, *BlobStorage) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
func NewEphemeralBlobStorageWithMocker(_ testing.TB) (*BlobMocker, *BlobStorage) {
fs := afero.NewMemMapFs()
bs := &BlobStorage{fs: fs}
return &BlobMocker{fs: fs, bs: bs}, bs
}
@@ -68,7 +66,7 @@ func NewMockBlobStorageSummarizer(t *testing.T, set map[[32]byte][]int) BlobStor
c := newBlobStorageCache()
for k, v := range set {
for i := range v {
if err := c.ensure(blobIdent{root: k, epoch: 0, index: uint64(v[i])}); err != nil {
if err := c.ensure(k, 0, uint64(v[i])); err != nil {
t.Fatal(err)
}
}

View File

@@ -1,67 +1,319 @@
package filesystem
import (
"context"
"encoding/binary"
"io"
"path"
"path/filepath"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
const retentionBuffer primitives.Epoch = 2
var errNotBlobSSZ = errors.New("not a blob ssz file")
var (
errPruningFailures = errors.New("blobs could not be pruned for some roots")
errNotBlobSSZ = errors.New("not a blob ssz file")
)
// blobPruner keeps track of the tail end of the retention period, based only on the blobs it has seen via the notify method.
// If the retention period advances in response to notify being called,
// the pruner will invoke the pruneBefore method of the given layout in a new goroutine.
// The details of pruning are left entirely to the layout, with the pruner's only responsibility being to
// schedule just one pruning operation at a time, for each forward movement of the minimum retention epoch.
type blobPruner struct {
mu sync.Mutex
prunedBefore atomic.Uint64
retentionPeriod primitives.Epoch
sync.Mutex
prunedBefore atomic.Uint64
windowSize primitives.Slot
cache *blobStorageCache
cacheReady chan struct{}
warmed bool
fs afero.Fs
}
func newBlobPruner(retain primitives.Epoch) *blobPruner {
p := &blobPruner{retentionPeriod: retain + retentionBuffer}
return p
type prunerOpt func(*blobPruner) error
func withWarmedCache() prunerOpt {
return func(p *blobPruner) error {
return p.warmCache()
}
}
// notify returns a channel that is closed when the pruning operation is complete.
// This is useful for tests, but at runtime fsLayouts or BlobStorage should not wait for completion.
func (p *blobPruner) notify(latest primitives.Epoch, layout fsLayout) chan struct{} {
done := make(chan struct{})
floor := periodFloor(latest, p.retentionPeriod)
if primitives.Epoch(p.prunedBefore.Swap(uint64(floor))) >= floor {
// Only trigger pruning if the atomic swap changed the previous value of prunedBefore.
close(done)
return done
func newBlobPruner(fs afero.Fs, retain primitives.Epoch, opts ...prunerOpt) (*blobPruner, error) {
r, err := slots.EpochStart(retain + retentionBuffer)
if err != nil {
return nil, errors.Wrap(err, "could not set retentionSlots")
}
cw := make(chan struct{})
p := &blobPruner{fs: fs, windowSize: r, cache: newBlobStorageCache(), cacheReady: cw}
for _, o := range opts {
if err := o(p); err != nil {
return nil, err
}
}
return p, nil
}
// notify updates the pruner's view of root->blob mappings. This allows the pruner to build a cache
// of root->slot mappings and decide when to evict old blobs based on the age of present blobs.
func (p *blobPruner) notify(root [32]byte, latest primitives.Slot, idx uint64) error {
if err := p.cache.ensure(root, latest, idx); err != nil {
return err
}
pruned := uint64(windowMin(latest, p.windowSize))
if p.prunedBefore.Swap(pruned) == pruned {
return nil
}
go func() {
p.mu.Lock()
start := time.Now()
defer p.mu.Unlock()
sum, err := layout.pruneBefore(floor)
if err != nil {
log.WithError(err).WithFields(sum.LogFields()).Warn("Encountered errors during blob pruning.")
p.Lock()
defer p.Unlock()
if err := p.prune(primitives.Slot(pruned)); err != nil {
log.WithError(err).Errorf("Failed to prune blobs from slot %d", latest)
}
log.WithFields(logrus.Fields{
"upToEpoch": floor,
"duration": time.Since(start).String(),
"filesRemoved": sum.blobsPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(sum.blobsPruned))
close(done)
}()
return done
return nil
}
func periodFloor(latest, period primitives.Epoch) primitives.Epoch {
if latest < period {
func windowMin(latest, offset primitives.Slot) primitives.Slot {
// Safely compute the first slot in the epoch for the latest slot
latest = latest - latest%params.BeaconConfig().SlotsPerEpoch
if latest < offset {
return 0
}
return latest - period
return latest - offset
}
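// Worked example (mainnet values: 32 slots per epoch, 4096 retention epochs,
// retentionBuffer == 2): windowSize == 4098*32 == 131136 slots, so for
// latest == 131200 (the first slot of epoch 4100) windowMin returns 64, making
// blobs before slot 64 (epoch 2) prunable.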
func (p *blobPruner) warmCache() error {
p.Lock()
defer func() {
if !p.warmed {
p.warmed = true
close(p.cacheReady)
}
p.Unlock()
}()
if err := p.prune(0); err != nil {
return err
}
return nil
}
func (p *blobPruner) waitForCache(ctx context.Context) (*blobStorageCache, error) {
select {
case <-p.cacheReady:
return p.cache, nil
case <-ctx.Done():
return nil, ctx.Err()
}
}
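// Usage sketch (illustrative): callers can bound how long they wait for the
// cache to warm.
//
//	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
//	defer cancel()
//	cache, err := p.waitForCache(ctx)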
// prune removes blobs from the base directory whose slot falls before the given slot.
// Callers derive that slot as currentSlot - windowSize, i.e. retentionEpochs+retentionBuffer
// epochs behind head, keeping a slight buffer beyond the configured retention period.
func (p *blobPruner) prune(pruneBefore primitives.Slot) error {
start := time.Now()
totalPruned, totalErr := 0, 0
// Customize logging/metrics behavior for the initial cache warmup when slot=0.
// We'll never see a prune request for slot 0, unless this is the initial call to warm up the cache.
if pruneBefore == 0 {
defer func() {
log.WithField("duration", time.Since(start).String()).Debug("Warmed up pruner cache")
}()
} else {
defer func() {
log.WithFields(logrus.Fields{
"upToEpoch": slots.ToEpoch(pruneBefore),
"duration": time.Since(start).String(),
"filesRemoved": totalPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(totalPruned))
}()
}
entries, err := listDir(p.fs, ".")
if err != nil {
return errors.Wrap(err, "unable to list root blobs directory")
}
dirs := filter(entries, filterRoot)
for _, dir := range dirs {
pruned, err := p.tryPruneDir(dir, pruneBefore)
if err != nil {
totalErr += 1
log.WithError(err).WithField("directory", dir).Error("Unable to prune directory")
}
totalPruned += pruned
}
if totalErr > 0 {
return errors.Wrapf(errPruningFailures, "pruning failed for %d root directories", totalErr)
}
return nil
}
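To make the retention window concrete: assuming mainnet parameters (32 slots per epoch, MinEpochsForBlobsSidecarsRequest = 4096) and the two-epoch buffer described above, the pruner's window comes to 131136 slots. A standalone sketch of the arithmetic, with the constants assumed here rather than read from the config:

package main

import "fmt"

func main() {
	const (
		slotsPerEpoch   = 32
		retentionEpochs = 4096 // MinEpochsForBlobsSidecarsRequest on mainnet
		bufferEpochs    = 2    // "blobs are deleted after n+2 epochs"
	)
	windowSize := (retentionEpochs + bufferEpochs) * slotsPerEpoch
	fmt.Println(windowSize) // 131136
	// windowMin then aligns the latest slot down to its epoch start and
	// subtracts the window to find the prune cutoff:
	latest := 200000 - 200000%slotsPerEpoch
	fmt.Println(latest - windowSize) // 68864
}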
func shouldRetain(slot, pruneBefore primitives.Slot) bool {
return slot >= pruneBefore
}
func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int, error) {
root, err := rootFromDir(dir)
if err != nil {
return 0, errors.Wrapf(err, "invalid directory, could not parse subdir as root %s", dir)
}
slot, slotCached := p.cache.slot(root)
// Return early if the slot is cached and doesn't need pruning.
if slotCached && shouldRetain(slot, pruneBefore) {
return 0, nil
}
// entries will include things that aren't ssz files, like dangling .part files. We need these to
// completely clean up the directory.
entries, err := listDir(p.fs, dir)
if err != nil {
return 0, errors.Wrapf(err, "failed to list blobs in directory %s", dir)
}
// scFiles filters the dir listing down to the ssz encoded BlobSidecar files. This allows us to peek
// at the first one in the list to figure out the slot.
scFiles := filter(entries, filterSsz)
if len(scFiles) == 0 {
log.WithField("dir", dir).Warn("Pruner ignoring directory with no blob files")
return 0, nil
}
if !slotCached {
slot, err = slotFromFile(path.Join(dir, scFiles[0]), p.fs)
if err != nil {
return 0, errors.Wrapf(err, "slot could not be read from blob file %s", scFiles[0])
}
for i := range scFiles {
idx, err := idxFromPath(scFiles[i])
if err != nil {
return 0, errors.Wrapf(err, "index could not be determined for blob file %s", scFiles[i])
}
if err := p.cache.ensure(root, slot, idx); err != nil {
return 0, errors.Wrapf(err, "could not update prune cache for blob file %s", scFiles[i])
}
}
if shouldRetain(slot, pruneBefore) {
return 0, nil
}
}
removed := 0
for _, fname := range entries {
fullName := path.Join(dir, fname)
if err := p.fs.Remove(fullName); err != nil {
return removed, errors.Wrapf(err, "unable to remove %s", fullName)
}
// Don't count other files that happen to be in the dir, like dangling .part files.
if filterSsz(fname) {
removed += 1
}
// Log a warning whenever we clean up a .part file
if filterPart(fullName) {
log.WithField("file", fullName).Warn("Deleting abandoned blob .part file")
}
}
if err := p.fs.Remove(dir); err != nil {
return removed, errors.Wrapf(err, "unable to remove blob directory %s", dir)
}
p.cache.evict(root)
return removed, nil
}
func idxFromPath(fname string) (uint64, error) {
fname = path.Base(fname)
if filepath.Ext(fname) != dotSszExt {
return 0, errors.Wrap(errNotBlobSSZ, "does not have .ssz extension")
}
parts := strings.Split(fname, ".")
if len(parts) != 2 {
return 0, errors.Wrap(errNotBlobSSZ, "unexpected filename structure (want <index>.ssz)")
}
return strconv.ParseUint(parts[0], 10, 64)
}
func rootFromDir(dir string) ([32]byte, error) {
subdir := filepath.Base(dir) // end of the path should be the blob directory, named by hex encoding of root
root, err := stringToRoot(subdir)
if err != nil {
return root, errors.Wrapf(err, "invalid directory, could not parse subdir as root %s", dir)
}
return root, nil
}
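rootFromDir and idxFromPath invert the on-disk naming convention used throughout this file: each sidecar is stored at <0x-hex-root>/<index>.ssz, the same path the tests construct through blobNamer. A standalone sketch of the forward direction (illustrative helper, not Prysm's namer type):

package main

import (
	"fmt"
	"path"
	"strconv"
)

// sszPath reproduces the <0x-hex-root>/<index>.ssz convention.
func sszPath(root [32]byte, index uint64) string {
	return path.Join(fmt.Sprintf("%#x", root), strconv.FormatUint(index, 10)+".ssz")
}

func main() {
	fmt.Println(sszPath([32]byte{0xff, 0xff}, 5))
	// 0xffff000000000000000000000000000000000000000000000000000000000000/5.ssz
}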
// Read slot from marshaled BlobSidecar data in the given file. See slotFromBlob for details.
func slotFromFile(file string, fs afero.Fs) (primitives.Slot, error) {
f, err := fs.Open(file)
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
return slotFromBlob(f)
}
// slotFromBlob reads the slot directly from ssz-encoded BlobSidecar data, at a fixed offset of
// 8 + 131072 + 48 + 48 = 131176 bytes. That is the combined size of the fields preceding the
// SignedBeaconBlockHeader in the encoding (index, blob, kzg_commitment, kzg_proof); the slot is
// the first field of the header, so no further offset is needed.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
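The 131176-byte offset is just the sum of the fixed-size fields that precede the signed block header in the encoding, and the slot is the first field of that header. A standalone sketch of the derivation, with field sizes per the Deneb BlobSidecar layout:

package main

import "fmt"

func main() {
	const (
		indexSize      = 8      // uint64 blob index
		blobSize       = 131072 // 4096 field elements * 32 bytes
		commitmentSize = 48     // KZG commitment
		proofSize      = 48     // KZG proof
	)
	// The slot is the first field of BeaconBlockHeader, which is itself the
	// first field of SignedBeaconBlockHeader, so no further offset is needed.
	fmt.Println(indexSize + blobSize + commitmentSize + proofSize) // 131176
}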
func listDir(fs afero.Fs, dir string) ([]string, error) {
top, err := fs.Open(dir)
if err != nil {
return nil, errors.Wrap(err, "failed to open directory descriptor")
}
defer func() {
if err := top.Close(); err != nil {
log.WithError(err).Errorf("Could not close file %s", dir)
}
}()
// re the -1 param: "If n <= 0, Readdirnames returns all the names from the directory in a single slice"
dirs, err := top.Readdirnames(-1)
if err != nil {
return nil, errors.Wrap(err, "failed to read directory listing")
}
return dirs, nil
}
func filter(entries []string, filt func(string) bool) []string {
filtered := make([]string, 0, len(entries))
for i := range entries {
if filt(entries[i]) {
filtered = append(filtered, entries[i])
}
}
return filtered
}
func filterRoot(s string) bool {
return strings.HasPrefix(s, "0x")
}
var dotSszExt = "." + sszExt
var dotPartExt = "." + partExt
func filterSsz(s string) bool {
return filepath.Ext(s) == dotSszExt
}
func filterPart(s string) bool {
return filepath.Ext(s) == dotPartExt
}


@@ -1,197 +1,394 @@
package filesystem
import (
"encoding/binary"
"bytes"
"context"
"fmt"
"math"
"os"
"path"
"sort"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
type prunerScenario struct {
name string
prunedBefore primitives.Epoch
retentionPeriod primitives.Epoch
latest primitives.Epoch
expected pruneExpectation
}
func TestTryPruneDir_CachedNotExpired(t *testing.T) {
fs := afero.NewMemMapFs()
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
slot := pr.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, params.BeaconConfig().MaxBlobsPerBlock(slot))
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
rootStr := rootString(sc.BlockRoot())
// This slot is right on the edge of what would need to be pruned, so by adding it to the cache and
// skipping any other test setup, we can be certain the hot cache path never touches the filesystem.
require.NoError(t, pr.cache.ensure(sc.BlockRoot(), sc.Slot(), 0))
pruned, err := pr.tryPruneDir(rootStr, pr.windowSize)
require.NoError(t, err)
require.Equal(t, 0, pruned)
}
type pruneExpectation struct {
called bool
arg primitives.Epoch
summary *pruneSummary
err error
}
func TestCacheWarmFail(t *testing.T) {
fs := afero.NewMemMapFs()
n := blobNamer{root: bytesutil.ToBytes32([]byte("derp")), index: 0}
bp := n.path()
mkdir := path.Dir(bp)
require.NoError(t, fs.MkdirAll(mkdir, directoryPermissions))
// Create an empty blob index in the fs by touching the file at a seemingly valid path.
fi, err := fs.Create(bp)
require.NoError(t, err)
require.NoError(t, fi.Close())
// Cache warm should fail due to the unexpected EOF.
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
require.ErrorIs(t, pr.warmCache(), errPruningFailures)
// The cache warm has finished, so calling waitForCache with a super short deadline
// should not block or hit the context deadline.
ctx := context.Background()
ctx, cancel := context.WithDeadline(ctx, time.Now().Add(1*time.Millisecond))
defer cancel()
c, err := pr.waitForCache(ctx)
// If the deadline had been hit, waitForCache would have returned an error and a nil cache; assert that it did not.
require.NoError(t, err)
require.NotNil(t, c)
}
func (e *pruneExpectation) record(before primitives.Epoch) (*pruneSummary, error) {
e.called = true
e.arg = before
if e.summary == nil {
e.summary = &pruneSummary{}
}
return e.summary, e.err
}
func TestTryPruneDir_CachedExpired(t *testing.T) {
t.Run("empty directory", func(t *testing.T) {
fs := afero.NewMemMapFs()
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
rootStr := rootString(sc.BlockRoot())
require.NoError(t, fs.Mkdir(rootStr, directoryPermissions)) // make empty directory
require.NoError(t, pr.cache.ensure(sc.BlockRoot(), sc.Slot(), 0))
pruned, err := pr.tryPruneDir(rootStr, slot+1)
require.NoError(t, err)
require.Equal(t, 0, pruned)
})
t.Run("blobs to delete", func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageWithFs(t)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// check that the root->slot is cached
root := scs[0].BlockRoot()
rootStr := rootString(root)
cs, cok := bs.pruner.cache.slot(scs[0].BlockRoot())
require.Equal(t, true, cok)
require.Equal(t, slot, cs)
// ensure that we see the saved files in the filesystem
files, err := listDir(fs, rootStr)
require.NoError(t, err)
require.Equal(t, 2, len(files))
pruned, err := bs.pruner.tryPruneDir(rootStr, slot+1)
require.NoError(t, err)
require.Equal(t, 2, pruned)
files, err = listDir(fs, rootStr)
require.ErrorIs(t, err, os.ErrNotExist)
require.Equal(t, 0, len(files))
})
}
func TestPrunerNotify(t *testing.T) {
defaultRetention := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
cases := []prunerScenario{
{
name: "last epoch of period",
retentionPeriod: defaultRetention,
prunedBefore: 11235,
latest: defaultRetention + 11235,
expected: pruneExpectation{called: false},
},
{
name: "within period",
retentionPeriod: defaultRetention,
prunedBefore: 11235,
latest: 11235 + defaultRetention - 1,
expected: pruneExpectation{called: false},
},
{
name: "triggers",
retentionPeriod: defaultRetention,
prunedBefore: 11235,
latest: 11235 + 1 + defaultRetention,
expected: pruneExpectation{called: true, arg: 11235 + 1},
},
{
name: "from zero - before first period",
retentionPeriod: defaultRetention,
prunedBefore: 0,
latest: defaultRetention - 1,
expected: pruneExpectation{called: false},
},
{
name: "from zero - at boundary",
retentionPeriod: defaultRetention,
prunedBefore: 0,
latest: defaultRetention,
expected: pruneExpectation{called: false},
},
{
name: "from zero - triggers",
retentionPeriod: defaultRetention,
prunedBefore: 0,
latest: defaultRetention + 1,
expected: pruneExpectation{called: true, arg: 1},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
actual := &pruneExpectation{}
l := &mockLayout{pruneBeforeFunc: actual.record}
pruner := &blobPruner{retentionPeriod: c.retentionPeriod}
pruner.prunedBefore.Store(uint64(c.prunedBefore))
done := pruner.notify(c.latest, l)
<-done
require.Equal(t, c.expected.called, actual.called)
require.Equal(t, c.expected.arg, actual.arg)
})
}
}
func TestTryPruneDir_SlotFromFile(t *testing.T) {
t.Run("expired blobs deleted", func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageWithFs(t)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// check that the root->slot is cached
root := scs[0].BlockRoot()
rootStr := rootString(root)
cs, ok := bs.pruner.cache.slot(root)
require.Equal(t, true, ok)
require.Equal(t, slot, cs)
// evict it from the cache so that we trigger the file read path
bs.pruner.cache.evict(root)
_, ok = bs.pruner.cache.slot(root)
require.Equal(t, false, ok)
// ensure that we see the saved files in the filesystem
files, err := listDir(fs, rootStr)
require.NoError(t, err)
require.Equal(t, 2, len(files))
pruned, err := bs.pruner.tryPruneDir(rootStr, slot+1)
require.NoError(t, err)
require.Equal(t, 2, pruned)
files, err = listDir(fs, rootStr)
require.ErrorIs(t, err, os.ErrNotExist)
require.Equal(t, 0, len(files))
})
t.Run("not expired, intact", func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageWithFs(t)
// Set slot equal to the window size, so it should be retained.
slot := bs.pruner.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// Evict slot mapping from the cache so that we trigger the file read path.
root := scs[0].BlockRoot()
rootStr := rootString(root)
bs.pruner.cache.evict(root)
_, ok := bs.pruner.cache.slot(root)
require.Equal(t, false, ok)
// Ensure that we see the saved files in the filesystem.
files, err := listDir(fs, rootStr)
require.NoError(t, err)
require.Equal(t, 2, len(files))
// This should use the slotFromFile code (simulating restart).
// Setting pruneBefore == slot, so that the slot will be outside the window (at the boundary).
pruned, err := bs.pruner.tryPruneDir(rootStr, slot)
require.NoError(t, err)
require.Equal(t, 0, pruned)
// Ensure files are still present.
files, err = listDir(fs, rootStr)
require.NoError(t, err)
require.Equal(t, 2, len(files))
})
}
func TestSlotFromBlob(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := sidecars[0]
enc, err := sc.MarshalSSZ()
require.NoError(t, err)
slot, err := slotFromBlob(bytes.NewReader(enc))
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
func testSetupBlobIdentPaths(t *testing.T, fs afero.Fs, bs *BlobStorage, idents []testIdent) []blobIdent {
created := make([]blobIdent, len(idents))
for i, id := range idents {
slot, err := slots.EpochStart(id.epoch)
require.NoError(t, err)
slot += id.offset
_, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
sc := verification.FakeVerifyForTest(t, scs[0])
require.NoError(t, bs.Save(sc))
ident := identForSidecar(sc)
_, err = fs.Stat(bs.layout.sszPath(ident))
require.NoError(t, err)
created[i] = ident
}
return created
}
func testAssertBlobsPruned(t *testing.T, fs afero.Fs, bs *BlobStorage, pruned, remain []blobIdent) {
for _, id := range pruned {
_, err := fs.Stat(bs.layout.sszPath(id))
require.NotNil(t, err)
require.Equal(t, true, os.IsNotExist(err))
}
for _, id := range remain {
_, err := fs.Stat(bs.layout.sszPath(id))
require.NoError(t, err)
}
}
type testIdent struct {
blobIdent
offset primitives.Slot
}
func testRoots(n int) [][32]byte {
roots := make([][32]byte, n)
for i := range roots {
binary.LittleEndian.PutUint32(roots[i][:], uint32(1+i))
}
return roots
}
func TestLayoutPruneBefore(t *testing.T) {
roots := testRoots(10)
cases := []struct {
name string
pruned []testIdent
remain []testIdent
pruneBefore primitives.Epoch
err error
sum pruneSummary
}{
{
name: "none pruned",
pruneBefore: 1,
pruned: []testIdent{},
remain: []testIdent{
{offset: 1, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 1, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 0}},
},
},
{
name: "expected pruned before epoch",
pruneBefore: 3,
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 5}},
{offset: 0, blobIdent: blobIdent{root: roots[2], epoch: 2, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[3], epoch: 2, index: 3}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[4], epoch: 3, index: 2}}, // boundary
{offset: 31, blobIdent: blobIdent{root: roots[5], epoch: 3, index: 0}}, // boundary
{offset: 0, blobIdent: blobIdent{root: roots[6], epoch: 4, index: 1}},
{offset: 31, blobIdent: blobIdent{root: roots[7], epoch: 4, index: 5}},
},
sum: pruneSummary{blobsPruned: 4},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t, WithLayout(LayoutNameByEpoch))
pruned := testSetupBlobIdentPaths(t, fs, bs, c.pruned)
remain := testSetupBlobIdentPaths(t, fs, bs, c.remain)
sum, err := bs.layout.pruneBefore(c.pruneBefore)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
testAssertBlobsPruned(t, fs, bs, pruned, remain)
require.Equal(t, c.sum.blobsPruned, sum.blobsPruned)
require.Equal(t, len(c.pruned), sum.blobsPruned)
require.Equal(t, len(c.sum.failedRemovals), len(sum.failedRemovals))
})
}
}
func TestSlotFromFile(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageWithFs(t)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
require.NoError(t, bs.Save(sc))
fname := namerForSidecar(sc)
sszPath := fname.path()
slot, err := slotFromFile(sszPath, fs)
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
type dirFiles struct {
name string
isDir bool
children []dirFiles
}
func (df dirFiles) reify(t *testing.T, fs afero.Fs, base string) {
fullPath := path.Join(base, df.name)
if df.isDir {
if df.name != "" {
require.NoError(t, fs.Mkdir(fullPath, directoryPermissions))
}
for _, c := range df.children {
c.reify(t, fs, fullPath)
}
} else {
fp, err := fs.Create(fullPath)
require.NoError(t, err)
_, err = fp.WriteString("derp")
require.NoError(t, err)
}
}
func (df dirFiles) childNames() []string {
cn := make([]string, len(df.children))
for i := range df.children {
cn[i] = df.children[i].name
}
return cn
}
func TestListDir(t *testing.T) {
fs := afero.NewMemMapFs()
// parent directory
fsLayout := dirFiles{isDir: true}
// break out each subdir for easier assertions
notABlob := dirFiles{name: "notABlob", isDir: true}
childlessBlob := dirFiles{name: "0x0987654321", isDir: true}
blobWithSsz := dirFiles{name: "0x1123581321", isDir: true,
children: []dirFiles{{name: "1.ssz"}, {name: "2.ssz"}},
}
blobWithSszAndTmp := dirFiles{name: "0x1234567890", isDir: true,
children: []dirFiles{{name: "5.ssz"}, {name: "0.part"}}}
fsLayout.children = append(fsLayout.children,
notABlob, childlessBlob, blobWithSsz, blobWithSszAndTmp)
topChildren := make([]string, len(fsLayout.children))
for i := range fsLayout.children {
topChildren[i] = fsLayout.children[i].name
}
fsLayout.reify(t, fs, "")
cases := []struct {
name string
dirPath string
expected []string
filter func(string) bool
err error
}{
{
name: "none pruned",
pruneBefore: 1,
pruned: []testIdent{},
remain: []testIdent{
{offset: 1, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 1, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 0}},
name: "non-existent",
dirPath: "derp",
expected: []string{},
err: os.ErrNotExist,
},
{
name: "empty",
dirPath: childlessBlob.name,
expected: []string{},
},
{
name: "top",
dirPath: ".",
expected: topChildren,
},
{
name: "custom filter: only notABlob",
dirPath: ".",
expected: []string{notABlob.name},
filter: func(s string) bool {
return s == notABlob.name
},
},
{
name: "expected pruned before epoch",
pruneBefore: 3,
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 5}},
{offset: 0, blobIdent: blobIdent{root: roots[2], epoch: 2, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[3], epoch: 2, index: 3}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[4], epoch: 3, index: 2}}, // boundary
{offset: 31, blobIdent: blobIdent{root: roots[5], epoch: 3, index: 0}}, // boundary
{offset: 0, blobIdent: blobIdent{root: roots[6], epoch: 4, index: 1}},
{offset: 31, blobIdent: blobIdent{root: roots[7], epoch: 4, index: 5}},
},
sum: pruneSummary{blobsPruned: 4},
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: filterRoot,
},
{
name: "ssz filter",
dirPath: blobWithSsz.name,
expected: blobWithSsz.childNames(),
filter: filterSsz,
},
{
name: "ssz mixed filter",
dirPath: blobWithSszAndTmp.name,
expected: []string{"5.ssz"},
filter: filterSsz,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
result, err := listDir(fs, c.dirPath)
if c.filter != nil {
result = filter(result, c.filter)
}
if c.err != nil {
require.ErrorIs(t, err, c.err)
require.Equal(t, 0, len(result))
} else {
require.NoError(t, err)
sort.Strings(c.expected)
sort.Strings(result)
require.DeepEqual(t, c.expected, result)
}
})
}
}
func TestRootFromDir(t *testing.T) {
cases := []struct {
name string
dir string
err error
root [32]byte
}{
{
name: "happy path",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cb",
root: [32]byte{255, 255, 135, 94, 29, 152, 92, 92, 203, 33, 72, 148, 152, 63, 36, 40,
237, 178, 113, 240, 248, 123, 104, 186, 112, 16, 228, 169, 157, 243, 181, 203},
},
{
name: "too short",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5c",
err: errInvalidRootString,
},
{
name: "too log",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cbb",
err: errInvalidRootString,
},
{
name: "missing prefix",
dir: "ffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cb",
err: errInvalidRootString,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
root, err := stringToRoot(c.dir)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.root, root)
})
}
}


@@ -101,7 +101,6 @@ type NoHeadAccessDatabase interface {
SaveLightClientBootstrap(ctx context.Context, blockRoot []byte, bootstrap interfaces.LightClientBootstrap) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
DeleteHistoricalDataBeforeSlot(ctx context.Context, slot primitives.Slot) error
}
// HeadAccessDatabase defines a struct with access to reading chain head data.


@@ -227,7 +227,10 @@ func (s *Store) DeleteBlock(ctx context.Context, root [32]byte) error {
return ErrDeleteJustifiedAndFinalized
}
if err := s.deleteBlock(tx, root[:]); err != nil {
if err := tx.Bucket(blocksBucket).Delete(root[:]); err != nil {
return err
}
if err := tx.Bucket(blockParentRootIndicesBucket).Delete(root[:]); err != nil {
return err
}
s.blockCache.Del(string(root[:]))
@@ -235,89 +238,6 @@ func (s *Store) DeleteBlock(ctx context.Context, root [32]byte) error {
})
}
// DeleteHistoricalDataBeforeSlot deletes all blocks and states before the given slot.
// This function deletes data from the following buckets:
// - blocksBucket
// - blockParentRootIndicesBucket
// - finalizedBlockRootsIndexBucket
// - stateBucket
// - stateSummaryBucket
// - blockRootValidatorHashesBucket
// - blockSlotIndicesBucket
// - stateSlotIndicesBucket
func (s *Store) DeleteHistoricalDataBeforeSlot(ctx context.Context, cutoffSlot primitives.Slot) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.DeleteHistoricalDataBeforeSlot")
defer span.End()
// Collect slot/root pairs in a separate read-only transaction so the deletions below can be batched into a single write transaction.
var (
roots [][]byte
slts []primitives.Slot
)
err := s.db.View(func(tx *bolt.Tx) error {
var err error
roots, slts, err = blockRootsBySlotRange(ctx, tx.Bucket(blockSlotIndicesBucket), primitives.Slot(0), cutoffSlot, nil, nil, nil)
if err != nil {
return errors.Wrap(err, "could not retrieve block roots")
}
return nil
})
if err != nil {
return errors.Wrap(err, "could not retrieve block roots and slots")
}
// Perform all deletions in a single transaction for atomicity
return s.db.Update(func(tx *bolt.Tx) error {
for _, root := range roots {
// Delete block
if err = s.deleteBlock(tx, root); err != nil {
return err
}
// Delete finalized block roots index
if err = tx.Bucket(finalizedBlockRootsIndexBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete finalized block root index")
}
// Delete state
if err = tx.Bucket(stateBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete state")
}
// Delete state summary
if err = tx.Bucket(stateSummaryBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete state summary")
}
// Delete validator entries
if err = s.deleteValidatorHashes(tx, root); err != nil {
return errors.Wrap(err, "could not delete validators")
}
}
for _, slot := range slts {
// Delete slot indices
if err = tx.Bucket(blockSlotIndicesBucket).Delete(bytesutil.SlotToBytesBigEndian(slot)); err != nil {
return errors.Wrap(err, "could not delete block slot index")
}
if err = tx.Bucket(stateSlotIndicesBucket).Delete(bytesutil.SlotToBytesBigEndian(slot)); err != nil {
return errors.Wrap(err, "could not delete state slot index")
}
}
// Delete all caches after we have deleted everything from buckets.
// This is done after the buckets are deleted to avoid any issues in case of transaction rollback.
for _, root := range roots {
// Delete block from cache
s.blockCache.Del(string(root))
// Delete state summary from cache
s.stateSummaryCache.delete([32]byte(root))
}
return nil
})
}
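DeleteHistoricalDataBeforeSlot follows a collect-then-delete shape: the doomed roots are gathered in a read-only View transaction, and every deletion is then performed in one Update so the batch commits or rolls back as a unit. A standalone bbolt sketch of the same pattern, with illustrative bucket and key names:

package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("demo.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove("demo.db")
	defer db.Close()
	bucket := []byte("slots")
	// Seed keys 0..9, big-endian so cursor order matches numeric order.
	if err := db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(bucket)
		if err != nil {
			return err
		}
		for i := uint64(0); i < 10; i++ {
			k := make([]byte, 8)
			binary.BigEndian.PutUint64(k, i)
			if err := b.Put(k, []byte{byte(i)}); err != nil {
				return err
			}
		}
		return nil
	}); err != nil {
		log.Fatal(err)
	}
	// Phase 1: collect keys below the cutoff in a read-only transaction.
	const cutoff = 5
	var doomed [][]byte
	if err := db.View(func(tx *bolt.Tx) error {
		c := tx.Bucket(bucket).Cursor()
		for k, _ := c.First(); k != nil && binary.BigEndian.Uint64(k) < cutoff; k, _ = c.Next() {
			key := make([]byte, len(k))
			copy(key, k) // cursor keys are only valid inside the transaction
			doomed = append(doomed, key)
		}
		return nil
	}); err != nil {
		log.Fatal(err)
	}
	// Phase 2: delete everything in a single atomic write transaction.
	if err := db.Update(func(tx *bolt.Tx) error {
		for _, k := range doomed {
			if err := tx.Bucket(bucket).Delete(k); err != nil {
				return err
			}
		}
		return nil
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("deleted", len(doomed), "keys") // deleted 5 keys
}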
// SaveBlock to the db.
func (s *Store) SaveBlock(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlock")
@@ -689,7 +609,7 @@ func blockRootsByFilter(ctx context.Context, tx *bolt.Tx, f *filters.QueryFilter
// We retrieve block roots that match a filter criteria of slot ranges, if specified.
filtersMap := f.Filters()
rootsBySlotRange, _, err := blockRootsBySlotRange(
rootsBySlotRange, err := blockRootsBySlotRange(
ctx,
tx.Bucket(blockSlotIndicesBucket),
filtersMap[filters.StartSlot],
@@ -707,7 +627,6 @@ func blockRootsByFilter(ctx context.Context, tx *bolt.Tx, f *filters.QueryFilter
// that list of roots to look up the block. These blocks will
// meet the filter criteria.
indices := lookupValuesForIndices(ctx, indicesByBucket, tx)
keys := rootsBySlotRange
if len(indices) > 0 {
// If we have found indices that meet the filter criteria, and there are also
@@ -734,13 +653,13 @@ func blockRootsBySlotRange(
ctx context.Context,
bkt *bolt.Bucket,
startSlotEncoded, endSlotEncoded, startEpochEncoded, endEpochEncoded, slotStepEncoded interface{},
) ([][]byte, []primitives.Slot, error) {
) ([][]byte, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.blockRootsBySlotRange")
defer span.End()
// Return nothing when all slot parameters are missing
if startSlotEncoded == nil && endSlotEncoded == nil && startEpochEncoded == nil && endEpochEncoded == nil {
return [][]byte{}, nil, nil
return [][]byte{}, nil
}
var startSlot, endSlot primitives.Slot
@@ -761,11 +680,11 @@ func blockRootsBySlotRange(
if startEpochOk && endEpochOk {
startSlot, err = slots.EpochStart(startEpoch)
if err != nil {
return nil, nil, err
return nil, err
}
endSlot, err = slots.EpochStart(endEpoch)
if err != nil {
return nil, nil, err
return nil, err
}
endSlot = endSlot + params.BeaconConfig().SlotsPerEpoch - 1
}
@@ -776,15 +695,14 @@ func blockRootsBySlotRange(
return key != nil && bytes.Compare(key, max) <= 0
}
if endSlot < startSlot {
return nil, nil, errInvalidSlotRange
return nil, errInvalidSlotRange
}
rootsRange := endSlot.SubSlot(startSlot).Div(step)
roots := make([][]byte, 0, rootsRange)
var slts []primitives.Slot
c := bkt.Cursor()
for k, v := c.Seek(min); conditional(k, max); k, v = c.Next() {
slot := bytesutil.BytesToSlotBigEndian(k)
if step > 1 {
slot := bytesutil.BytesToSlotBigEndian(k)
if slot.SubSlot(startSlot).Mod(step) != 0 {
continue
}
@@ -795,9 +713,8 @@ func blockRootsBySlotRange(
splitRoots = append(splitRoots, v[i:i+32])
}
roots = append(roots, splitRoots...)
slts = append(slts, slot)
}
return roots, slts, nil
return roots, nil
}
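The cursor scan above only works because slot keys are stored big-endian: bbolt iterates keys in lexicographic byte order, and big-endian is the fixed-width encoding where byte order agrees with numeric order, so Seek(min) and the max comparison bound the range correctly. A standalone demonstration:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	a, b := make([]byte, 8), make([]byte, 8)
	binary.BigEndian.PutUint64(a, 255)
	binary.BigEndian.PutUint64(b, 256)
	fmt.Println(bytes.Compare(a, b)) // -1: byte order matches numeric order
	binary.LittleEndian.PutUint64(a, 255)
	binary.LittleEndian.PutUint64(b, 256)
	fmt.Println(bytes.Compare(a, b)) // 1: little-endian would break range scans
}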
// blockRootsBySlot retrieves the block roots by slot
@@ -896,9 +813,9 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(denebBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Deneb block")
}
case HasElectraKey(enc):
case hasElectraKey(enc):
rawBlock = &ethpb.SignedBeaconBlockElectra{}
if err := rawBlock.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
if err := rawBlock.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra block")
}
case hasElectraBlindKey(enc):
@@ -957,7 +874,7 @@ func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
if blk.IsBlinded() {
return electraBlindKey, nil
}
return ElectraKey, nil
return electraKey, nil
}
if v >= version.Deneb {
@@ -991,32 +908,3 @@ func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}
func (s *Store) deleteBlock(tx *bolt.Tx, root []byte) error {
if err := tx.Bucket(blocksBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete block")
}
if err := tx.Bucket(blockParentRootIndicesBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete block parent indices")
}
return nil
}
func (s *Store) deleteValidatorHashes(tx *bolt.Tx, root []byte) error {
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return err
}
if !ok {
return nil
}
// Delete the validator hash index
if err = tx.Bucket(blockRootValidatorHashesBucket).Delete(root); err != nil {
return errors.Wrap(err, "could not delete validator index")
}
return nil
}
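The stores shown in these files share a fork-keyed envelope: a saved value is the snappy compression of a fork key concatenated with the SSZ payload, and decoders dispatch on the key prefix (hasElectraKey, hasFuluKey, and friends). A standalone sketch of the scheme, with key values mirroring the vars used in these files and a stand-in payload:

package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

var (
	denebKey   = []byte("deneb")
	electraKey = []byte("electra")
)

// encode wraps an ssz payload in the fork-key envelope.
func encode(key, ssz []byte) []byte {
	return snappy.Encode(nil, append(key, ssz...))
}

// hasKey reports whether the decompressed value carries the given fork key.
func hasKey(enc, key []byte) bool {
	return len(enc) > len(key) && bytes.Equal(enc[:len(key)], key)
}

func main() {
	enc := encode(electraKey, []byte{0x01, 0x02})
	dec, err := snappy.Decode(nil, enc)
	if err != nil {
		panic(err)
	}
	switch {
	case hasKey(dec, electraKey):
		fmt.Println("electra payload:", dec[len(electraKey):])
	case hasKey(dec, denebKey):
		fmt.Println("deneb payload:", dec[len(denebKey):])
	}
}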


@@ -2,14 +2,12 @@ package kv
import (
"context"
"fmt"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -20,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
bolt "go.etcd.io/bbolt"
"google.golang.org/protobuf/proto"
)
@@ -356,189 +353,6 @@ func TestStore_DeleteFinalizedBlock(t *testing.T) {
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteJustifiedAndFinalized)
}
func TestStore_HistoricalDataBeforeSlot(t *testing.T) {
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
db := setupDB(t)
ctx := context.Background()
// Save genesis block root
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
// Create and save blocks for 4 epochs
blks := makeBlocks(t, 0, slotsPerEpoch*4, genesisBlockRoot)
require.NoError(t, db.SaveBlocks(ctx, blks))
// Mark state validator migration as complete
err := db.db.Update(func(tx *bolt.Tx) error {
return tx.Bucket(migrationsBucket).Put(migrationStateValidatorsKey, migrationCompleted)
})
require.NoError(t, err)
migrated, err := db.isStateValidatorMigrationOver()
require.NoError(t, err)
require.Equal(t, true, migrated)
// Create state summaries and states for each block
ss := make([]*ethpb.StateSummary, len(blks))
states := make([]state.BeaconState, len(blks))
for i, blk := range blks {
slot := blk.Block().Slot()
r, err := blk.Block().HashTreeRoot()
require.NoError(t, err)
// Create and save state summary
ss[i] = &ethpb.StateSummary{
Slot: slot,
Root: r[:],
}
// Create and save state with validator entries
vals := make([]*ethpb.Validator, 2)
for j := range vals {
vals[j] = &ethpb.Validator{
PublicKey: bytesutil.PadTo([]byte{byte(i*j + 1)}, 48),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte(i*j + 2)}, 32),
}
}
st, err := util.NewBeaconState(func(state *ethpb.BeaconState) error {
state.Validators = vals
state.Slot = slot
return nil
})
require.NoError(t, err)
require.NoError(t, db.SaveState(ctx, st, r))
states[i] = st
// Verify validator entries are saved to db
valsActual, err := db.validatorEntries(ctx, r)
require.NoError(t, err)
for j, val := range valsActual {
require.DeepEqual(t, vals[j], val)
}
}
require.NoError(t, db.SaveStateSummaries(ctx, ss))
// Verify slot indices exist before deletion
err = db.db.View(func(tx *bolt.Tx) error {
blockSlotBkt := tx.Bucket(blockSlotIndicesBucket)
stateSlotBkt := tx.Bucket(stateSlotIndicesBucket)
for i := uint64(0); i < slotsPerEpoch; i++ {
slot := bytesutil.SlotToBytesBigEndian(primitives.Slot(i + 1))
assert.NotNil(t, blockSlotBkt.Get(slot), "Expected block slot index to exist")
assert.NotNil(t, stateSlotBkt.Get(slot), "Expected state slot index to exist", i)
}
return nil
})
require.NoError(t, err)
// Delete data before slot at epoch 1
require.NoError(t, db.DeleteHistoricalDataBeforeSlot(ctx, primitives.Slot(slotsPerEpoch)))
// Verify blocks from epoch 0 are deleted
for i := uint64(0); i < slotsPerEpoch; i++ {
root, err := blks[i].Block().HashTreeRoot()
require.NoError(t, err)
// Check block is deleted
retrievedBlocks, err := db.BlocksBySlot(ctx, primitives.Slot(i))
require.NoError(t, err)
assert.Equal(t, 0, len(retrievedBlocks))
// Verify block does not exist
assert.Equal(t, false, db.HasBlock(ctx, root))
// Verify block parent root does not exist
err = db.db.View(func(tx *bolt.Tx) error {
require.Equal(t, 0, len(tx.Bucket(blockParentRootIndicesBucket).Get(root[:])))
return nil
})
require.NoError(t, err)
// Verify state is deleted
hasState := db.HasState(ctx, root)
assert.Equal(t, false, hasState)
// Verify state summary is deleted
hasSummary := db.HasStateSummary(ctx, root)
assert.Equal(t, false, hasSummary)
// Verify validator hashes for block roots are deleted
err = db.db.View(func(tx *bolt.Tx) error {
assert.Equal(t, 0, len(tx.Bucket(blockRootValidatorHashesBucket).Get(root[:])))
return nil
})
require.NoError(t, err)
}
// Verify slot indices are deleted
err = db.db.View(func(tx *bolt.Tx) error {
blockSlotBkt := tx.Bucket(blockSlotIndicesBucket)
stateSlotBkt := tx.Bucket(stateSlotIndicesBucket)
for i := uint64(0); i < slotsPerEpoch; i++ {
slot := bytesutil.SlotToBytesBigEndian(primitives.Slot(i + 1))
assert.Equal(t, 0, len(blockSlotBkt.Get(slot)), fmt.Sprintf("Expected block slot index to be deleted, slot: %d", slot))
assert.Equal(t, 0, len(stateSlotBkt.Get(slot)), fmt.Sprintf("Expected state slot index to be deleted, slot: %d", slot))
}
return nil
})
require.NoError(t, err)
// Verify blocks from epochs 1-3 still exist
for i := slotsPerEpoch; i < slotsPerEpoch*4; i++ {
root, err := blks[i].Block().HashTreeRoot()
require.NoError(t, err)
// Verify block exists
assert.Equal(t, true, db.HasBlock(ctx, root))
// Verify the remaining block parent root indices exist, except for the last slot: the index is keyed by parent root, and the last block is not the parent of any saved block.
if i < slotsPerEpoch*4-1 {
err = db.db.View(func(tx *bolt.Tx) error {
require.NotNil(t, tx.Bucket(blockParentRootIndicesBucket).Get(root[:]), fmt.Sprintf("Expected block parent index to exist, slot: %d", i))
return nil
})
require.NoError(t, err)
}
// Verify state exists
hasState := db.HasState(ctx, root)
assert.Equal(t, true, hasState)
// Verify state summary exists
hasSummary := db.HasStateSummary(ctx, root)
assert.Equal(t, true, hasSummary)
// Verify slot indices still exist
err = db.db.View(func(tx *bolt.Tx) error {
blockSlotBkt := tx.Bucket(blockSlotIndicesBucket)
stateSlotBkt := tx.Bucket(stateSlotIndicesBucket)
slot := bytesutil.SlotToBytesBigEndian(primitives.Slot(i + 1))
assert.NotNil(t, blockSlotBkt.Get(slot), "Expected block slot index to exist")
assert.NotNil(t, stateSlotBkt.Get(slot), "Expected state slot index to exist")
return nil
})
require.NoError(t, err)
// Verify validator entries still exist
valsActual, err := db.validatorEntries(ctx, root)
require.NoError(t, err)
assert.NotNil(t, valsActual)
// Verify remaining validator hashes for block roots exists
err = db.db.View(func(tx *bolt.Tx) error {
assert.NotNil(t, tx.Bucket(blockRootValidatorHashesBucket).Get(root[:]))
return nil
})
require.NoError(t, err)
}
}
func TestStore_GenesisBlock(t *testing.T) {
db := setupDB(t)
ctx := context.Background()


@@ -52,12 +52,11 @@ func hasDenebBlindKey(enc []byte) bool {
return bytes.Equal(enc[:len(denebBlindKey)], denebBlindKey)
}
// HasElectraKey verifies if the encoding is Electra compatible.
func HasElectraKey(enc []byte) bool {
if len(ElectraKey) >= len(enc) {
func hasElectraKey(enc []byte) bool {
if len(electraKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(ElectraKey)], ElectraKey)
return bytes.Equal(enc[:len(electraKey)], electraKey)
}
func hasElectraBlindKey(enc []byte) bool {


@@ -167,13 +167,13 @@ func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, []
}
m = bootstrap
syncCommitteeHash = enc[len(denebKey) : len(denebKey)+32]
case HasElectraKey(enc):
case hasElectraKey(enc):
bootstrap := &ethpb.LightClientBootstrapElectra{}
if err := bootstrap.UnmarshalSSZ(enc[len(ElectraKey)+32:]); err != nil {
if err := bootstrap.UnmarshalSSZ(enc[len(electraKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Electra light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(ElectraKey) : len(ElectraKey)+32]
syncCommitteeHash = enc[len(electraKey) : len(electraKey)+32]
default:
return nil, nil, errors.New("decoding of saved light client bootstrap is unsupported")
}
@@ -215,7 +215,7 @@ func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod u
if err != nil {
return nil, err
}
return updates, nil
return updates, err
}
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error) {
@@ -277,9 +277,9 @@ func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
return nil, errors.Wrap(err, "could not unmarshal Deneb light client update")
}
m = update
case HasElectraKey(enc):
case hasElectraKey(enc):
update := &ethpb.LightClientUpdateElectra{}
if err := update.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
if err := update.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra light client update")
}
m = update
@@ -292,7 +292,7 @@ func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
func keyForLightClientUpdate(v int) ([]byte, error) {
switch v {
case version.Electra:
return ElectraKey, nil
return electraKey, nil
case version.Deneb:
return denebKey, nil
case version.Capella:


@@ -53,7 +53,7 @@ var (
saveBlindedBeaconBlocksKey = []byte("save-blinded-beacon-blocks")
denebKey = []byte("deneb")
denebBlindKey = []byte("blind-deneb")
ElectraKey = []byte("electra")
electraKey = []byte("electra")
electraBlindKey = []byte("blind-electra")
fuluKey = []byte("fulu")
fuluBlindKey = []byte("blind-fulu")


@@ -357,7 +357,7 @@ func (s *Store) processElectra(ctx context.Context, pbState *ethpb.BeaconStateEl
if err != nil {
return err
}
encodedState := snappy.Encode(nil, append(ElectraKey, rawObj...))
encodedState := snappy.Encode(nil, append(electraKey, rawObj...))
if err := bucket.Put(rootHash, encodedState); err != nil {
return err
}
@@ -518,9 +518,9 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
switch {
case hasFuluKey(enc):
protoState := &ethpb.BeaconStateElectra{}
protoState := &ethpb.BeaconStateFulu{}
if err := protoState.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Fulu")
return nil, errors.Wrap(err, "failed to unmarshal encoding for Electra")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
@@ -530,9 +530,9 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
protoState.Validators = validatorEntries
}
return statenative.InitializeFromProtoUnsafeFulu(protoState)
case HasElectraKey(enc):
case hasElectraKey(enc):
protoState := &ethpb.BeaconStateElectra{}
if err := protoState.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
if err := protoState.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Electra")
}
ok, err := s.isStateValidatorMigrationOver()
@@ -688,9 +688,9 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(ElectraKey, rawObj...)), nil
return snappy.Encode(nil, append(electraKey, rawObj...)), nil
case version.Fulu:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateElectra)
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateFulu)
if !ok {
return nil, errors.New("non valid inner state")
}
@@ -725,7 +725,7 @@ func (s *Store) validatorEntries(ctx context.Context, blockRoot [32]byte) ([]*et
idxBkt := tx.Bucket(blockRootValidatorHashesBucket)
valKey := idxBkt.Get(blockRoot[:])
if len(valKey) == 0 {
return errors.Errorf("validator keys not found for given block root: %x", blockRoot)
return errors.Errorf("invalid compressed validator keys length")
}
// decompress the keys and check if they are of proper length.

Some files were not shown because too many files have changed in this diff