Compare commits

..

29 Commits

Author SHA1 Message Date
Manu NALEPA
5960a6bbab RefreshPersistentSubnets Officialise hack. 2025-04-08 16:25:29 +02:00
Manu NALEPA
725dda51c3 Fix tests 2025-04-08 16:13:44 +02:00
Manu NALEPA
6a44fef25e Fix tests 2025-04-08 15:46:25 +02:00
Manu NALEPA
654f927fc2 Increase max blobs fulu 2025-04-08 14:25:19 +02:00
Manu NALEPA
08bb7c72fb RefreshPersistentSubnets: Hack to announce the cgc one epoch in advance. 2025-04-08 11:55:54 +02:00
Manu NALEPA
9f91957ec3 Fix UpgradeToFulu. 2025-04-08 10:37:14 +02:00
Manu NALEPA
9a80f043de Implement distributed block building.
Credits: Francis
2025-04-08 10:37:01 +02:00
Francis Li
8192e653e7 Add new vars defined in consensus-spec (#15101) 2025-04-08 00:09:27 +02:00
Manu NALEPA
613c9c5de9 DataColumnSidecar: Rename variables to stick with
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#datacolumnsidecar
2025-04-04 15:03:43 +02:00
Manu NALEPA
731a6fec5c DataColumnsAlignWithBlock: Move into its own file. 2025-04-04 14:10:23 +02:00
Manu NALEPA
ffc794182f Kasey comment: Implement and use storageIndices. 2025-04-04 11:39:07 +02:00
Manu NALEPA
c7bd2d11b2 Kasey comment: prune - Flatten the == case. 2025-04-03 18:06:29 +02:00
Manu NALEPA
02a6c034a7 Kasey comment: DataColumnStorage.Get: Set verified into the verification package. 2025-04-03 17:16:49 +02:00
Manu NALEPA
584ef60ab0 Kasey comment: limit ==> nonZeroOffset. 2025-04-03 15:50:53 +02:00
Manu NALEPA
bc017ed493 Kasey comment: Lock mutexes before checking if the file exists. 2025-04-03 15:39:43 +02:00
Manu NALEPA
22b3327a97 Kasey comment: Compute file size instead of reading it from stats. 2025-04-03 15:23:10 +02:00
Manu NALEPA
aa84fbc597 Kasey comment: Read all metadata at once. 2025-04-03 14:40:10 +02:00
Manu NALEPA
c6e8c1a380 Kasey comment: Stop exporting errors for nothing. 2025-04-03 12:12:47 +02:00
Manu NALEPA
7b9eef4a45 Kasey comment: Add comment. 2025-04-03 11:50:17 +02:00
Manu NALEPA
5dfef04a6b Kasey comment: Fix typo. 2025-04-03 11:47:15 +02:00
Manu NALEPA
6d16ed4a89 Kasey comment: AAAA! 2025-04-03 11:36:09 +02:00
Manu NALEPA
799ed429ca Store ==> Save. 2025-04-03 11:35:19 +02:00
Manu NALEPA
30ee3a7225 Kasey comment: Move CreateTestVerifiedRoDataColumnSidecars in beacon-chain/verification/fake. 2025-04-03 11:34:17 +02:00
Manu NALEPA
07f73857f9 Kasey comment: indice ==> index 2025-04-03 11:09:57 +02:00
Manu NALEPA
5c76dc6c3a Kasey comment: IsDataAvailable: Remove nodeID. 2025-04-03 11:05:14 +02:00
Manu NALEPA
fa81334983 Kasey comment: Fix clutter 2025-04-03 10:38:13 +02:00
Manu NALEPA
7fd169da5f Kasey comment: Fix typo 2025-04-03 10:16:40 +02:00
Manu NALEPA
ee4e09effe Implement data columns storage 2025-03-31 17:37:00 +02:00
Manu NALEPA
91738a24fa DB Filesystem: Move all data column related code to data_columns.go
Only code move.
2025-03-18 23:10:53 +01:00
3523 changed files with 61778 additions and 99525 deletions

View File

@@ -22,6 +22,7 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true

View File

@@ -50,7 +50,7 @@ build:nostamp --nostamp
# Build metadata
build --build_metadata=ROLE=CI
build --build_metadata=REPO_URL=https://github.com/OffchainLabs/prysm.git
build --build_metadata=REPO_URL=https://github.com/prysmaticlabs/prysm.git
build --workspace_status_command=./hack/workspace_status_ci.sh
# Buildbuddy

View File

@@ -11,7 +11,7 @@ name = "go"
enabled = true
[analyzers.meta]
import_paths = ["github.com/OffchainLabs/prysm/v6"]
import_paths = ["github.com/prysmaticlabs/prysm/v5"]
[[analyzers]]
name = "test-coverage"

View File

@@ -1,6 +1,6 @@
name: 🐞 Bug report
description: Report a bug or problem with running Prysm
type: "Bug"
labels: ["Bug"]
body:
- type: markdown
attributes:

View File

@@ -1,4 +1,4 @@
FROM golang:1.24-alpine
FROM golang:1.23-alpine
COPY entrypoint.sh /entrypoint.sh

View File

@@ -12,18 +12,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout source code
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
uses: actions/checkout@v3
- name: Download unclog binary
uses: dsaltares/fetch-gh-release-asset@aa2ab1243d6e0d5b405b973c89fa4d06a2d0fff7 # 1.1.2
with:
repo: OffchainLabs/unclog
version: "tags/v0.1.5"
version: "tags/v0.1.3"
file: "unclog"
- name: Get new changelog files
id: new-changelog-files
uses: OffchainLabs/gh-action-changed-files@9200e69727eb73eb060652b19946b8a2fdfb654b # v4.0.8
uses: tj-actions/changed-files@v45
with:
files: |
changelog/**.md

View File

@@ -1,43 +0,0 @@
name: Check Spec References
on: [push, pull_request]
jobs:
check-specrefs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Check version consistency
run: |
WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/')
ETHSPECIFY_VERSION=$(grep '^version:' specrefs/.ethspecify.yml | sed 's/version: //')
if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then
echo "Version mismatch between WORKSPACE and ethspecify"
echo " WORKSPACE: $WORKSPACE_VERSION"
echo " specrefs/.ethspecify.yml: $ETHSPECIFY_VERSION"
exit 1
else
echo "Versions match: $WORKSPACE_VERSION"
fi
- name: Install ethspecify
run: python3 -mpip install ethspecify
- name: Update spec references
run: ethspecify process --path=specrefs
- name: Check for differences
run: |
if ! git diff --exit-code specrefs >/dev/null; then
echo "Spec references are out-of-date!"
echo ""
git --no-pager diff specrefs
exit 1
else
echo "Spec references are up-to-date!"
fi
- name: Check spec references
run: ethspecify check --path=specrefs

View File

@@ -28,15 +28,15 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Go 1.24
- name: Set up Go 1.23
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.23.5'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
go install github.com/securego/gosec/v2/cmd/gosec@v2.22.1
gosec -exclude-generated -exclude=G307,G115 -exclude-dir=crypto/bls/herumi ./...
go install github.com/securego/gosec/v2/cmd/gosec@v2.19.0
gosec -exclude-generated -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
lint:
name: Lint
@@ -45,16 +45,16 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Go 1.24
- name: Set up Go 1.23
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.23.5'
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.64.5
version: v1.63.4
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
@@ -64,7 +64,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.23.5'
id: go
- name: Check out code into the Go module directory

View File

@@ -75,7 +75,6 @@ linters:
- tagliatelle
- thelper
- unparam
- usetesting
- varnamelen
- wrapcheck
- wsl

View File

@@ -1,5 +1,5 @@
language: go
go_import_path: github.com/OffchainLabs/prysm
go_import_path: github.com/prysmaticlabs/prysm
sudo: false
matrix:
include:

View File

@@ -6,13 +6,13 @@ load("@io_bazel_rules_go//go:def.bzl", "nogo")
load("@bazel_skylib//rules:common_settings.bzl", "string_setting")
load("@prysm//tools/nogo_config:def.bzl", "nogo_config_exclude")
prefix = "github.com/OffchainLabs/prysm"
prefix = "github.com/prysmaticlabs/prysm"
exports_files([
"LICENSE.md",
])
# gazelle:prefix github.com/OffchainLabs/prysm/v6
# gazelle:prefix github.com/prysmaticlabs/prysm/v5
# gazelle:map_kind go_library go_library @prysm//tools/go:def.bzl
# gazelle:map_kind go_test go_test @prysm//tools/go:def.bzl
# gazelle:map_kind go_repository go_repository @prysm//tools/go:def.bzl
@@ -165,7 +165,7 @@ STATICCHECK_ANALYZERS = [
"sa6006",
"sa9001",
"sa9002",
"sa9003",
#"sa9003", # Doesn't build. See https://github.com/dominikh/go-tools/pull/1483
"sa9004",
"sa9005",
"sa9006",
@@ -194,11 +194,9 @@ nogo(
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",
"//tools/analyzers/maligned:go_default_library",
"//tools/analyzers/nop:go_default_library",
"//tools/analyzers/nopanic:go_default_library",
"//tools/analyzers/properpermissions:go_default_library",
"//tools/analyzers/recursivelock:go_default_library",
"//tools/analyzers/shadowpredecl:go_default_library",

View File

@@ -4,342 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05
This release has more work on PeerDAS, and light client support. Additionally, we have a few bug fixes:
- Blob cache size now correctly set at startup.
- A fix for slashing protection history exports where the validator database was in a nested folder.
- Corrected behavior of the API call for state committees with an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` for Prysm docker images.
In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.
### Added
- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added /bin/sh symlink to docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecars verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- random forkchoice spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)
### Changed
- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - Exit early if no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)
### Fixed
- Fix cyclical dependencies issue when using testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set seen blob cache size correctly based on current slot time at start up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/`. (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)
## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21
This release has important bugfixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:
- Fixed pending consolidations endpoint to return the correct response.
- Fixed incorrect field name from pending partial withdrawals response.
- Fixed attester slashing to return an empty array instead of nil/null.
- Fixed validator participation and active set changes endpoints to accept a `{state_id}` parameter.
Other improvements include:
- Disabled deposit log processing routine for Electra and beyond.
Operators are encouraged to update at their own convenience.
### Added
- ssz static spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15279)
- finality and merkle proof spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15286)
- sanity and rewards spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15285)
### Changed
- Added more tracing spans to various helpers related to GetDuties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15271)
- Disable log processing after deposit requests are activated. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15274)
### Fixed
- fixed wrong handler for get pending consolidations endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15290)
- Fixed /eth/v2/beacon/pool/attester_slashings no slashings returns empty array instead of nil. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15291)
- Fix Prysm endpoints `/prysm/v1/validators/{state_id}/participation` and `/prysm/v1/validators/{state_id}/active_set_changes` to properly handle `{state_id}`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15245)
## [v6.0.2](https://github.com/prysmaticlabs/prysm/compare/v6.0.1...v6.0.2) - 2025-05-12
This is a patch release to fix a few important bugs. Most importantly, we have adjusted the index limit for field tries in the beacon state to better support Pectra states. This should alleviate memory issues that clients are seeing since Pectra mainnet fork.
### Added
- Enable light client gossip for optimistic and finality updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15220)
- Implement peerDAS core functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15192)
- Force duties start on received blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15251)
- Added additional tracing spans for the GetDuties routine. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15258)
### Changed
- Use otelgrpc for tracing grpc server and client. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15237)
- Upgraded ristretto to v2.2.0, for RISC-V support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15170)
- Update spec to v1.5.0 compliance which changes minimal execution requests size. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15256)
- Increase indices limit in field trie rebuilding. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15252)
- Increase sepolia gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15253)
### Fixed
- Fixed wrong field name in pending partial withdrawals was returned on state json representation, described in https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/beacon-chain.md#pendingpartialwithdrawal. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15254)
- Fixed gocognit on propose block rest path. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15147)
## [v6.0.1](https://github.com/prysmaticlabs/prysm/compare/v6.0.0...v6.0.1) - 2025-05-02
This release fixes two bugs related to the `payload_attributes` [event stream](https://ethereum.github.io/beacon-APIs/#/Events/eventstream). If you are using or planning to use this endpoint, upgrading to version 6.0.1 is mandatory.
Also, a reminder: like other Beacon API endpoints, when a node is syncing, it may return historical data as `finalized` or `head`. Until the node is fully synced to the head of the chain, you should avoid using this data, depending on your application's needs.
We currently recommend against using the `--enable-beacon-rest-api` flag on Mainnet. As you may have noticed, we put a deprecation notice on our gRPC code, in particular on gRPC-related flags. The reason for this is that we want to eventually remove gRPC and have REST HTTP as the standard way of communication between the validator client and the beacon node. That being said, the REST option is still unstable and thus marked as experimental in the flag's description (the flag is `--enable-beacon-rest-api`). Therefore we encourage everyone to keep using gRPC, which is currently the default. It is fine to test the REST option on testnets, but doing it on Mainnet can lead to missing attestations and even missing blocks.
**Updating to v6.0.0 or later is required for Pectra Mainnet!**
This patch release has a few important fixes from v6.0.0. Updating to this release is encouraged.
### Added
- `UpgradeToFulu` spectests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15190)
- PeerDAS related KZG wrappers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15186)
- Add light client p2p broadcaster functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15175)
- Added immediate broadcasting of proposer slashings when equivocating blocks are detected during block processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14693)
- Added 2 new errors: `HeadStateErr` and `ErrCouldNotVerifyBlockHeader`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14693)
- Implement pending consolidations Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15219)
- PeerDAS: Add needed proto files and corresponding generated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15187)
- Add light client p2p validator and subscriber functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15214)
### Changed
- Refactored internal function `reValidateSubscriptions` to `pruneSubscriptions` in `beacon-chain/sync/subscriber.go` for improved clarity, addressing a TODO comment. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15160)
- Updated geth to v1.15.9. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15216)
- Removed the slot from `UpdateDuties`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15223)
- Update hoodi bootnodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15240)
### Fixed
- avoid nondeterministic default fork value when generating genesis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15151)
- `UpgradeToFulu`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15190)
- fixed underflow with balances in leaking edge case with expected withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15191)
- Fixes our generated ssz files to have the correct package name. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15199)
- Fixes our blob sidecar by root request lists for electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15209)
- Ensure that the `payload_attributes` event has a consistent view of the head state by passing the head block in the event and using stategen to retrieve the corresponding state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15213)
- Pass parent context to update duties when dependent roots change. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15221)
- Process slots across epoch for payload attribute event. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15228)
- extend the payload attribute computation deadline to the beginning of the proposal slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15230)
### Security
- Fix CVE-2025-22869. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-22870. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-22872. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-30204. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
## [v6.0.0](https://github.com/prysmaticlabs/prysm/compare/v5.3.2...v6.0.0) - 2025-04-21
This release introduces Mainnet support for the upcoming Electra + Prague (Pectra) fork. The fork is scheduled for mainnet epoch 364032 (May 7, 2025, 10:05:11 UTC). You MUST update Prysm Beacon Node, Prysm Validator Client, and your execution layer client to the Pectra ready release prior to the fork to stay on the correct chain.
Besides Pectra, we have more light client API support, cleanups, and a few bugfixes. Please review the changelog below and update your client as soon as practical before May 7.
This release is **mandatory** for all operators before May 7.
### Added
- Implemented validator identities Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15086)
- Add SSZ support to light client updates by range API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15082)
- Add light client ssz types to the spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15097)
- Added the ability for execution requests to be tested in e2e with electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14971)
- Add warning messages for gas limit ranges that might be problematic. Low gas limits (≤10% of default) may cause transactions to fail, while high gas limits (>150% of default) could lead to block propagation issues. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15078)
- Add light client store object to the beacon node object. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15120)
- prysmctl option in wrapper script to generate devnet ssz. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15145)
- Add support for Electra fork epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15132)
### Changed
- The validator client will no longer use the full list of committee values but instead use the committee length and validator committee index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15039)
- Remove the header `Content-Disposition` from the `httputil.WriteSSZ` function. No `filename` parameter is needed anymore. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15092)
- Sort attestations in proposer block by reward. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15093)
- More efficient query method for stategen to retrieve blocks between a given state and the replay target block. This avoids attempting to look up blocks that are not needed for head replay queries, which may be missing due to a previous rollback bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15063)
- removed old web3signer metrics in favor for a universal one. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14920)
- Deprecated everything related with the gRPC API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14944)
- Migrate Prysm repo to Offchain Labs organization ahead of Pectra upgrade v6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15140)
### Deprecated
- deprecates and removes usage of the `--trace` flag and`--cpuprofile` flag in favor of just using `--pprof`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15083)
### Removed
- Remove /eth/v1/beacon/states/head/committees call when getting duties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15039)
- Removed unused hack scripts. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15157)
- Remove `disable-committee-aware-packing` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15162)
- Remove deprecated flags for the major release. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15165)
- Removed Beacon API endpoints which have been deprecated at the Deneb fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15166)
### Fixed
- The `--rpc` flag will now properly enable the keymanager APIs without web. The `--web` will enable both validator api endpoints and web. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15080)
- Use latest state to pack attestation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15113)
- Clean up dangling block index entries for blocks that were previously deleted by incomplete cleanup code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15040)
- Fixed to use io stream instead of stream read. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15089)
- When using a DV, send all aggregations for a slot and committee. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15110)
- Fixed a bug in consolidation request processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15122)
- Fix State Getter for pending withdrawal balance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15123)
- Fixed a bug in checking for attestation lengths in our block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15134)
- Fix Committee Index Check For Aggregates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15146)
- Fix filtering by committee index post-Electra in `ListAttestationsV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15148)
- Peers giving invalid data in range syncing are now downscored. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15149)
- Adding fork guard to attestation api endpoints so that it doesn't accidentally include wrong attestation types in the pool. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15161)
- fixed underflow with balances in leaking edge case with expected withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15191)
- Attribute block and blob issues to correct peers during range syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15173)
## [v5.3.2](https://github.com/prysmaticlabs/prysm/compare/v5.3.1...v5.3.2) - 2025-03-25
This release introduces support for the `Hoodi` testnet.
Release highlights:
- Ability to run the node on the `Hoodi` testnet. See https://blog.ethereum.org/2025/03/18/hoodi-holesky for more information about `Hoodi`.
- A new feature that allows treating certain blocks as invalid. This is especially useful when the network is split, allowing the node to discontinue following unwanted forks.
Testnet operators are required to update to this release. Without this release you will be unable to run the node on the `Hoodi` testnet.
Mainnet operators are recommended to update to this release at their regular cadence.
### Added
- enable SSZ for builder API calls. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14976)
- Add Hoodi testnet flag `--hoodi` to specify Hoodi testnet config and bootnodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15057)
- block_gossip topic support to the beacon api event stream. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15038)
- Added a static analyzer to discourage use of panic() in Prysm. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15075)
- Add a feature flag `--blacklist-roots` to allow the node to specify blocks that will be treated as invalid. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15030)
### Changed
- changed request object for `POST /eth/v1/beacon/states/head/validators` to omit the field if empty for satisfying other clients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15031)
- Update spec test to v1.5.0-beta.3. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15050)
- Update Gossip and RPC message limits to comply with the spec. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14799)
- Return 404 instead of 500 from API when a blob for a requested index is not found. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14845)
- Save Electra orphaned attestations into attestations pool's block attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15060)
- Removed redundant string conversion in `BeaconDbStater.State` to improve code clarity and maintainability. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15081)
### Fixed
- Update seen unaggregated att cache to properly handle Electra attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15034)
- cosmetic fix for calling `/prysm/validators/performance` when connecting to non prysm beacon nodes and removing the 404 error log. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15062)
- Tracked validator cache: Make sure not to lose the reference. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15077)
- Fixed proposing at genesis when starting post Bellatrix. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15084)
## [v5.3.1](https://github.com/prysmaticlabs/prysm/compare/v5.3.0...v5.3.1) - 2025-03-13
This release is packed with critical fixes for **Electra** and some important fixes for mainnet too.
The release highlights include:
- Ensure that deleting a block from the database clears its entry in the slot->root db index. This issue was causing some operators to have a bricked database, requiring a full resync. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15011)
- Updated go to go1.24.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- Added a feature flag to sync from an arbitrary beacon block root at startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15000)
- Updated default gas limit from 30M to 36M. Override this with `--suggested-gas-limit=` in the validator client. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14858)
Known issues in **Electra**:
- Duplicate attestations are needlessly processed. This is being addressed in [[PR]](https://github.com/prysmaticlabs/prysm/pull/15034).
Testnet operators are strongly encouraged to update to this release. There are many fixes and improvements from the Holesky upgrade incident.
Mainnet operators are recommended to update to this release at their regular cadence.
### Added
- enable E2E for minimal and mainnet tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14842)
- enable web3signer E2E for electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14936)
- Enable multiclient E2E for electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14946)
- Enable Scenario E2E tests with electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14946)
- Add endpoint for getting pending deposits. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14941)
- Add request hash to header for builder: executable data to block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14955)
- Log execution requests in each block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14956)
- Add endpoint for getting pending partial withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14949)
- Tracked validators cache: Added the `ItemCount` method. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14957)
- Tracked validators cache: Added the `Indices` method. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14957)
- Added deposit request testing for electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14964)
- Added support for otel tracing transport in HTTP clients in Prysm. This allows for tracing headers to be sent with http requests such that spans between the validator and beacon chain can be connected in the tracing graph. This change does nothing without `--enable-tracing`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14972)
- Add SSZ support to light client finality and optimistic APIs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14836)
- add log to committee index when committeebits are not the expected length of 1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14993)
- Add acceptable address types for static peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14886)
- Added a feature flag to sync from an arbitrary beacon block root at startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15000)
### Changed
- updates geth to 1.15.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14842)
- Updates blst to v3.14.0 and fixes the references in our deps.bzl file. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14921)
- Updated tracing exporter from jaeger to otelhttp. This should not be a breaking change. Jaeger supports otel format, however you may need to update your URL as the default otel-collector port is 4318. See the [OpenTelemetry Protocol Exporter docs](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) for more details. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14928)
- Don't use MaxCover for Electra on-chain attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14925)
- Tracked validators cache: Remove validators from the cache if not seen after 1 hour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14957)
- execution requests errors on ssz length have been improved. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14962)
- deprecate beacon api endpoints based on [3.0.0 release](https://github.com/ethereum/beacon-APIs/pull/506) for electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14967)
- Use go-cmp for printing better diffs for assertions.DeepEqual. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14978)
- Reorganized beacon chain flags in `--help` text into logical sections. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14959)
- `--validators-registration-batch-size`: Change default value from `0` to `200`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14981)
- Updated go to go1.24.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- Updated gosec to v2.22.1 and golangci to v1.64.5. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- Updated github.com/trailofbits/go-mutexasserts. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- Updated rules_go to cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9 to support go1.24.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- Validate blob sidecar re-order signature and bad parent block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15013)
- Updated default gas limit from 30M to 36M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14858)
- Ignore errors from `hasSeenBit` and don't pack unaggregated attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15018)
### Removed
- Remove Fulu state and block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14905)
- Removed the log summarizing all started services. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14958)
### Fixed
- fixed max and target blob per block from static to dynamic values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14911)
- refactored publish block and block ssz functions to fix gocognit. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14913)
- refactored publish blinded block and blinded block ssz to correctly deal with version headers and sent blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14913)
- Only check for electra related engine methods if electra is active. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14924)
- Fixed bug that breaks new blob storage layout code on Windows, caused by accidental use of platform-dependent path parsing package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14931)
- Fix E2E Process Deposit Evaluator for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14933)
- Fixed the `bazel run //:gazelle` command in `DEPENDENCIES.md`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14934)
- Fix E2E Deposit Activation Evaluator for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14938)
- Dedicated processing of `SingleAttestation` in the monitor service. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14965)
- adding in content type and accept headers for builder API call on registration. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14961)
- fixed gocognit in block conversions between json and proto types. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14953)
- Lint: Fix violations of S1009: should omit nil check; len() for nil slices is defined as zero. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14973)
- Lint: Fix violations of non-constant format string in call. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14974)
- Fixed violations of gosec G301. This is a check that created files and directories have file permissions 0750 and 0600 respectively. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14980)
- Check for the correct attester slashing type during gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14985)
- cosmetic fix for post electra validator logs displaying attestation committee information correctly. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14992)
- fix inserting the wrong committee index into the seen cache for electra attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14998)
- Allow any block type to be unmarshaled rather than only phase0 blocks in `slotByBlockRoot`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15008)
- Fixed pruner to not block while pruning large database by introducing batchSize. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14929)
- Decompose Electra block attestations to prevent redundant packing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14896)
- Fixed use of deprecated rand.Seed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- Fixed build issue with SszGen where the go binary was not present in the $PATH. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14969)
- fixed /eth/v1/config/spec displays BLOB_SIDECAR_SUBNET_COUNT,BLOB_SIDECAR_SUBNET_COUNT_ELECTRA. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15016)
- Ensure that deleting a block from the database clears its entry in the slot->root db index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15011)
- Broadcasting BLS to execution changes should not use the request context in a go routine. Use context.Background() for the broadcasting go routine. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15019)
- /eth/v1/validator/sync_committee_contribution should check for optimistic status and return a 503 if it's optimistic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15022)
- Fixes printing superfluous response.WriteHeader call from error in builder. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15025)
- Fixes e2e run with builder having wrong gaslimit header due to not being set on eth1 nodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15025)
- Fixed a bug in the event stream handler when processing payload attribute events where the timestamp and slot of the event would be based on the head rather than the current slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14963)
- Handle unaggregated attestations when decomposing Electra block attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15027)
## [v5.3.0](https://github.com/prysmaticlabs/prysm/compare/v5.2.0...v5.3.0) - 2025-02-12
This release includes support for Pectra activation in the [Holesky](https://github.com/eth-clients/holesky) and [Sepolia](https://github.com/eth-clients/sepolia) testnets! The release contains many fixes for Electra that have been found in rigorous testing through devnets in the last few months.

View File

@@ -4,7 +4,7 @@ Note: The latest and most up-to-date documentation can be found on our [docs por
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/prysm) drop us a line there if you want to get more involved or have any questions on our implementation!
You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/CTYGPUJ) drop us a line there if you want to get more involved or have any questions on our implementation!
> [!IMPORTANT]
> Please, **do not send pull requests for trivial changes**, such as typos, these will be rejected. These types of pull requests incur a cost to reviewers and do not provide much value to the project. If you are unsure, please open an issue first to discuss the change.
@@ -15,15 +15,15 @@ You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues)
**2. Fork the Prysm repo.**
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/OffchainLabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
**3. Create a local clone of Prysm.**
```
$ mkdir -p $GOPATH/src/github.com/OffchainLabs
$ cd $GOPATH/src/github.com/OffchainLabs
$ git clone https://github.com/OffchainLabs/prysm.git
$ cd $GOPATH/src/github.com/OffchainLabs/prysm
$ mkdir -p $GOPATH/src/github.com/prysmaticlabs
$ cd $GOPATH/src/github.com/prysmaticlabs
$ git clone https://github.com/prysmaticlabs/prysm.git
$ cd $GOPATH/src/github.com/prysmaticlabs/prysm
```
**4. Link your local clone to the fork on your GitHub repo.**
@@ -35,13 +35,13 @@ $ git remote add myprysmrepo https://github.com/<your_github_user_name>/prysm.gi
**5. Link your local clone to the Prysmatic Labs repo so that you can easily fetch future changes to the Prysmatic Labs repo.**
```
$ git remote add prysm https://github.com/OffchainLabs/prysm.git
$ git remote add prysm https://github.com/prysmaticlabs/prysm.git
$ git remote -v (you should see myrepo and prysm in the list of remotes)
```
**6. Find an issue to work on.**
Check out open issues at https://github.com/OffchainLabs/prysm/issues and pick one. Leave a comment to let the development team know that you would like to work on it. Or examine the code for areas that can be improved and leave a comment to the development team to ask if they would like you to work on it.
Check out open issues at https://github.com/prysmaticlabs/prysm/issues and pick one. Leave a comment to let the development team know that you would like to work on it. Or examine the code for areas that can be improved and leave a comment to the development team to ask if they would like you to work on it.
**7. Create a local branch with a name that clearly identifies what you will be working on.**
@@ -129,7 +129,7 @@ All PRs must include a changelog fragment file in the `changelog` directory
**17. Create a pull request.**
Navigate your browser to https://github.com/OffchainLabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base develop”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/OffchainLabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base develop”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
**18. Respond to comments by Core Contributors.**

View File

@@ -2,7 +2,7 @@
Prysm is go project with many complicated dependencies, including some c++ based libraries. There
are two parts to Prysm's dependency management. Go modules and bazel managed dependencies. Be sure
to read [Why Bazel?](https://github.com/OffchainLabs/documentation/issues/138) to fully
to read [Why Bazel?](https://github.com/prysmaticlabs/documentation/issues/138) to fully
understand the reasoning behind an additional layer of build tooling via Bazel rather than a pure
"go build" project.
@@ -60,7 +60,7 @@ bazel build //beacon-chain --config=release
Example:
```bash
go get github.com/OffchainLabs/example@v1.2.3
go get github.com/prysmaticlabs/example@v1.2.3
bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%prysm_deps -prune=true
```

View File

@@ -10,7 +10,7 @@ This README details how to setup Prysm for interop testing for usage with other
## Installation & Setup
1. Install [Bazel](https://docs.bazel.build/versions/master/install.html) **(Recommended)**
2. `git clone https://github.com/OffchainLabs/prysm && cd prysm`
2. `git clone https://github.com/prysmaticlabs/prysm && cd prysm`
3. `bazel build //cmd/...`
## Starting from Genesis

View File

@@ -1,68 +1,37 @@
<h1 align="left">Prysm: An Ethereum Consensus Implementation Written in Go</h1>
# Prysm: An Ethereum Consensus Implementation Written in Go
<div align="left">
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/OffchainLabs/prysm)](https://goreportcard.com/report/github.com/OffchainLabs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.4.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.4.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.4.0)
[![Execution_API_Version 1.0.0-beta.2](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.2-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysm)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/OffchainLabs/prysm/badge)](https://www.gitpoap.io/gh/OffchainLabs/prysm)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysmaticlabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)
</div>
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
---
### Getting Started
## 📖 Overview
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://docs.prylabs.network). If you still have questions, feel free to stop by our [Discord](https://discord.gg/prysmaticlabs).
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com).
### Staking on Mainnet
See the [Changelog](https://github.com/OffchainLabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
To participate in staking, you can join the [official eth2 launchpad](https://launchpad.ethereum.org). The launchpad is the only recommended way to become a validator on mainnet. You can explore validator rewards/penalties via Bitfly's block explorer: [beaconcha.in](https://beaconcha.in), and follow the latest blocks added to the chain on [beaconscan](https://beaconscan.com).
---
## 🚀 Getting Started
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the **[official documentation portal](https://docs.prylabs.network)**.
💬 **Need help?** Join our **[Discord Community](https://discord.gg/prysm)** for support.
---
## 🏆 Staking on Mainnet
To participate in staking, you can join the **[official Ethereum launchpad](https://launchpad.ethereum.org)**. The launchpad is the **only recommended** way to become a validator on mainnet.
🔍 Explore validator rewards/penalties:
- **[beaconcha.in](https://beaconcha.in)**
- **[beaconscan](https://beaconscan.com)**
---
## 🤝 Contributing
### 🔥 Branches
## Contributing
### Branches
Prysm maintains two permanent branches:
- **[`master`](https://github.com/OffchainLabs/prysm/tree/master)** - This points to the latest stable release. It is ideal for most users.
- **[`develop`](https://github.com/OffchainLabs/prysm/tree/develop)** - This is used for development and contains the latest PRs. Developers should base their PRs on this branch.
* [master](https://github.com/prysmaticlabs/prysm/tree/master): This points to the latest stable release. It is ideal for most users.
* [develop](https://github.com/prysmaticlabs/prysm/tree/develop): This is used for development, it contains the latest PRs. Developers should base their PRs on this branch.
### 🛠 Contribution Guide
### Guide
Want to get involved? Check out our [Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/) to learn more!
Want to get involved? Check out our **[Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/)** to learn more!
## License
---
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)
## 📜 License
## Legal Disclaimer
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.en.html)
This project is licensed under the **GNU General Public License v3.0**.
---
## ⚖️ Legal Disclaimer
📜 [Terms of Use](/TERMS_OF_SERVICE.md)
[Terms of Use](/TERMS_OF_SERVICE.md)

View File

@@ -2,10 +2,10 @@
## Supported Versions
[Releases](https://github.com/OffchainLabs/prysm/releases/) contains all available releases. We recommend using the [most recently released version](https://github.com/OffchainLabs/prysm/releases/latest).
[Releases](https://github.com/prysmaticlabs/prysm/releases/) contains all available releases. We recommend using the [most recently released version](https://github.com/prysmaticlabs/prysm/releases/latest).
## Reporting a Vulnerability
Please see our signed [security.txt](https://github.com/OffchainLabs/prysm/blob/develop/.well-known/security.txt) for preferred encryption and reporting destination.
Please see our signed [security.txt](https://github.com/prysmaticlabs/prysm/blob/develop/.well-known/security.txt) for preferred encryption and reporting destination.
**Please do not file a public ticket** mentioning the vulnerability, as doing so could increase the likelihood of the vulnerability being used before a fix has been created, released and installed on the network.

WORKSPACE
View File

@@ -1,7 +1,7 @@
workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
name = "rules_pkg",
@@ -16,6 +16,8 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "toolchains_protoc",
sha256 = "abb1540f8a9e045422730670ebb2f25b41fa56ca5a7cf795175a110a0a68f4ad",
@@ -158,15 +160,15 @@ oci_register_toolchains(
http_archive(
name = "io_bazel_rules_go",
integrity = "sha256-JD8o94crTb2DFiJJR8nMAGdBAW95zIENB4cbI+JnrI4=",
patch_args = ["-p1"],
patches = [
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
strip_prefix = "rules_go-cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9",
sha256 = "b2038e2de2cace18f032249cb4bb0048abf583a36369fa98f687af1b3f880b26",
urls = [
"https://github.com/bazel-contrib/rules_go/archive/cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.48.1/rules_go-v0.48.1.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.48.1/rules_go-v0.48.1.zip",
],
)
@@ -190,7 +192,7 @@ load("@rules_oci//oci:pull.bzl", "oci_pull")
# A multi-arch base image
oci_pull(
name = "linux_debian11_multiarch_base", # Debian bullseye
digest = "sha256:55a5e011b2c4246b4c51e01fcc2b452d151e03df052e357465f0392fcd59fddf",
digest = "sha256:b82f113425c5b5c714151aaacd8039bc141821cdcd3c65202d42bdf9c43ae60b", # 2023-12-12
image = "gcr.io/prysmaticlabs/distroless/cc-debian11",
platforms = [
"linux/amd64",
@@ -208,7 +210,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.24.6",
go_version = "1.23.5",
nogo = "@//:nogo",
)
@@ -253,18 +255,56 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.5"
consensus_spec_version = "v1.5.0-alpha.10"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
bls_test_version = "v0.1.1"
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-BXuEb1XbeSft0qzVFnoB8KC0YR1qM3ybT5lKUDbUWn8=",
"minimal": "sha256-EjwSHgBbWSoy5hm9V+A/bVMabyojaKsBNPrRtuPVq4k=",
"mainnet": "sha256-OGWMzarzaV1B9mVpy48/DCUbhjfX+b64pAxWwPLWhAs=",
},
version = consensus_spec_version,
http_archive(
name = "consensus_spec_tests_general",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.ssz_snappy",
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NtWIhbO/mVMb1edq5jqABL0o8R1tNFiuG8PCMAsUHcs=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
http_archive(
name = "consensus_spec_tests_minimal",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.ssz_snappy",
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-DFlFlnzls1bBrDm+/xD8NK2ivvkhxR+rSNVLLqScVKc=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
http_archive(
name = "consensus_spec_tests_mainnet",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.ssz_snappy",
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-G9ENPF8udZL/BqRHbi60GhFPnZDPZAH6UjcjRiOlvbk=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -278,13 +318,11 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-FQWR5EZuVcQGR0ol9vpd7eunnfGexJ/7J3xycrFEJbU=",
integrity = "sha256-ClOLKkmAcEi8/uKi6LDeqthask5+E3sgxVoA0bqmQ0c=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
bls_test_version = "v0.1.1"
http_archive(
name = "bls_spec_tests",
build_file_content = """
@@ -327,25 +365,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-YVFFrCmjoGZ3fXMWpsCpSsYbANy1grnqYwOLKIg2SsA=",
strip_prefix = "holesky-32a72e21c6e53c262f27d50dd540cb654517d03a",
url = "https://github.com/eth-clients/holesky/archive/32a72e21c6e53c262f27d50dd540cb654517d03a.tar.gz", # 2025-03-17
)
http_archive(
name = "mainnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"metadata/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NZr/gsQK9rBHRnznlPBiNzJpK8MPMrfUa3f+QYqn1+g=",
strip_prefix = "mainnet-978f1794eada6f85bee76e4d2d5959a5fb8e0cc5",
url = "https://github.com/eth-clients/mainnet/archive/978f1794eada6f85bee76e4d2d5959a5fb8e0cc5.tar.gz",
integrity = "sha256-b7ZTT+olF+VXEJYNTV5jggNtCkt9dOejm1i2VE+zy+0=",
strip_prefix = "holesky-874c199423ccd180607320c38cbaca05d9a1573a",
url = "https://github.com/eth-clients/holesky/archive/874c199423ccd180607320c38cbaca05d9a1573a.tar.gz", # 2024-06-18
)
http_archive(
@@ -359,25 +381,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-b5F7Wg9LLMqGRIpP2uqb/YsSFVn2ynzlV7g/Nb1EFLk=",
strip_prefix = "sepolia-562d9938f08675e9ba490a1dfba21fb05843f39f",
url = "https://github.com/eth-clients/sepolia/archive/562d9938f08675e9ba490a1dfba21fb05843f39f.tar.gz", # 2025-03-17
)
http_archive(
name = "hoodi_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"metadata/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-dPiEWUd8QvbYGwGtIm0QtCekitVLOLsW5rpQIGzz8PU=",
strip_prefix = "hoodi-828c2c940e1141092bd4bb979cef547ea926d272",
url = "https://github.com/eth-clients/hoodi/archive/828c2c940e1141092bd4bb979cef547ea926d272.tar.gz",
integrity = "sha256-cY/UgpCcYEhQf7JefD65FI8tn/A+rAvKhcm2/qiVdqY=",
strip_prefix = "sepolia-f2c219a93c4491cee3d90c18f2f8e82aed850eab",
url = "https://github.com/eth-clients/sepolia/archive/f2c219a93c4491cee3d90c18f2f8e82aed850eab.tar.gz", # 2024-09-19
)
http_archive(
@@ -425,7 +431,7 @@ gometalinter_dependencies()
load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")
gazelle_dependencies(go_sdk = "go_sdk")
gazelle_dependencies()
load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")

View File

@@ -7,7 +7,7 @@ go_library(
"headers.go",
"jwt.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api",
importpath = "github.com/prysmaticlabs/prysm/v5/api",
visibility = ["//visibility:public"],
deps = [
"//crypto/rand:go_default_library",

View File

@@ -1,29 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"common.go",
"header.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/apiutil",
visibility = ["//visibility:public"],
deps = [
"//consensus-types/primitives:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"common_test.go",
"header_test.go",
],
embed = [":go_default_library"],
deps = [
"//consensus-types/primitives:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -1,23 +0,0 @@
package apiutil
import (
"fmt"
neturl "net/url"
"strconv"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
)
// Uint64ToString is a util function that will convert uints to string
func Uint64ToString[T uint64 | primitives.Slot | primitives.ValidatorIndex | primitives.CommitteeIndex | primitives.Epoch](val T) string {
return strconv.FormatUint(uint64(val), 10)
}
// BuildURL is a util function that assists with adding query parameters to the url
func BuildURL(path string, queryParams ...neturl.Values) string {
if len(queryParams) == 0 {
return path
}
return fmt.Sprintf("%s?%s", path, queryParams[0].Encode())
}

View File

@@ -1,37 +0,0 @@
package apiutil
import (
"net/url"
"testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
)
func TestBeaconApiHelpers_TestUint64ToString(t *testing.T) {
const expectedResult = "1234"
const val = uint64(1234)
assert.Equal(t, expectedResult, Uint64ToString(val))
assert.Equal(t, expectedResult, Uint64ToString(primitives.Slot(val)))
assert.Equal(t, expectedResult, Uint64ToString(primitives.ValidatorIndex(val)))
assert.Equal(t, expectedResult, Uint64ToString(primitives.CommitteeIndex(val)))
assert.Equal(t, expectedResult, Uint64ToString(primitives.Epoch(val)))
}
func TestBuildURL_NoParams(t *testing.T) {
wanted := "/aaa/bbb/ccc"
actual := BuildURL("/aaa/bbb/ccc")
assert.Equal(t, wanted, actual)
}
func TestBuildURL_WithParams(t *testing.T) {
params := url.Values{}
params.Add("xxxx", "1")
params.Add("yyyy", "2")
params.Add("zzzz", "3")
wanted := "/aaa/bbb/ccc?xxxx=1&yyyy=2&zzzz=3"
actual := BuildURL("/aaa/bbb/ccc", params)
assert.Equal(t, wanted, actual)
}

View File

@@ -1,122 +0,0 @@
package apiutil
import (
"mime"
"sort"
"strconv"
"strings"
log "github.com/sirupsen/logrus"
)
type mediaRange struct {
mt string // canonicalised mediatype, e.g. "application/json"
q float64 // quality factor (0-1)
raw string // original string, useful for logging/debugging
spec int // 2=exact, 1=type/*, 0=*/*
}
func parseMediaRange(field string) (mediaRange, bool) {
field = strings.TrimSpace(field)
mt, params, err := mime.ParseMediaType(field)
if err != nil {
log.WithError(err).Debug("Failed to parse header field")
return mediaRange{}, false
}
r := mediaRange{mt: mt, q: 1, spec: 2, raw: field}
if qs, ok := params["q"]; ok {
v, err := strconv.ParseFloat(qs, 64)
if err != nil || v < 0 || v > 1 {
log.WithField("q", qs).Debug("Invalid quality factor (01)")
return mediaRange{}, false // skip invalid entry
}
r.q = v
}
switch {
case mt == "*/*":
r.spec = 0
case strings.HasSuffix(mt, "/*"):
r.spec = 1
}
return r, true
}
func hasExplicitQ(r mediaRange) bool {
return strings.Contains(strings.ToLower(r.raw), ";q=")
}
// ParseAccept returns media ranges sorted by q (desc) then specificity.
func ParseAccept(header string) []mediaRange {
if header == "" {
return []mediaRange{{mt: "*/*", q: 1, spec: 0, raw: "*/*"}}
}
var out []mediaRange
for _, field := range strings.Split(header, ",") {
if r, ok := parseMediaRange(field); ok {
out = append(out, r)
}
}
sort.SliceStable(out, func(i, j int) bool {
ei, ej := hasExplicitQ(out[i]), hasExplicitQ(out[j])
if ei != ej {
return ei // explicit beats implicit
}
if out[i].q != out[j].q {
return out[i].q > out[j].q
}
return out[i].spec > out[j].spec
})
return out
}
// Matches reports whether content type is acceptable per the header.
func Matches(header, ct string) bool {
for _, r := range ParseAccept(header) {
switch {
case r.q == 0:
continue
case r.mt == "*/*":
return true
case strings.HasSuffix(r.mt, "/*"):
if strings.HasPrefix(ct, r.mt[:len(r.mt)-1]) {
return true
}
case r.mt == ct:
return true
}
}
return false
}
// Negotiate selects the best server type according to the header.
// Returns the chosen type and true, or "", false when nothing matches.
func Negotiate(header string, serverTypes []string) (string, bool) {
for _, r := range ParseAccept(header) {
if r.q == 0 {
continue
}
for _, s := range serverTypes {
if Matches(r.mt, s) {
return s, true
}
}
}
return "", false
}
// PrimaryAcceptMatches only checks if the first accept matches
func PrimaryAcceptMatches(header, produced string) bool {
for _, r := range ParseAccept(header) {
if r.q == 0 {
continue // explicitly unacceptable; skip
}
return Matches(r.mt, produced)
}
return false
}
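For context, here is a minimal sketch of how these negotiation helpers could be wired into an HTTP handler. The handler name and listen address are hypothetical; the import path is the one shown in the apiutil BUILD file above and exists only on the side of this compare that ships the package.

```go
package main

import (
	"fmt"
	"net/http"

	// Import path taken from the apiutil BUILD file above.
	"github.com/OffchainLabs/prysm/v6/api/apiutil"
)

// negotiatingHandler is a hypothetical handler that picks between the encodings
// this server can produce, based on the client's Accept header.
func negotiatingHandler(w http.ResponseWriter, r *http.Request) {
	serverTypes := []string{"application/json", "application/octet-stream"}
	ct, ok := apiutil.Negotiate(r.Header.Get("Accept"), serverTypes)
	if !ok {
		http.Error(w, "no acceptable content type", http.StatusNotAcceptable)
		return
	}
	w.Header().Set("Content-Type", ct)
	fmt.Fprintf(w, "responding with %s\n", ct)
}

func main() {
	http.HandleFunc("/demo", negotiatingHandler)
	_ = http.ListenAndServe("localhost:8080", nil)
}
```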

View File

@@ -1,174 +0,0 @@
package apiutil
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestParseAccept(t *testing.T) {
type want struct {
mt string
q float64
spec int
}
cases := []struct {
name string
header string
want []want
}{
{
name: "empty header becomes */*;q=1",
header: "",
want: []want{{mt: "*/*", q: 1, spec: 0}},
},
{
name: "quality ordering then specificity",
header: "application/json;q=0.2, */*;q=0.1, application/xml;q=0.5, text/*;q=0.5",
want: []want{
{mt: "application/xml", q: 0.5, spec: 2},
{mt: "text/*", q: 0.5, spec: 1},
{mt: "application/json", q: 0.2, spec: 2},
{mt: "*/*", q: 0.1, spec: 0},
},
},
{
name: "invalid pieces are skipped",
header: "text/plain; q=boom, application/json",
want: []want{{mt: "application/json", q: 1, spec: 2}},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := ParseAccept(tc.header)
gotProjected := make([]want, len(got))
for i, g := range got {
gotProjected[i] = want{mt: g.mt, q: g.q, spec: g.spec}
}
require.DeepEqual(t, gotProjected, tc.want)
})
}
}
func TestMatches(t *testing.T) {
cases := []struct {
name string
accept string
ct string
matches bool
}{
{"exact match", "application/json", "application/json", true},
{"type wildcard", "application/*;q=0.8", "application/xml", true},
{"global wildcard", "*/*;q=0.1", "image/png", true},
{"explicitly unacceptable (q=0)", "text/*;q=0", "text/plain", false},
{"no match", "image/png", "application/json", false},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := Matches(tc.accept, tc.ct)
require.Equal(t, tc.matches, got)
})
}
}
func TestNegotiate(t *testing.T) {
cases := []struct {
name string
accept string
serverTypes []string
wantType string
ok bool
}{
{
name: "highest quality wins",
accept: "application/json;q=0.8,application/xml;q=0.9",
serverTypes: []string{"application/json", "application/xml"},
wantType: "application/xml",
ok: true,
},
{
name: "wildcard matches first server type",
accept: "*/*;q=0.5",
serverTypes: []string{"application/octet-stream", "application/json"},
wantType: "application/octet-stream",
ok: true,
},
{
name: "no acceptable type",
accept: "image/png",
serverTypes: []string{"application/json"},
wantType: "",
ok: false,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got, ok := Negotiate(tc.accept, tc.serverTypes)
require.Equal(t, tc.ok, ok)
require.Equal(t, tc.wantType, got)
})
}
}
func TestPrimaryAcceptMatches(t *testing.T) {
tests := []struct {
name string
accept string
produced string
expect bool
}{
{
name: "prefers json",
accept: "application/json;q=0.9,application/xml",
produced: "application/json",
expect: true,
},
{
name: "wildcard application beats other wildcard",
accept: "application/*;q=0.2,*/*;q=0.1",
produced: "application/xml",
expect: true,
},
{
name: "json wins",
accept: "application/xml;q=0.8,application/json;q=0.9",
produced: "application/json",
expect: true,
},
{
name: "json loses",
accept: "application/xml;q=0.8,application/json;q=0.9,application/octet-stream;q=0.99",
produced: "application/json",
expect: false,
},
{
name: "json wins with non q option",
accept: "application/xml;q=0.8,image/png,application/json;q=0.9",
produced: "application/json",
expect: true,
},
{
name: "json not primary",
accept: "image/png,application/json",
produced: "application/json",
expect: false,
},
{
name: "absent header",
accept: "",
produced: "text/plain",
expect: true,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := PrimaryAcceptMatches(tc.accept, tc.produced)
require.Equal(t, got, tc.expect)
})
}
}

View File

@@ -7,7 +7,7 @@ go_library(
"errors.go",
"options.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client",
importpath = "github.com/prysmaticlabs/prysm/v5/api/client",
visibility = ["//visibility:public"],
deps = ["@com_github_pkg_errors//:go_default_library"],
)

View File

@@ -5,17 +5,19 @@ go_library(
srcs = [
"client.go",
"doc.go",
"health.go",
"log.go",
"template.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/beacon",
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/beacon",
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//api/client/beacon/iface:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -25,10 +27,15 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["client_test.go"],
srcs = [
"client_test.go",
"health_test.go",
],
embed = [":go_default_library"],
deps = [
"//api/client:go_default_library",
"//api/client/beacon/testing:go_default_library",
"//testing/require:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],
)

View File

@@ -9,16 +9,19 @@ import (
"net/url"
"path"
"regexp"
"sort"
"strconv"
"text/template"
"github.com/OffchainLabs/prysm/v6/api/client"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
@@ -61,6 +64,23 @@ func IdFromSlot(s primitives.Slot) StateOrBlockId {
return StateOrBlockId(strconv.FormatUint(uint64(s), 10))
}
// idTemplate is used to create template functions that can interpolate StateOrBlockId values.
func idTemplate(ts string) func(StateOrBlockId) string {
t := template.Must(template.New("").Parse(ts))
f := func(id StateOrBlockId) string {
b := bytes.NewBuffer(nil)
err := t.Execute(b, struct{ Id string }{Id: string(id)})
if err != nil {
panic(fmt.Sprintf("invalid idTemplate: %s", ts))
}
return b.String()
}
// run the template to ensure that it is valid
// this should happen at load time (using package-scoped vars) to ensure runtime errors aren't possible
_ = f(IdGenesis)
return f
}
// RenderGetBlockPath formats a block id into a path for the GetBlock API endpoint.
func RenderGetBlockPath(id StateOrBlockId) string {
return path.Join(getSignedBlockPath, string(id))
@@ -94,6 +114,8 @@ func (c *Client) GetBlock(ctx context.Context, blockId StateOrBlockId) ([]byte,
return b, nil
}
var getBlockRootTpl = idTemplate(getBlockRootPath)
// GetBlockRoot retrieves the hash_tree_root of the BeaconBlock for the given block id.
// Block identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded blockRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
@@ -116,6 +138,8 @@ func (c *Client) GetBlockRoot(ctx context.Context, blockId StateOrBlockId) ([32]
return bytesutil.ToBytes32(rs), nil
}
var getForkTpl = idTemplate(getForkForStatePath)
// GetFork queries the Beacon Node API for the Fork from the state identified by stateId.
// Block identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded blockRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
@@ -135,6 +159,24 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
return fr.ToConsensus()
}
// GetForkSchedule retrieves all forks, past, present, and future, of which this node is aware.
func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
body, err := c.Get(ctx, getForkSchedulePath)
if err != nil {
return nil, errors.Wrap(err, "error requesting fork schedule")
}
fsr := &forkScheduleResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
}
ofs, err := fsr.OrderedForkSchedule()
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("problem unmarshaling %s response", getForkSchedulePath))
}
return ofs, nil
}
// GetConfigSpec retrieves the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
@@ -314,3 +356,31 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToEx
}
return poolResponse, nil
}
type forkScheduleResponse struct {
Data []structs.Fork
}
func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {
ofs := make(forks.OrderedSchedule, 0)
for _, d := range fsr.Data {
epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "error parsing epoch %s", d.Epoch)
}
vSlice, err := hexutil.Decode(d.CurrentVersion)
if err != nil {
return nil, err
}
if len(vSlice) != 4 {
return nil, fmt.Errorf("got %d byte version, expected 4 bytes. version hex=%s", len(vSlice), d.CurrentVersion)
}
version := bytesutil.ToBytes4(vSlice)
ofs = append(ofs, forks.ForkScheduleEntry{
Version: version,
Epoch: primitives.Epoch(epoch),
})
}
sort.Sort(ofs)
return ofs, nil
}
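The fork-schedule decoding added above reduces to: each entry carries a decimal epoch string and a 0x-prefixed 4-byte version. A standalone sketch that mirrors those checks using only the standard library (the response body below is illustrative, and Prysm's own types are deliberately not used):

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

type forkEntry struct {
	CurrentVersion string `json:"current_version"`
	Epoch          string `json:"epoch"`
}

func main() {
	// Body shaped like the /eth/v1/config/fork_schedule response; values are made up.
	body := []byte(`{"data":[{"current_version":"0x01000000","epoch":"74240"},{"current_version":"0x02000000","epoch":"144896"}]}`)

	var resp struct {
		Data []forkEntry `json:"data"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	for _, d := range resp.Data {
		epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
		if err != nil {
			panic(err)
		}
		v, err := hex.DecodeString(strings.TrimPrefix(d.CurrentVersion, "0x"))
		if err != nil || len(v) != 4 {
			panic(fmt.Sprintf("expected a 4-byte version, got %q", d.CurrentVersion))
		}
		fmt.Printf("fork version %x activates at epoch %d\n", v, epoch)
	}
}
```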

View File

@@ -4,8 +4,8 @@ import (
"net/url"
"testing"
"github.com/OffchainLabs/prysm/v6/api/client"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestParseNodeVersion(t *testing.T) {

View File

@@ -0,0 +1,60 @@
package beacon
import (
"context"
"sync"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface"
)
type NodeHealthTracker struct {
isHealthy *bool
healthChan chan bool
node iface.HealthNode
sync.RWMutex
}
func NewNodeHealthTracker(node iface.HealthNode) *NodeHealthTracker {
return &NodeHealthTracker{
node: node,
healthChan: make(chan bool, 1),
}
}
// HealthUpdates provides a read-only channel for health updates.
func (n *NodeHealthTracker) HealthUpdates() <-chan bool {
return n.healthChan
}
func (n *NodeHealthTracker) IsHealthy() bool {
n.RLock()
defer n.RUnlock()
if n.isHealthy == nil {
return false
}
return *n.isHealthy
}
func (n *NodeHealthTracker) CheckHealth(ctx context.Context) bool {
n.Lock()
defer n.Unlock()
newStatus := n.node.IsHealthy(ctx)
if n.isHealthy == nil {
n.isHealthy = &newStatus
}
isStatusChanged := newStatus != *n.isHealthy
if isStatusChanged {
// Update the health status
n.isHealthy = &newStatus
// Send the new status to the health channel, potentially overwriting the existing value
select {
case <-n.healthChan:
n.healthChan <- newStatus
default:
n.healthChan <- newStatus
}
}
return newStatus
}
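A hedged sketch of how a caller might drive this tracker: poll CheckHealth on a ticker and react to HealthUpdates, which only delivers a value when the status flips. The flappingNode type below is a stand-in for a real beacon-node health check and is not part of the package above.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/prysmaticlabs/prysm/v5/api/client/beacon"
)

// flappingNode alternates between healthy and unhealthy so the change channel
// has something to report; a real caller would wrap the node's health endpoint.
type flappingNode struct{ healthy bool }

func (f *flappingNode) IsHealthy(_ context.Context) bool {
	f.healthy = !f.healthy
	return f.healthy
}

func main() {
	tracker := beacon.NewNodeHealthTracker(&flappingNode{})

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			tracker.CheckHealth(ctx) // poll the node and record the result
		case healthy := <-tracker.HealthUpdates(): // delivered only when the status flips
			log.Printf("beacon node health changed: healthy=%v", healthy)
		}
	}
}
```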

View File

@@ -0,0 +1,112 @@
package beacon
import (
"context"
"sync"
"testing"
healthTesting "github.com/prysmaticlabs/prysm/v5/api/client/beacon/testing"
"go.uber.org/mock/gomock"
)
func TestNodeHealth_IsHealthy(t *testing.T) {
tests := []struct {
name string
isHealthy bool
want bool
}{
{"initially healthy", true, true},
{"initially unhealthy", false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
n := &NodeHealthTracker{
isHealthy: &tt.isHealthy,
healthChan: make(chan bool, 1),
}
if got := n.IsHealthy(); got != tt.want {
t.Errorf("IsHealthy() = %v, want %v", got, tt.want)
}
})
}
}
func TestNodeHealth_UpdateNodeHealth(t *testing.T) {
tests := []struct {
name string
initial bool // Initial health status
newStatus bool // Status to update to
shouldSend bool // Should a message be sent through the channel
}{
{"healthy to unhealthy", true, false, true},
{"unhealthy to healthy", false, true, true},
{"remain healthy", true, true, false},
{"remain unhealthy", false, false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := healthTesting.NewMockHealthClient(ctrl)
client.EXPECT().IsHealthy(gomock.Any()).Return(tt.newStatus)
n := &NodeHealthTracker{
isHealthy: &tt.initial,
node: client,
healthChan: make(chan bool, 1),
}
s := n.CheckHealth(context.Background())
// Check if health status was updated
if s != tt.newStatus {
t.Errorf("UpdateNodeHealth() failed to update isHealthy from %v to %v", tt.initial, tt.newStatus)
}
select {
case status := <-n.HealthUpdates():
if !tt.shouldSend {
t.Errorf("UpdateNodeHealth() unexpectedly sent status %v to HealthCh", status)
} else if status != tt.newStatus {
t.Errorf("UpdateNodeHealth() sent wrong status %v, want %v", status, tt.newStatus)
}
default:
if tt.shouldSend {
t.Error("UpdateNodeHealth() did not send any status to HealthCh when expected")
}
}
})
}
}
func TestNodeHealth_Concurrency(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := healthTesting.NewMockHealthClient(ctrl)
n := NewNodeHealthTracker(client)
var wg sync.WaitGroup
// Number of goroutines to spawn for both reading and writing
numGoroutines := 6
wg.Add(numGoroutines * 2) // for readers and writers
// Concurrently update health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
client.EXPECT().IsHealthy(gomock.Any()).Return(false).Times(1)
n.CheckHealth(context.Background())
client.EXPECT().IsHealthy(gomock.Any()).Return(true).Times(1)
n.CheckHealth(context.Background())
}()
}
// Concurrently read health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
_ = n.IsHealthy() // Just read the value
}()
}
wg.Wait() // Wait for all goroutines to finish
}

View File

@@ -2,7 +2,7 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["fuzz.go"],
importpath = "github.com/OffchainLabs/prysm/v6/testing/fuzz",
srcs = ["health.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface",
visibility = ["//visibility:public"],
)

View File

@@ -0,0 +1,13 @@
package iface
import "context"
type HealthTracker interface {
HealthUpdates() <-chan bool
IsHealthy() bool
CheckHealth(ctx context.Context) bool
}
type HealthNode interface {
IsHealthy(ctx context.Context) bool
}

View File

@@ -1,34 +0,0 @@
package beacon
import (
"bytes"
"fmt"
"text/template"
)
type templateFn func(StateOrBlockId) string
var getBlockRootTpl templateFn
var getForkTpl templateFn
func init() {
// idTemplate is used to create template functions that can interpolate StateOrBlockId values.
idTemplate := func(ts string) func(StateOrBlockId) string {
t := template.Must(template.New("").Parse(ts))
f := func(id StateOrBlockId) string {
b := bytes.NewBuffer(nil)
err := t.Execute(b, struct{ Id string }{Id: string(id)})
if err != nil {
panic(fmt.Sprintf("invalid idTemplate: %s", ts))
}
return b.String()
}
// run the template to ensure that it is valid
// this should happen load time (using package scoped vars) to ensure runtime errors aren't possible
_ = f(IdGenesis)
return f
}
getBlockRootTpl = idTemplate(getBlockRootPath)
getForkTpl = idTemplate(getForkForStatePath)
}

View File

@@ -0,0 +1,12 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["mock.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/beacon/testing",
visibility = ["//visibility:public"],
deps = [
"//api/client/beacon/iface:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],
)

View File

@@ -0,0 +1,59 @@
package testing
import (
"context"
"reflect"
"sync"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface"
"go.uber.org/mock/gomock"
)
var (
_ = iface.HealthNode(&MockHealthClient{})
)
// MockHealthClient is a mock of HealthClient interface.
type MockHealthClient struct {
ctrl *gomock.Controller
recorder *MockHealthClientMockRecorder
sync.Mutex
}
// MockHealthClientMockRecorder is the mock recorder for MockHealthClient.
type MockHealthClientMockRecorder struct {
mock *MockHealthClient
}
// IsHealthy mocks base method.
func (m *MockHealthClient) IsHealthy(arg0 context.Context) bool {
m.Lock()
defer m.Unlock()
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret0, ok := ret[0].(bool)
if !ok {
return false
}
return ret0
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockHealthClient) EXPECT() *MockHealthClientMockRecorder {
return m.recorder
}
// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockHealthClientMockRecorder) IsHealthy(arg0 any) *gomock.Call {
mr.mock.Lock()
defer mr.mock.Unlock()
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockHealthClient)(nil).IsHealthy), arg0)
}
// NewMockHealthClient creates a new mock instance.
func NewMockHealthClient(ctrl *gomock.Controller) *MockHealthClient {
mock := &MockHealthClient{ctrl: ctrl}
mock.recorder = &MockHealthClientMockRecorder{mock}
return mock
}

View File

@@ -8,11 +8,12 @@ go_library(
"errors.go",
"types.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/builder",
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -27,11 +28,11 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -50,7 +51,6 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",

View File

@@ -1,14 +1,14 @@
package builder
import (
consensus_types "github.com/OffchainLabs/prysm/v6/consensus-types"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
ssz "github.com/prysmaticlabs/fastssz"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// SignedBid is an interface describing the method set of a signed builder bid.

View File

@@ -12,21 +12,19 @@ import (
"strings"
"text/template"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/client"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
const (
@@ -36,14 +34,9 @@ const (
postRegisterValidatorPath = "/eth/v1/builder/validators"
)
var (
vrExample = &ethpb.SignedValidatorRegistrationV1{}
vrSize = vrExample.SizeSSZ()
errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
errMalformedRequest = errors.New("required request data are missing")
errNotBlinded = errors.New("submitted block is not blinded")
errVersionUnsupported = errors.New("version is not supported")
)
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
var errNotBlinded = errors.New("submitted block is not blinded")
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
@@ -58,12 +51,6 @@ func WithObserver(m observer) ClientOpt {
}
}
func WithSSZ() ClientOpt {
return func(c *Client) {
c.sszEnabled = true
}
}
type requestLogger struct{}
func (*requestLogger) observe(r *http.Request) (e error) {
@@ -72,7 +59,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
log.WithFields(log.Fields{
"bodyBase64": "(nil value)",
"url": r.URL.String(),
}).Info("Builder http request")
}).Info("builder http request")
return nil
}
t := io.TeeReader(r.Body, b)
@@ -89,7 +76,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
log.WithFields(log.Fields{
"bodyBase64": string(body),
"url": r.URL.String(),
}).Info("Builder http request")
}).Info("builder http request")
return nil
}
@@ -101,17 +88,15 @@ type BuilderClient interface {
NodeURL() string
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error)
RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) error
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
Status(ctx context.Context) error
}
// Client provides a collection of helper methods for calling Builder API endpoints.
type Client struct {
hc *http.Client
baseURL *url.URL
obvs []observer
sszEnabled bool
hc *http.Client
baseURL *url.URL
obvs []observer
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
@@ -123,7 +108,7 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
return nil, err
}
c := &Client{
hc: &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)},
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
@@ -153,8 +138,7 @@ func (c *Client) NodeURL() string {
type reqOption func(*http.Request)
// do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package api/client/builder.
// It validates that the HTTP response status matches the expectedStatus parameter.
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, expectedStatus int, opts ...reqOption) (res []byte, header http.Header, err error) {
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) (res []byte, err error) {
ctx, span := trace.StartSpan(ctx, "builder.client.do")
defer func() {
tracing.AnnotateError(span, err)
@@ -170,6 +154,10 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
if err != nil {
return
}
if method == http.MethodPost {
req.Header.Set("Content-Type", api.JsonMediaType)
}
req.Header.Set("Accept", api.JsonMediaType)
req.Header.Add("User-Agent", version.BuildData())
for _, o := range opts {
o(req)
@@ -189,8 +177,8 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
log.WithError(closeErr).Error("Failed to close response body")
}
}()
if r.StatusCode != expectedStatus {
err = unexpectedStatusErr(r, expectedStatus)
if r.StatusCode != http.StatusOK {
err = non200Err(r)
return
}
res, err = io.ReadAll(io.LimitReader(r.Body, client.MaxBodySize))
@@ -198,7 +186,6 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
err = errors.Wrap(err, "error reading http response body from builder server")
return
}
header = r.Header
return
}
@@ -228,145 +215,64 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
if err != nil {
return nil, err
}
var getOpts reqOption
if c.sszEnabled {
getOpts = func(r *http.Request) {
r.Header.Set("Accept", api.OctetStreamMediaType)
}
} else {
getOpts = func(r *http.Request) {
r.Header.Set("Accept", api.JsonMediaType)
}
}
data, header, err := c.do(ctx, http.MethodGet, path, nil, http.StatusOK, getOpts)
hb, err := c.do(ctx, http.MethodGet, path, nil)
if err != nil {
return nil, errors.Wrap(err, "error getting header from builder server")
return nil, err
}
v := &VersionResponse{}
if err := json.Unmarshal(hb, v); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
bid, err := c.parseHeaderResponse(data, header, slot)
ver, err := version.FromString(strings.ToLower(v.Version))
if err != nil {
return nil, errors.Wrapf(
err,
"error rendering exec header template with slot=%d, parentHash=%#x, pubkey=%#x",
slot,
parentHash,
pubkey,
)
return nil, errors.Wrap(err, fmt.Sprintf("unsupported header version %s", strings.ToLower(v.Version)))
}
return bid, nil
}
func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primitives.Slot) (SignedBid, error) {
var versionHeader string
if c.sszEnabled || header.Get(api.VersionHeader) != "" {
versionHeader = header.Get(api.VersionHeader)
} else {
// If we don't have a version header, attempt to parse JSON for version
v := &VersionResponse{}
if err := json.Unmarshal(data, v); err != nil {
return nil, errors.Wrap(
err,
"error unmarshaling builder GetHeader response",
)
}
versionHeader = strings.ToLower(v.Version)
}
ver, err := version.FromString(versionHeader)
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("unsupported header version %s", versionHeader))
}
if ver >= version.Electra {
return c.parseHeaderElectra(data, slot)
hr := &ExecHeaderResponseElectra{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrapf(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBidElectra(p)
}
if ver >= version.Deneb {
return c.parseHeaderDeneb(data)
hr := &ExecHeaderResponseDeneb{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrapf(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBidDeneb(p)
}
if ver >= version.Capella {
return c.parseHeaderCapella(data)
hr := &ExecHeaderResponseCapella{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrapf(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBidCapella(p)
}
if ver >= version.Bellatrix {
return c.parseHeaderBellatrix(data)
}
return nil, fmt.Errorf("unsupported header version %s", versionHeader)
}
func (c *Client) parseHeaderElectra(data []byte, slot primitives.Slot) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBidElectra{}
if err := sb.UnmarshalSSZ(data); err != nil {
return nil, errors.Wrap(err, "could not unmarshal SignedBuilderBidElectra SSZ")
hr := &ExecHeaderResponse{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
return WrappedSignedBuilderBidElectra(sb)
}
hr := &ExecHeaderResponseElectra{}
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseElectra JSON")
}
p, err := hr.ToProto(slot)
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponseElectra to proto")
}
return WrappedSignedBuilderBidElectra(p)
}
func (c *Client) parseHeaderDeneb(data []byte) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBidDeneb{}
if err := sb.UnmarshalSSZ(data); err != nil {
return nil, errors.Wrap(err, "could not unmarshal SignedBuilderBidDeneb SSZ")
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrap(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBidDeneb(sb)
return WrappedSignedBuilderBid(p)
}
hr := &ExecHeaderResponseDeneb{}
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseDeneb JSON")
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponseDeneb to proto")
}
return WrappedSignedBuilderBidDeneb(p)
}
func (c *Client) parseHeaderCapella(data []byte) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBidCapella{}
if err := sb.UnmarshalSSZ(data); err != nil {
return nil, errors.Wrap(err, "could not unmarshal SignedBuilderBidCapella SSZ")
}
return WrappedSignedBuilderBidCapella(sb)
}
hr := &ExecHeaderResponseCapella{}
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseCapella JSON")
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponseCapella to proto")
}
return WrappedSignedBuilderBidCapella(p)
}
func (c *Client) parseHeaderBellatrix(data []byte) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBid{}
if err := sb.UnmarshalSSZ(data); err != nil {
return nil, errors.Wrap(err, "could not unmarshal SignedBuilderBid SSZ")
}
return WrappedSignedBuilderBid(sb)
}
hr := &ExecHeaderResponse{}
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponse JSON")
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponse to proto")
}
return WrappedSignedBuilderBid(p)
return nil, fmt.Errorf("unsupported header version %s", strings.ToLower(v.Version))
}
// RegisterValidator encodes the SignedValidatorRegistrationV1 message to json (including hex-encoding the byte
@@ -381,274 +287,70 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
tracing.AnnotateError(span, err)
return err
}
var (
body []byte
err error
postOpts reqOption
)
if c.sszEnabled {
postOpts = func(r *http.Request) {
r.Header.Set("Content-Type", api.OctetStreamMediaType)
r.Header.Set("Accept", api.OctetStreamMediaType)
}
body, err = sszValidatorRegisterRequest(svr)
if err != nil {
err := errors.Wrap(err, "error ssz encoding the SignedValidatorRegistration value body in RegisterValidator")
tracing.AnnotateError(span, err)
return err
}
} else {
postOpts = func(r *http.Request) {
r.Header.Set("Content-Type", api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
body, err = jsonValidatorRegisterRequest(svr)
if err != nil {
err := errors.Wrap(err, "error json encoding the SignedValidatorRegistration value body in RegisterValidator")
tracing.AnnotateError(span, err)
return err
}
}
if _, _, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body), http.StatusOK, postOpts); err != nil {
return errors.Wrap(err, "do")
}
log.WithField("registrationCount", len(svr)).Debug("Successfully registered validator(s) on builder")
return nil
}
func jsonValidatorRegisterRequest(svr []*ethpb.SignedValidatorRegistrationV1) ([]byte, error) {
vs := make([]*structs.SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
vs[i] = structs.SignedValidatorRegistrationFromConsensus(svr[i])
}
body, err := json.Marshal(vs)
if err != nil {
return nil, err
err := errors.Wrap(err, "error encoding the SignedValidatorRegistration value body in RegisterValidator")
tracing.AnnotateError(span, err)
return err
}
return body, nil
}
func sszValidatorRegisterRequest(svr []*ethpb.SignedValidatorRegistrationV1) ([]byte, error) {
if uint64(len(svr)) > params.BeaconConfig().ValidatorRegistryLimit {
return nil, errors.Wrap(errMalformedRequest, "validator registry limit exceeded")
_, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body))
if err != nil {
return err
}
ssz := make([]byte, vrSize*len(svr))
for i, vr := range svr {
sszrep, err := vr.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to marshal validator registry ssz")
}
copy(ssz[i*vrSize:(i+1)*vrSize], sszrep)
}
return ssz, nil
log.WithField("registrationCount", len(svr)).Debug("Successfully registered validator(s) on builder")
return nil
}
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")
func getVersionsBlockToPayload(blockVersion int) (int, error) {
if blockVersion >= version.Fulu {
return version.Fulu, nil
}
if blockVersion >= version.Deneb {
return version.Deneb, nil
}
if blockVersion == version.Capella {
return version.Capella, nil
}
if blockVersion == version.Bellatrix {
return version.Bellatrix, nil
}
return 0, errors.Wrapf(errVersionUnsupported, "block version %d", blockVersion)
}
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full execution payload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
body, postOpts, err := c.buildBlindedBlockRequest(sb)
if err != nil {
return nil, nil, err
}
// post the blinded block - the execution payload response should contain the unblinded payload, along with the
// blobs bundle if it is post deneb.
data, header, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), http.StatusOK, postOpts)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the blinded block to the builder api")
}
ver, err := c.checkBlockVersion(data, header)
if err != nil {
return nil, nil, err
}
expectedPayloadVer, err := getVersionsBlockToPayload(sb.Version())
if err != nil {
return nil, nil, err
}
gotPayloadVer, err := getVersionsBlockToPayload(ver)
if err != nil {
return nil, nil, err
}
if expectedPayloadVer != gotPayloadVer {
return nil, nil, errors.Wrapf(errResponseVersionMismatch, "expected payload version %d, got %d", expectedPayloadVer, gotPayloadVer)
}
ed, blobs, err := c.parseBlindedBlockResponse(data, ver)
if err != nil {
return nil, nil, err
}
return ed, blobs, nil
}
// SubmitBlindedBlockPostFulu calls the builder API endpoint post-Fulu where relays only return status codes.
// This method is used after the Fulu fork when MEV-boost relays no longer return execution payloads.
func (c *Client) SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) error {
body, postOpts, err := c.buildBlindedBlockRequest(sb)
if err != nil {
return err
}
// Post the blinded block - the response should only contain a status code (no payload)
_, _, err = c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), http.StatusAccepted, postOpts)
if err != nil {
return errors.Wrap(err, "error posting the blinded block to the builder api post-Fulu")
}
// Success is indicated by no error (status 202)
return nil
}
func (c *Client) checkBlockVersion(respBytes []byte, header http.Header) (int, error) {
var versionHeader string
if c.sszEnabled {
versionHeader = strings.ToLower(header.Get(api.VersionHeader))
} else {
// fallback to JSON-based version extraction
v := &VersionResponse{}
if err := json.Unmarshal(respBytes, v); err != nil {
return 0, errors.Wrapf(err, "error unmarshaling JSON version fallback")
}
versionHeader = strings.ToLower(v.Version)
}
ver, err := version.FromString(versionHeader)
if err != nil {
return 0, errors.Wrapf(err, "unsupported header version %s", versionHeader)
}
return ver, nil
}
// Helper: build request body for SubmitBlindedBlock
func (c *Client) buildBlindedBlockRequest(sb interfaces.ReadOnlySignedBeaconBlock) ([]byte, reqOption, error) {
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
if !sb.IsBlinded() {
return nil, nil, errNotBlinded
}
if c.sszEnabled {
body, err := sb.MarshalSSZ()
if err != nil {
return nil, nil, errors.Wrap(err, "could not marshal SSZ for blinded block")
}
opt := func(r *http.Request) {
r.Header.Set(api.VersionHeader, version.String(sb.Version()))
r.Header.Set("Content-Type", api.OctetStreamMediaType)
r.Header.Set("Accept", api.OctetStreamMediaType)
}
return body, opt, nil
}
// massage the proto struct type data into the api response type.
mj, err := structs.SignedBeaconBlockMessageJsoner(sb)
if err != nil {
return nil, nil, errors.Wrap(err, "error generating blinded beacon block post request")
}
body, err := json.Marshal(mj)
if err != nil {
return nil, nil, errors.Wrap(err, "error marshaling blinded block to JSON")
return nil, nil, errors.Wrap(err, "error marshaling blinded block post request to json")
}
opt := func(r *http.Request) {
r.Header.Set(api.VersionHeader, version.String(sb.Version()))
postOpts := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(sb.Version()))
r.Header.Set("Content-Type", api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
return body, opt, nil
}
// Helper: parse the response returned by SubmitBlindedBlock
func (c *Client) parseBlindedBlockResponse(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, v1.BlobsBundler, error) {
if c.sszEnabled {
return c.parseBlindedBlockResponseSSZ(respBytes, forkVersion)
// post the blinded block - the execution payload response should contain the unblinded payload, along with the
// blobs bundle if it is post deneb.
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), postOpts)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the blinded block to the builder api")
}
return c.parseBlindedBlockResponseJSON(respBytes, forkVersion)
}
func (c *Client) parseBlindedBlockResponseSSZ(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, v1.BlobsBundler, error) {
if forkVersion >= version.Fulu {
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundleV2{}
if err := payloadAndBlobs.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadDenebAndBlobsBundleV2 SSZ")
}
ed, err := blocks.NewWrappedExecutionData(payloadAndBlobs.Payload)
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to wrap execution data for %s", version.String(forkVersion))
}
return ed, payloadAndBlobs.BlobsBundle, nil
} else if forkVersion >= version.Deneb {
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundle{}
if err := payloadAndBlobs.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadDenebAndBlobsBundle SSZ")
}
ed, err := blocks.NewWrappedExecutionData(payloadAndBlobs.Payload)
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to wrap execution data for %s", version.String(forkVersion))
}
return ed, payloadAndBlobs.BlobsBundle, nil
} else if forkVersion >= version.Capella {
payload := &v1.ExecutionPayloadCapella{}
if err := payload.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadCapella SSZ")
}
ed, err := blocks.NewWrappedExecutionData(payload)
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to wrap execution data for %s", version.String(forkVersion))
}
return ed, nil, nil
} else if forkVersion >= version.Bellatrix {
payload := &v1.ExecutionPayload{}
if err := payload.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayload SSZ")
}
ed, err := blocks.NewWrappedExecutionData(payload)
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to wrap execution data for %s", version.String(forkVersion))
}
return ed, nil, nil
} else {
return nil, nil, fmt.Errorf("unsupported header version %s", version.String(forkVersion))
}
}
func (c *Client) parseBlindedBlockResponseJSON(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
// ExecutionPayloadResponse parses just the outer container and the Value key, enabling it to use the .Value
// key to determine which underlying data type to use to finish the unmarshaling.
ep := &ExecutionPayloadResponse{}
if err := json.Unmarshal(respBytes, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling ExecutionPayloadResponse")
if err := json.Unmarshal(rb, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling the builder ExecutionPayloadResponse")
}
if strings.ToLower(ep.Version) != version.String(sb.Version()) {
return nil, nil, errors.Wrapf(errResponseVersionMismatch, "req=%s, recv=%s", strings.ToLower(ep.Version), version.String(sb.Version()))
}
// This parses the rest of the response and returns the inner data field.
pp, err := ep.ParsePayload()
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to parse payload with version=%s", ep.Version)
return nil, nil, errors.Wrapf(err, "failed to parse execution payload from builder with version=%s", ep.Version)
}
// Get the payload as a proto.Message so it can be wrapped as an execution payload interface.
pb, err := pp.PayloadProto()
if err != nil {
return nil, nil, err
@@ -657,13 +359,11 @@ func (c *Client) parseBlindedBlockResponseJSON(
if err != nil {
return nil, nil, err
}
// Check if it contains blobs
bb, ok := pp.(BlobBundler)
if ok {
bbpb, err := bb.BundleProto()
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to extract blobs bundle from version=%s", ep.Version)
return nil, nil, errors.Wrapf(err, "failed to extract blobs bundle from builder response with version=%s", ep.Version)
}
return ed, bbpb, nil
}
@@ -674,14 +374,11 @@ func (c *Client) parseBlindedBlockResponseJSON(
// response, and an error response may have an error message. This method will return a nil value for error in the
// happy path, and an error with information about the server response body for a non-200 response.
func (c *Client) Status(ctx context.Context) error {
getOpts := func(r *http.Request) {
r.Header.Set("Accept", api.JsonMediaType)
}
_, _, err := c.do(ctx, http.MethodGet, getStatus, nil, http.StatusOK, getOpts)
_, err := c.do(ctx, http.MethodGet, getStatus, nil)
return err
}
func unexpectedStatusErr(response *http.Response, expected int) error {
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(io.LimitReader(response.Body, client.MaxErrBodySize))
var errMessage ErrorMessage
var body string
@@ -690,20 +387,8 @@ func unexpectedStatusErr(response *http.Response, expected int) error {
} else {
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("expected=%d, got=%d, url=%s, body=%s", expected, response.StatusCode, response.Request.URL, body)
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case http.StatusUnsupportedMediaType:
log.WithError(ErrUnsupportedMediaType).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrUnsupportedMediaType, errMessage.Message)
case http.StatusNotAcceptable:
log.WithError(ErrNotAcceptable).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotAcceptable, errMessage.Message)
case http.StatusNoContent:
log.WithError(ErrNoContent).Debug(msg)
return ErrNoContent
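For orientation, a minimal sketch of exercising this builder client end to end. The relay URL is hypothetical, the v5 import paths from this side of the compare are used (the other side uses OffchainLabs/prysm/v6), and the zero-valued parent hash and pubkey only illustrate the call shape; real callers derive them from the head block and the proposer's registration.

```go
package main

import (
	"context"
	"log"

	"github.com/prysmaticlabs/prysm/v5/api/client/builder"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

func main() {
	// The hostname must include a port, per errMalformedHostname above.
	c, err := builder.NewClient("relay.example.com:18550")
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	if err := c.Status(ctx); err != nil {
		log.Fatalf("builder not reachable: %v", err)
	}

	var parentHash [32]byte
	var pubkey [48]byte
	bid, err := c.GetHeader(ctx, primitives.Slot(123), parentHash, pubkey)
	if err != nil {
		log.Fatalf("no header available: %v", err)
	}
	_ = bid // a SignedBid wrapping the fork-specific builder bid type
	log.Println("received a signed builder bid")
}
```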

File diff suppressed because it is too large

View File

@@ -15,9 +15,3 @@ var ErrBadRequest = errors.Wrap(ErrNotOK, "recv 400 BadRequest response from API
// ErrNoContent specifically means that a '204 - No Content' response was received from the API.
// Typically, a 204 is a success, but for the Header API it means no header is available.
var ErrNoContent = errors.New("recv 204 no content response from API, No header is available")
// ErrUnsupportedMediaType specifically means that a '415 - Unsupported Media Type' was received from the API.
var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content-Type\" header is unsupported, and the request has been rejected. This occurs when a HTTP request supplies a payload in a content-type that the server is not able to handle.")
// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")

View File

@@ -3,7 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["mock.go"],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/builder/testing",
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/builder/testing",
visibility = ["//visibility:public"],
deps = [
"//api/client/builder:go_default_library",

View File

@@ -3,12 +3,12 @@ package testing
import (
"context"
"github.com/OffchainLabs/prysm/v6/api/client/builder"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/api/client/builder"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// MockClient is a mock implementation of BuilderClient.
@@ -41,15 +41,10 @@ func (m MockClient) RegisterValidator(_ context.Context, svr []*ethpb.SignedVali
}
// SubmitBlindedBlock --
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
return nil, nil, nil
}
// SubmitBlindedBlockPostFulu --
func (MockClient) SubmitBlindedBlockPostFulu(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) error {
return nil
}
// Status --
func (MockClient) Status(_ context.Context) error {
return nil

File diff suppressed because it is too large

View File

@@ -11,18 +11,18 @@ import (
"os"
"testing"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/math"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/math"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func ezDecode(t *testing.T, s string) []byte {
@@ -328,72 +328,72 @@ func TestExecutionHeaderResponseUnmarshal(t *testing.T) {
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ParentHash,
actual: hexutil.Encode(hr.Data.Message.Header.ParentHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hr.Data.Message.Header.FeeRecipient,
actual: hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
name: "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.StateRoot,
actual: hexutil.Encode(hr.Data.Message.Header.StateRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ReceiptsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hr.Data.Message.Header.LogsBloom,
actual: hexutil.Encode(hr.Data.Message.Header.LogsBloom),
name: "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.PrevRandao,
actual: hexutil.Encode(hr.Data.Message.Header.PrevRandao),
name: "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
},
{
expected: "1",
actual: hr.Data.Message.Header.BlockNumber,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasLimit,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasUsed,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
},
{
expected: "1",
actual: hr.Data.Message.Header.Timestamp,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
name: "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ExtraData,
actual: hexutil.Encode(hr.Data.Message.Header.ExtraData),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: hr.Data.Message.Header.BaseFeePerGas,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.BlockHash,
actual: hexutil.Encode(hr.Data.Message.Header.BlockHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.TransactionsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
},
}
@@ -427,77 +427,77 @@ func TestExecutionHeaderResponseCapellaUnmarshal(t *testing.T) {
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ParentHash,
actual: hexutil.Encode(hr.Data.Message.Header.ParentHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hr.Data.Message.Header.FeeRecipient,
actual: hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
name: "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.StateRoot,
actual: hexutil.Encode(hr.Data.Message.Header.StateRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ReceiptsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hr.Data.Message.Header.LogsBloom,
actual: hexutil.Encode(hr.Data.Message.Header.LogsBloom),
name: "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.PrevRandao,
actual: hexutil.Encode(hr.Data.Message.Header.PrevRandao),
name: "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
},
{
expected: "1",
actual: hr.Data.Message.Header.BlockNumber,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasLimit,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasUsed,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
},
{
expected: "1",
actual: hr.Data.Message.Header.Timestamp,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
name: "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ExtraData,
actual: hexutil.Encode(hr.Data.Message.Header.ExtraData),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: hr.Data.Message.Header.BaseFeePerGas,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.BlockHash,
actual: hexutil.Encode(hr.Data.Message.Header.BlockHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.TransactionsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.WithdrawalsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.WithdrawalsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.WithdrawalsRoot",
},
}
@@ -867,6 +867,88 @@ var testExampleExecutionPayloadDenebDifferentProofCount = fmt.Sprintf(`{
}
}`, hexutil.Encode(make([]byte, fieldparams.BlobLength)))
func TestExecutionPayloadResponseUnmarshal(t *testing.T) {
epr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
cases := []struct {
expected string
actual string
name string
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hexutil.Encode(epr.Data.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hexutil.Encode(epr.Data.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
}
for _, c := range cases {
require.Equal(t, c.expected, c.actual, fmt.Sprintf("unexpected value for field %s", c.name))
}
require.Equal(t, 1, len(epr.Data.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
}
func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
epr := &ExecPayloadResponseCapella{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadCapella), epr))
@@ -877,67 +959,67 @@ func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ParentHash,
actual: hexutil.Encode(epr.Data.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: epr.Data.FeeRecipient,
actual: hexutil.Encode(epr.Data.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.StateRoot,
actual: hexutil.Encode(epr.Data.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ReceiptsRoot,
actual: hexutil.Encode(epr.Data.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: epr.Data.LogsBloom,
actual: hexutil.Encode(epr.Data.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.PrevRandao,
actual: hexutil.Encode(epr.Data.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: epr.Data.BlockNumber,
actual: fmt.Sprintf("%d", epr.Data.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: epr.Data.GasLimit,
actual: fmt.Sprintf("%d", epr.Data.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: epr.Data.GasUsed,
actual: fmt.Sprintf("%d", epr.Data.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: epr.Data.Timestamp,
actual: fmt.Sprintf("%d", epr.Data.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExtraData,
actual: hexutil.Encode(epr.Data.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: epr.Data.BaseFeePerGas,
actual: fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.BlockHash,
actual: hexutil.Encode(epr.Data.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
}
@@ -946,14 +1028,14 @@ func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
}
require.Equal(t, 1, len(epr.Data.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, epr.Data.Transactions[0])
require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
require.Equal(t, 1, len(epr.Data.Withdrawals))
w := epr.Data.Withdrawals[0]
assert.Equal(t, "1", w.WithdrawalIndex)
assert.Equal(t, "1", w.ValidatorIndex)
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.ExecutionAddress)
assert.Equal(t, "1", w.Amount)
assert.Equal(t, uint64(1), w.Index.Uint64())
assert.Equal(t, uint64(1), w.ValidatorIndex.Uint64())
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.Address.String())
assert.Equal(t, uint64(1), w.Amount.Uint64())
}
func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
@@ -966,77 +1048,77 @@ func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.ParentHash,
actual: hexutil.Encode(epr.Data.ExecutionPayload.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: epr.Data.ExecutionPayload.FeeRecipient,
actual: hexutil.Encode(epr.Data.ExecutionPayload.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.StateRoot,
actual: hexutil.Encode(epr.Data.ExecutionPayload.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.ReceiptsRoot,
actual: hexutil.Encode(epr.Data.ExecutionPayload.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: epr.Data.ExecutionPayload.LogsBloom,
actual: hexutil.Encode(epr.Data.ExecutionPayload.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.PrevRandao,
actual: hexutil.Encode(epr.Data.ExecutionPayload.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.BlockNumber,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.GasLimit,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.GasUsed,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.Timestamp,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.ExtraData,
actual: hexutil.Encode(epr.Data.ExecutionPayload.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: epr.Data.ExecutionPayload.BaseFeePerGas,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.BlockHash,
actual: hexutil.Encode(epr.Data.ExecutionPayload.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
{
expected: "2",
actual: epr.Data.ExecutionPayload.BlobGasUsed,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.BlobGasUsed),
name: "ExecPayloadResponse.ExecutionPayload.BlobGasUsed",
},
{
expected: "3",
actual: epr.Data.ExecutionPayload.ExcessBlobGas,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.ExcessBlobGas),
name: "ExecPayloadResponse.ExecutionPayload.ExcessBlobGas",
},
}
@@ -1045,16 +1127,64 @@ func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
}
require.Equal(t, 1, len(epr.Data.ExecutionPayload.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, epr.Data.ExecutionPayload.Transactions[0])
require.Equal(t, txHash, hexutil.Encode(epr.Data.ExecutionPayload.Transactions[0]))
require.Equal(t, 1, len(epr.Data.ExecutionPayload.Withdrawals))
w := epr.Data.ExecutionPayload.Withdrawals[0]
assert.Equal(t, "1", w.WithdrawalIndex)
assert.Equal(t, "1", w.ValidatorIndex)
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.ExecutionAddress)
assert.Equal(t, "1", w.Amount)
assert.Equal(t, "2", epr.Data.ExecutionPayload.BlobGasUsed)
assert.Equal(t, "3", epr.Data.ExecutionPayload.ExcessBlobGas)
assert.Equal(t, uint64(1), w.Index.Uint64())
assert.Equal(t, uint64(1), w.ValidatorIndex.Uint64())
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.Address.String())
assert.Equal(t, uint64(1), w.Amount.Uint64())
assert.Equal(t, uint64(2), uint64(epr.Data.ExecutionPayload.BlobGasUsed))
assert.Equal(t, uint64(3), uint64(epr.Data.ExecutionPayload.ExcessBlobGas))
}
func TestExecutionPayloadResponseToProto(t *testing.T) {
hr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), hr))
p, err := hr.ToProto()
require.NoError(t, err)
parentHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
feeRecipient, err := hexutil.Decode("0xabcf8e0d4e9587369b2301d0790347320302cc09")
require.NoError(t, err)
stateRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
receiptsRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
logsBloom, err := hexutil.Decode("0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
require.NoError(t, err)
prevRandao, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
extraData, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
blockHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
tx, err := hexutil.Decode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")
require.NoError(t, err)
txList := [][]byte{tx}
bfpg, err := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
expected := &v1.ExecutionPayload{
ParentHash: parentHash,
FeeRecipient: feeRecipient,
StateRoot: stateRoot,
ReceiptsRoot: receiptsRoot,
LogsBloom: logsBloom,
PrevRandao: prevRandao,
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: extraData,
BaseFeePerGas: bfpg.SSZBytes(),
BlockHash: blockHash,
Transactions: txList,
}
require.DeepEqual(t, expected, p)
}
func TestExecutionPayloadResponseCapellaToProto(t *testing.T) {
@@ -1222,6 +1352,16 @@ func pbEth1Data() *eth.Eth1Data {
}
}
func TestEth1DataMarshal(t *testing.T) {
ed := &Eth1Data{
Eth1Data: pbEth1Data(),
}
b, err := json.Marshal(ed)
require.NoError(t, err)
expected := `{"deposit_root":"0x0000000000000000000000000000000000000000000000000000000000000000","deposit_count":"23","block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000"}`
require.Equal(t, expected, string(b))
}
func pbSyncAggregate() *eth.SyncAggregate {
return &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 48),
@@ -1229,6 +1369,14 @@ func pbSyncAggregate() *eth.SyncAggregate {
}
}
func TestSyncAggregate_MarshalJSON(t *testing.T) {
sa := &SyncAggregate{pbSyncAggregate()}
b, err := json.Marshal(sa)
require.NoError(t, err)
expected := `{"sync_committee_bits":"0x01","sync_committee_signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}`
require.Equal(t, expected, string(b))
}
func pbDeposit(t *testing.T) *eth.Deposit {
return &eth.Deposit{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
@@ -1241,6 +1389,16 @@ func pbDeposit(t *testing.T) *eth.Deposit {
}
}
func TestDeposit_MarshalJSON(t *testing.T) {
d := &Deposit{
Deposit: pbDeposit(t),
}
b, err := json.Marshal(d)
require.NoError(t, err)
expected := `{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
require.Equal(t, expected, string(b))
}
func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
return &eth.SignedVoluntaryExit{
Exit: &eth.VoluntaryExit{
@@ -1251,6 +1409,16 @@ func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
}
}
func TestVoluntaryExit(t *testing.T) {
ve := &SignedVoluntaryExit{
SignedVoluntaryExit: pbSignedVoluntaryExit(t),
}
b, err := json.Marshal(ve)
require.NoError(t, err)
expected := `{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
require.Equal(t, expected, string(b))
}
func pbAttestation(t *testing.T) *eth.Attestation {
return &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
@@ -1271,6 +1439,16 @@ func pbAttestation(t *testing.T) *eth.Attestation {
}
}
func TestAttestationMarshal(t *testing.T) {
a := &Attestation{
Attestation: pbAttestation(t),
}
b, err := json.Marshal(a)
require.NoError(t, err)
expected := `{"aggregation_bits":"0x01","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
require.Equal(t, expected, string(b))
}
func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
return &eth.AttesterSlashing{
Attestation_1: &eth.IndexedAttestation{
@@ -1311,7 +1489,9 @@ func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
}
func TestAttesterSlashing_MarshalJSON(t *testing.T) {
as := structs.AttesterSlashingFromConsensus(pbAttesterSlashing(t))
as := &AttesterSlashing{
AttesterSlashing: pbAttesterSlashing(t),
}
b, err := json.Marshal(as)
require.NoError(t, err)
expected := `{"attestation_1":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"attestation_2":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
@@ -1419,8 +1599,9 @@ func pbExecutionPayloadHeaderDeneb(t *testing.T) *v1.ExecutionPayloadHeaderDeneb
}
func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
h, err := structs.ExecutionPayloadHeaderFromConsensus(pbExecutionPayloadHeader(t))
require.NoError(t, err)
h := &ExecutionPayloadHeader{
ExecutionPayloadHeader: pbExecutionPayloadHeader(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
@@ -1428,9 +1609,9 @@ func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
}
func TestExecutionPayloadHeaderCapella_MarshalJSON(t *testing.T) {
h, err := structs.ExecutionPayloadHeaderCapellaFromConsensus(pbExecutionPayloadHeaderCapella(t))
require.NoError(t, err)
h := &ExecutionPayloadHeaderCapella{
ExecutionPayloadHeaderCapella: pbExecutionPayloadHeaderCapella(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
@@ -1438,8 +1619,9 @@ func TestExecutionPayloadHeaderCapella_MarshalJSON(t *testing.T) {
}
func TestExecutionPayloadHeaderDeneb_MarshalJSON(t *testing.T) {
h, err := structs.ExecutionPayloadHeaderDenebFromConsensus(pbExecutionPayloadHeaderDeneb(t))
require.NoError(t, err)
h := &ExecutionPayloadHeaderDeneb{
ExecutionPayloadHeaderDeneb: pbExecutionPayloadHeaderDeneb(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","blob_gas_used":"1","excess_blob_gas":"2"}`
@@ -1668,13 +1850,12 @@ func TestRoundTripUint256(t *testing.T) {
func TestRoundTripProtoUint256(t *testing.T) {
h := pbExecutionPayloadHeader(t)
h.BaseFeePerGas = []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
hm, err := structs.ExecutionPayloadHeaderFromConsensus(h)
require.NoError(t, err)
hm := &ExecutionPayloadHeader{ExecutionPayloadHeader: h}
m, err := json.Marshal(hm)
require.NoError(t, err)
hu := &structs.ExecutionPayloadHeader{}
hu := &ExecutionPayloadHeader{}
require.NoError(t, json.Unmarshal(m, hu))
hp, err := hu.ToConsensus()
hp, err := hu.ToProto()
require.NoError(t, err)
require.DeepEqual(t, h.BaseFeePerGas, hp.BaseFeePerGas)
}
@@ -1682,7 +1863,7 @@ func TestRoundTripProtoUint256(t *testing.T) {
func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
expected, err := os.ReadFile("testdata/execution-payload.json")
require.NoError(t, err)
hu := &structs.ExecutionPayloadHeader{}
hu := &ExecutionPayloadHeader{}
require.NoError(t, json.Unmarshal(expected, hu))
m, err := json.Marshal(hu)
require.NoError(t, err)
@@ -1692,14 +1873,14 @@ func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
func TestExecutionPayloadHeaderCapellaRoundtrip(t *testing.T) {
expected, err := os.ReadFile("testdata/execution-payload-capella.json")
require.NoError(t, err)
hu := &structs.ExecutionPayloadHeaderCapella{}
hu := &ExecutionPayloadHeaderCapella{}
require.NoError(t, json.Unmarshal(expected, hu))
m, err := json.Marshal(hu)
require.NoError(t, err)
require.DeepEqual(t, string(expected[0:len(expected)-1]), string(m))
}
func TestErrorMessage_unexpectedStatusErr(t *testing.T) {
func TestErrorMessage_non200Err(t *testing.T) {
mockRequest := &http.Request{
URL: &url.URL{Path: "example.com"},
}
@@ -1779,7 +1960,7 @@ func TestErrorMessage_unexpectedStatusErr(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := unexpectedStatusErr(tt.args, http.StatusOK)
err := non200Err(tt.args)
if err != nil && tt.wantMessage != "" {
require.ErrorContains(t, tt.wantMessage, err)
}
@@ -1813,9 +1994,11 @@ func TestEmptyResponseBody(t *testing.T) {
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal(encoded, epr))
pp, err := epr.ParsePayload()
require.NoError(t, err)
pb, err := pp.PayloadProto()
if err == nil {
require.NoError(t, err)
require.Equal(t, false, pp == nil)
require.Equal(t, false, pb == nil)
} else {
require.ErrorIs(t, err, consensusblocks.ErrNilObject)
}

View File

@@ -4,7 +4,7 @@ import (
"net/url"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestValidHostname(t *testing.T) {

View File

@@ -6,7 +6,7 @@ go_library(
"event_stream.go",
"utils.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/event",
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/event",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",

View File

@@ -7,17 +7,29 @@ import (
"net/url"
"strings"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/client"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/client"
log "github.com/sirupsen/logrus"
)
const (
EventHead = "head"
EventError = "error"
EventConnectionError = "connection_error"
EventHead = "head"
EventBlock = "block"
EventAttestation = "attestation"
EventVoluntaryExit = "voluntary_exit"
EventBlsToExecutionChange = "bls_to_execution_change"
EventProposerSlashing = "proposer_slashing"
EventAttesterSlashing = "attester_slashing"
EventFinalizedCheckpoint = "finalized_checkpoint"
EventChainReorg = "chain_reorg"
EventContributionAndProof = "contribution_and_proof"
EventLightClientFinalityUpdate = "light_client_finality_update"
EventLightClientOptimisticUpdate = "light_client_optimistic_update"
EventPayloadAttributes = "payload_attributes"
EventBlobSidecar = "blob_sidecar"
EventError = "error"
EventConnectionError = "connection_error"
)
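A hedged consumption sketch for the topics above, mirroring the test later in this diff; NewEventStream and Subscribe appear with these signatures in that test, while the host URL and the Event field names (EventType) are assumptions:

// Subscribe to head events only; buffer one event so Subscribe never blocks
// on a slow consumer at startup.
eventsChannel := make(chan *Event, 1)
stream, err := NewEventStream(context.Background(), http.DefaultClient, "http://localhost:3500", []string{EventHead})
if err != nil {
	log.WithError(err).Fatal("could not open event stream")
}
go stream.Subscribe(eventsChannel)
for ev := range eventsChannel {
	log.WithField("type", ev.EventType).Info("received event")
}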
var (

View File

@@ -1,13 +1,14 @@
package event
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/require"
log "github.com/sirupsen/logrus"
)
@@ -29,7 +30,7 @@ func TestNewEventStream(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := NewEventStream(t.Context(), &http.Client{}, tt.host, tt.topics)
_, err := NewEventStream(context.Background(), &http.Client{}, tt.host, tt.topics)
if (err != nil) != tt.wantErr {
t.Errorf("NewEventStream() error = %v, wantErr %v", err, tt.wantErr)
}
@@ -55,7 +56,7 @@ func TestEventStream(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
stream, err := NewEventStream(t.Context(), http.DefaultClient, server.URL, topics)
stream, err := NewEventStream(context.Background(), http.DefaultClient, server.URL, topics)
require.NoError(t, err)
go stream.Subscribe(eventsChannel)
@@ -82,7 +83,8 @@ func TestEventStream(t *testing.T) {
func TestEventStreamRequestError(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
ctx := t.Context()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)

View File

@@ -5,7 +5,7 @@ import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestScanLinesWithCarriage(t *testing.T) {

View File

@@ -3,7 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["client.go"],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/validator",
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/validator",
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",

View File

@@ -6,9 +6,9 @@ import (
"fmt"
"strings"
"github.com/OffchainLabs/prysm/v6/api/client"
"github.com/OffchainLabs/prysm/v6/validator/rpc"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/validator/rpc"
)
const (

View File

@@ -6,7 +6,7 @@ go_library(
"grpcutils.go",
"parameters.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/grpc",
importpath = "github.com/prysmaticlabs/prysm/v5/api/grpc",
visibility = ["//visibility:public"],
deps = [
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -1,10 +1,11 @@
package grpc
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/grpc/metadata"
)
@@ -15,7 +16,7 @@ type customErrorData struct {
func TestAppendHeaders(t *testing.T) {
t.Run("one_header", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value1"})
ctx := AppendHeaders(context.Background(), []string{"first=value1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -23,7 +24,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("multiple_headers", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value1", "second=value2"})
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second=value2"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 2, md.Len(), "MetadataV0 contains wrong number of values")
@@ -32,7 +33,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("one_empty_header", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value1", ""})
ctx := AppendHeaders(context.Background(), []string{"first=value1", ""})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -41,7 +42,7 @@ func TestAppendHeaders(t *testing.T) {
t.Run("incorrect_header", func(t *testing.T) {
logHook := logTest.NewGlobal()
ctx := AppendHeaders(t.Context(), []string{"first=value1", "second"})
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -50,7 +51,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("header_value_with_equal_sign", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value=1"})
ctx := AppendHeaders(context.Background(), []string{"first=value=1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
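The tests above pin down the behaviour: each "key=value" entry becomes outgoing gRPC metadata, malformed entries are skipped, and values may themselves contain "=" (per the last sub-test). A hedged usage sketch, with illustrative header values:

// Attach auth and tracing headers before dialing out; metadata is
// google.golang.org/grpc/metadata, already imported by these tests.
ctx := AppendHeaders(context.Background(), []string{"authorization=Bearer token123", "x-trace-id=42"})
md, ok := metadata.FromOutgoingContext(ctx)
if ok {
	fmt.Println(md.Get("authorization")) // ["Bearer token123"]
}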

View File

@@ -1,9 +1,9 @@
package api
import (
"github.com/OffchainLabs/prysm/v6/crypto/rand"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
)
// GenerateRandomHexString generates a random hex string that follows the standards for jwt token
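The function body is cut off by this excerpt; what follows is a hedged usage sketch that assumes a (string, error) return shape and an illustrative output path, neither of which is shown in this diff:

// Generate a fresh hex-encoded secret and persist it for use as a JWT secret.
secret, err := GenerateRandomHexString()
if err != nil {
	return errors.Wrap(err, "could not generate jwt secret")
}
if err := os.WriteFile("jwt.hex", []byte(secret), 0o600); err != nil {
	return errors.Wrap(err, "could not write jwt secret")
}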

View File

@@ -3,7 +3,7 @@ package api
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestGenerateRandomHexString(t *testing.T) {

View File

@@ -3,7 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["pagination.go"],
importpath = "github.com/OffchainLabs/prysm/v6/api/pagination",
importpath = "github.com/prysmaticlabs/prysm/v5/api/pagination",
visibility = ["//visibility:public"],
deps = [
"//config/params:go_default_library",

View File

@@ -5,8 +5,8 @@ import (
"fmt"
"strconv"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
// StartAndEndPage takes in the requested page token, wanted page size, total page size.

View File

@@ -3,9 +3,9 @@ package pagination_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/api/pagination"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/api/pagination"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestStartAndEndPage(t *testing.T) {

View File

@@ -3,7 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["error.go"],
importpath = "github.com/OffchainLabs/prysm/v6/api/server",
importpath = "github.com/prysmaticlabs/prysm/v5/api/server",
visibility = ["//visibility:public"],
)

View File

@@ -4,7 +4,7 @@ import (
"errors"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
)
func TestDecodeError(t *testing.T) {

View File

@@ -7,7 +7,7 @@ go_library(
"options.go",
"server.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/server/httprest",
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/httprest",
visibility = ["//visibility:public"],
deps = [
"//api/server/middleware:go_default_library",

View File

@@ -5,7 +5,7 @@ import (
"net/http"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
)
// Option is a http rest server functional parameter type.

View File

@@ -5,9 +5,9 @@ import (
"net/http"
"time"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
"github.com/OffchainLabs/prysm/v6/runtime"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/runtime"
)
var _ runtime.Service = (*Server)(nil)

View File

@@ -1,6 +1,7 @@
package httprest
import (
"context"
"flag"
"fmt"
"net"
@@ -9,9 +10,9 @@ import (
"net/url"
"testing"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
@@ -33,7 +34,7 @@ func TestServer_StartStop(t *testing.T) {
WithRouter(handler),
}
g, err := New(t.Context(), opts...)
g, err := New(context.Background(), opts...)
require.NoError(t, err)
g.Start()
@@ -61,7 +62,7 @@ func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
WithRouter(handler),
}
g, err := New(t.Context(), opts...)
g, err := New(context.Background(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()

View File

@@ -6,14 +6,9 @@ go_library(
"middleware.go",
"util.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/server/middleware",
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/middleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/apiutil:go_default_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
deps = ["@com_github_rs_cors//:go_default_library"],
)
go_test(
@@ -27,6 +22,5 @@ go_test(
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,15 +1,11 @@
package middleware
import (
"compress/gzip"
"fmt"
"net/http"
"strings"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/apiutil"
"github.com/rs/cors"
log "github.com/sirupsen/logrus"
)
type Middleware func(http.Handler) http.Handler
@@ -75,64 +71,47 @@ func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if _, ok := apiutil.Negotiate(r.Header.Get("Accept"), serverAcceptedTypes); !ok {
http.Error(w, "Not Acceptable", http.StatusNotAcceptable)
return
}
next.ServeHTTP(w, r)
})
}
}
// AcceptEncodingHeaderHandler compresses the response before sending it back to the client, if gzip is supported.
func AcceptEncodingHeaderHandler() Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
acceptHeader := r.Header.Get("Accept")
// the Accept header is optional; skip the check when it is not provided
if acceptHeader == "" {
next.ServeHTTP(w, r)
return
}
gz := gzip.NewWriter(w)
gzipRW := &gzipResponseWriter{gz: gz, ResponseWriter: w}
defer func() {
if !gzipRW.zip {
return
accepted := false
acceptTypes := strings.Split(acceptHeader, ",")
// follows rules defined in https://datatracker.ietf.org/doc/html/rfc2616#section-14.1
for _, acceptType := range acceptTypes {
acceptType = strings.TrimSpace(acceptType)
if acceptType == "*/*" {
accepted = true
break
}
if err := gz.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip writer")
for _, serverAcceptedType := range serverAcceptedTypes {
if strings.HasPrefix(acceptType, serverAcceptedType) {
accepted = true
break
}
if acceptType != "/*" && strings.HasSuffix(acceptType, "/*") && strings.HasPrefix(serverAcceptedType, acceptType[:len(acceptType)-2]) {
accepted = true
break
}
}
}()
if accepted {
break
}
}
next.ServeHTTP(gzipRW, r)
if !accepted {
http.Error(w, fmt.Sprintf("Not Acceptable: %s", acceptHeader), http.StatusNotAcceptable)
return
}
next.ServeHTTP(w, r)
})
}
}
type gzipResponseWriter struct {
gz *gzip.Writer
http.ResponseWriter
zip bool
}
func (g *gzipResponseWriter) WriteHeader(statusCode int) {
if strings.Contains(g.Header().Get("Content-Type"), api.JsonMediaType) {
// Removing the current Content-Length because zipping will change it.
g.Header().Del("Content-Length")
g.Header().Set("Content-Encoding", "gzip")
g.zip = true
}
g.ResponseWriter.WriteHeader(statusCode)
}
func (g *gzipResponseWriter) Write(b []byte) (int, error) {
if g.zip {
return g.gz.Write(b)
}
return g.ResponseWriter.Write(b)
}
func MiddlewareChain(h http.Handler, mw []Middleware) http.Handler {
if len(mw) < 1 {
return h

View File

@@ -1,35 +1,14 @@
package middleware
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/testing/require"
log "github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
// frozenHeaderRecorder allows asserting that response headers were not modified
// after the call to WriteHeader.
//
// Its purpose is to have a regression test for https://github.com/OffchainLabs/prysm/pull/15499.
type frozenHeaderRecorder struct {
*httptest.ResponseRecorder
frozenHeader http.Header
}
func (r *frozenHeaderRecorder) WriteHeader(code int) {
if r.frozenHeader != nil {
return
}
r.ResponseRecorder.WriteHeader(code)
r.frozenHeader = r.ResponseRecorder.Header().Clone()
}
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
@@ -145,90 +124,6 @@ func TestContentTypeHandler(t *testing.T) {
}
}
func TestAcceptEncodingHeaderHandler(t *testing.T) {
dummyContent := "Test gzip middleware content"
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", r.Header.Get("Accept"))
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte(dummyContent))
require.NoError(t, err)
})
handler := AcceptEncodingHeaderHandler()(nextHandler)
tests := []struct {
name string
accept string
acceptEncoding string
expectCompressed bool
}{
{
name: "Accept gzip",
accept: api.JsonMediaType,
acceptEncoding: "gzip",
expectCompressed: true,
},
{
name: "Accept multiple encodings",
accept: api.JsonMediaType,
acceptEncoding: "deflate, gzip",
expectCompressed: true,
},
{
name: "Accept unsupported encoding",
accept: api.JsonMediaType,
acceptEncoding: "deflate",
expectCompressed: false,
},
{
name: "No accept encoding header",
accept: api.JsonMediaType,
acceptEncoding: "",
expectCompressed: false,
},
{
name: "SSZ",
accept: api.OctetStreamMediaType,
acceptEncoding: "gzip",
expectCompressed: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
req.Header.Set("Accept", tt.accept)
if tt.acceptEncoding != "" {
req.Header.Set("Accept-Encoding", tt.acceptEncoding)
}
rr := &frozenHeaderRecorder{ResponseRecorder: httptest.NewRecorder()}
handler.ServeHTTP(rr, req)
if tt.expectCompressed {
require.Equal(t, "gzip", rr.frozenHeader.Get("Content-Encoding"), "Expected Content-Encoding header to be 'gzip'")
compressedBody := rr.Body.Bytes()
require.NotEqual(t, dummyContent, string(compressedBody), "Response body should be compressed and differ from the original")
gzReader, err := gzip.NewReader(bytes.NewReader(compressedBody))
require.NoError(t, err, "Failed to create gzipReader")
defer func() {
if err := gzReader.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip reader")
}
}()
decompressedBody, err := io.ReadAll(gzReader)
require.NoError(t, err, "Failed to decompress response body")
require.Equal(t, dummyContent, string(decompressedBody), "Decompressed content should match the original")
} else {
require.Equal(t, dummyContent, rr.Body.String(), "Response body should be uncompressed and match the original")
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}

View File

@@ -3,8 +3,8 @@ package middleware
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestNormalizeQueryValues(t *testing.T) {

View File

@@ -4,11 +4,9 @@ go_library(
name = "go_default_library",
srcs = [
"block.go",
"block_execution.go",
"conversions.go",
"conversions_blob.go",
"conversions_block.go",
"conversions_block_execution.go",
"conversions_lightclient.go",
"conversions_state.go",
"endpoints_beacon.go",
@@ -24,19 +22,17 @@ go_library(
"other.go",
"state.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/server/structs",
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/structs",
visibility = ["//visibility:public"],
deps = [
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
@@ -46,23 +42,15 @@ go_library(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"conversions_block_execution_test.go",
"conversions_test.go",
],
srcs = ["conversions_test.go"],
embed = [":go_default_library"],
deps = [
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
],
)

View File

@@ -186,6 +186,40 @@ type BlindedBeaconBlockBodyBellatrix struct {
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header"`
}
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
@@ -264,6 +298,42 @@ type BlindedBeaconBlockBodyCapella struct {
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
}
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
@@ -356,6 +426,46 @@ type BlindedBeaconBlockBodyDeneb struct {
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
@@ -450,6 +560,14 @@ type BlindedBeaconBlockBodyElectra struct {
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type (
ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
}
)
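Because every field in these structs is a plain string carrying a json tag, they marshal directly to the beacon-API wire format. A hypothetical, standalone illustration (field values shortened for brevity; import path taken from the right-hand side of this diff):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/api/server/structs"
)

func main() {
	reqs := &structs.ExecutionRequests{
		Deposits: []*structs.DepositRequest{{
			Pubkey:                "0xb0...", // 48-byte BLS pubkey, shortened here
			WithdrawalCredentials: "0x01...", // 32-byte credentials, shortened here
			Amount:                "32000000000",
			Signature:             "0x99...", // 96-byte BLS signature, shortened here
			Index:                 "0",
		}},
		Withdrawals:    []*structs.WithdrawalRequest{},
		Consolidations: []*structs.ConsolidationRequest{},
	}
	out, err := json.MarshalIndent(reqs, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints the deposits/withdrawals/consolidations JSON object
}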
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------


@@ -1,157 +0,0 @@
package structs
// ----------------------------------------------------------------------------
// Bellatrix
// ----------------------------------------------------------------------------
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
type ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
}
type DepositRequest struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Index string `json:"index"`
}
type WithdrawalRequest struct {
SourceAddress string `json:"source_address"`
ValidatorPubkey string `json:"validator_pubkey"`
Amount string `json:"amount"`
}
type ConsolidationRequest struct {
SourceAddress string `json:"source_address"`
SourcePubkey string `json:"source_pubkey"`
TargetPubkey string `json:"target_pubkey"`
}
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------


@@ -4,21 +4,20 @@ import (
"fmt"
"strconv"
"github.com/OffchainLabs/prysm/v6/api/server"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/consensus-types/validator"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethv1 "github.com/OffchainLabs/prysm/v6/proto/eth/v1"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v5/container/slice"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var errNilValue = errors.New("nil value")
@@ -700,11 +699,6 @@ func (m *SyncCommitteeMessage) ToConsensus() (*eth.SyncCommitteeMessage, error)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
// Add validation to check if the signature is valid BLS format
_, err = bls.SignatureFromBytes(sig)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SyncCommitteeMessage{
Slot: primitives.Slot(slot),
@@ -895,6 +889,126 @@ func WithdrawalFromConsensus(w *enginev1.Withdrawal) *Withdrawal {
}
}
func WithdrawalRequestsFromConsensus(ws []*enginev1.WithdrawalRequest) []*WithdrawalRequest {
result := make([]*WithdrawalRequest, len(ws))
for i, w := range ws {
result[i] = WithdrawalRequestFromConsensus(w)
}
return result
}
func WithdrawalRequestFromConsensus(w *enginev1.WithdrawalRequest) *WithdrawalRequest {
return &WithdrawalRequest{
SourceAddress: hexutil.Encode(w.SourceAddress),
ValidatorPubkey: hexutil.Encode(w.ValidatorPubkey),
Amount: fmt.Sprintf("%d", w.Amount),
}
}
func (w *WithdrawalRequest) ToConsensus() (*enginev1.WithdrawalRequest, error) {
src, err := bytesutil.DecodeHexWithLength(w.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
pubkey, err := bytesutil.DecodeHexWithLength(w.ValidatorPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "ValidatorPubkey")
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
return &enginev1.WithdrawalRequest{
SourceAddress: src,
ValidatorPubkey: pubkey,
Amount: amount,
}, nil
}
func ConsolidationRequestsFromConsensus(cs []*enginev1.ConsolidationRequest) []*ConsolidationRequest {
result := make([]*ConsolidationRequest, len(cs))
for i, c := range cs {
result[i] = ConsolidationRequestFromConsensus(c)
}
return result
}
func ConsolidationRequestFromConsensus(c *enginev1.ConsolidationRequest) *ConsolidationRequest {
return &ConsolidationRequest{
SourceAddress: hexutil.Encode(c.SourceAddress),
SourcePubkey: hexutil.Encode(c.SourcePubkey),
TargetPubkey: hexutil.Encode(c.TargetPubkey),
}
}
func (c *ConsolidationRequest) ToConsensus() (*enginev1.ConsolidationRequest, error) {
srcAddress, err := bytesutil.DecodeHexWithLength(c.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
srcPubkey, err := bytesutil.DecodeHexWithLength(c.SourcePubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourcePubkey")
}
targetPubkey, err := bytesutil.DecodeHexWithLength(c.TargetPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "TargetPubkey")
}
return &enginev1.ConsolidationRequest{
SourceAddress: srcAddress,
SourcePubkey: srcPubkey,
TargetPubkey: targetPubkey,
}, nil
}
func DepositRequestsFromConsensus(ds []*enginev1.DepositRequest) []*DepositRequest {
result := make([]*DepositRequest, len(ds))
for i, d := range ds {
result[i] = DepositRequestFromConsensus(d)
}
return result
}
func DepositRequestFromConsensus(d *enginev1.DepositRequest) *DepositRequest {
return &DepositRequest{
Pubkey: hexutil.Encode(d.Pubkey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Index: fmt.Sprintf("%d", d.Index),
}
}
func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
pubkey, err := bytesutil.DecodeHexWithLength(d.Pubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "Pubkey")
}
withdrawalCredentials, err := bytesutil.DecodeHexWithLength(d.WithdrawalCredentials, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "WithdrawalCredentials")
}
amount, err := strconv.ParseUint(d.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
sig, err := bytesutil.DecodeHexWithLength(d.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
index, err := strconv.ParseUint(d.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
return &enginev1.DepositRequest{
Pubkey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: sig,
Index: index,
}, nil
}
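A hypothetical round trip through the request helpers added above (package paths follow the v5 import scheme on the right-hand side of this diff). The string fields produced by DepositRequestFromConsensus decode back without loss:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/api/server/structs"
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)

func main() {
	src := &enginev1.DepositRequest{
		Pubkey:                make([]byte, 48), // BLS pubkey length
		WithdrawalCredentials: make([]byte, 32), // root length
		Amount:                32_000_000_000,   // gwei
		Signature:             make([]byte, 96), // BLS signature length
		Index:                 7,
	}
	// Consensus -> API representation: bytes become 0x-prefixed hex,
	// integers become decimal strings.
	api := structs.DepositRequestFromConsensus(src)
	// API -> consensus: lengths and number formats are validated.
	back, err := api.ToConsensus()
	if err != nil {
		panic(err)
	}
	fmt.Println(back.Amount == src.Amount, back.Index == src.Index) // true true
}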
func ProposerSlashingsToConsensus(src []*ProposerSlashing) ([]*eth.ProposerSlashing, error) {
if src == nil {
return nil, server.NewDecodeError(errNilValue, "ProposerSlashings")


@@ -3,9 +3,9 @@ package structs
import (
"strconv"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
func (sc *Sidecar) ToConsensus() (*eth.BlobSidecar, error) {

File diff suppressed because it is too large.


@@ -1,990 +0,0 @@
package structs
import (
"fmt"
"strconv"
"github.com/OffchainLabs/prysm/v6/api/server"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
// ----------------------------------------------------------------------------
// Bellatrix
// ----------------------------------------------------------------------------
func ExecutionPayloadFromConsensus(payload *enginev1.ExecutionPayload) (*ExecutionPayload, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
transactions := make([]string, len(payload.Transactions))
for i, tx := range payload.Transactions {
transactions[i] = hexutil.Encode(tx)
}
return &ExecutionPayload{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
Transactions: transactions,
}, nil
}
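The BaseFeePerGas handling above converts between SSZ's 32-byte little-endian uint256 and a decimal string. A generic sketch of that conversion using math/big (for illustration only; this is not Prysm's sszBytesToUint256String or bytesutil.Uint256ToSSZBytes):

package main

import (
	"fmt"
	"math/big"
)

// littleEndianToDecimal interprets b as a little-endian unsigned integer
// and renders it as a base-10 string, the format used in the API structs.
func littleEndianToDecimal(b []byte) string {
	be := make([]byte, len(b))
	for i, v := range b {
		be[len(b)-1-i] = v // big.Int expects big-endian bytes
	}
	return new(big.Int).SetBytes(be).String()
}

func main() {
	ssz := make([]byte, 32)
	ssz[0] = 0x07 // little-endian encoding of the value 7
	fmt.Println(littleEndianToDecimal(ssz)) // prints 7
}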
func (e *ExecutionPayload) ToConsensus() (*enginev1.ExecutionPayload, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayload")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(e.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Transactions")
}
payloadTxs := make([][]byte, len(e.Transactions))
for i, tx := range e.Transactions {
payloadTxs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Transactions[%d]", i))
}
}
return &enginev1.ExecutionPayload{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: payloadTxs,
}, nil
}
func (r *ExecutionPayload) PayloadProto() (proto.Message, error) {
if r == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil execution payload")
}
return r.ToConsensus()
}
func ExecutionPayloadHeaderFromConsensus(payload *enginev1.ExecutionPayloadHeader) (*ExecutionPayloadHeader, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
return &ExecutionPayloadHeader{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
}, nil
}
func (e *ExecutionPayloadHeader) ToConsensus() (*enginev1.ExecutionPayloadHeader, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadHeader")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockHash")
}
payloadTxsRoot, err := bytesutil.DecodeHexWithLength(e.TransactionsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.TransactionsRoot")
}
return &enginev1.ExecutionPayloadHeader{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
TransactionsRoot: payloadTxsRoot,
}, nil
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
func ExecutionPayloadCapellaFromConsensus(payload *enginev1.ExecutionPayloadCapella) (*ExecutionPayloadCapella, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
transactions := make([]string, len(payload.Transactions))
for i, tx := range payload.Transactions {
transactions[i] = hexutil.Encode(tx)
}
return &ExecutionPayloadCapella{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
Transactions: transactions,
Withdrawals: WithdrawalsFromConsensus(payload.Withdrawals),
}, nil
}
func (e *ExecutionPayloadCapella) ToConsensus() (*enginev1.ExecutionPayloadCapella, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayload")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(e.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Transactions")
}
payloadTxs := make([][]byte, len(e.Transactions))
for i, tx := range e.Transactions {
payloadTxs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Transactions[%d]", i))
}
}
err = slice.VerifyMaxLength(e.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Withdrawals")
}
withdrawals := make([]*enginev1.Withdrawal, len(e.Withdrawals))
for i, w := range e.Withdrawals {
withdrawalIndex, err := strconv.ParseUint(w.WithdrawalIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].WithdrawalIndex", i))
}
validatorIndex, err := strconv.ParseUint(w.ValidatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ValidatorIndex", i))
}
address, err := bytesutil.DecodeHexWithLength(w.ExecutionAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ExecutionAddress", i))
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].Amount", i))
}
withdrawals[i] = &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(validatorIndex),
Address: address,
Amount: amount,
}
}
return &enginev1.ExecutionPayloadCapella{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: payloadTxs,
Withdrawals: withdrawals,
}, nil
}
func (p *ExecutionPayloadCapella) PayloadProto() (proto.Message, error) {
if p == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil capella execution payload")
}
return p.ToConsensus()
}
func ExecutionPayloadHeaderCapellaFromConsensus(payload *enginev1.ExecutionPayloadHeaderCapella) (*ExecutionPayloadHeaderCapella, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
return &ExecutionPayloadHeaderCapella{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
WithdrawalsRoot: hexutil.Encode(payload.WithdrawalsRoot),
}, nil
}
func (e *ExecutionPayloadHeaderCapella) ToConsensus() (*enginev1.ExecutionPayloadHeaderCapella, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadHeader")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithMaxLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockHash")
}
payloadTxsRoot, err := bytesutil.DecodeHexWithMaxLength(e.TransactionsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.TransactionsRoot")
}
payloadWithdrawalsRoot, err := bytesutil.DecodeHexWithMaxLength(e.WithdrawalsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.WithdrawalsRoot")
}
return &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
TransactionsRoot: payloadTxsRoot,
WithdrawalsRoot: payloadWithdrawalsRoot,
}, nil
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
func ExecutionPayloadDenebFromConsensus(payload *enginev1.ExecutionPayloadDeneb) (*ExecutionPayloadDeneb, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
transactions := make([]string, len(payload.Transactions))
for i, tx := range payload.Transactions {
transactions[i] = hexutil.Encode(tx)
}
return &ExecutionPayloadDeneb{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
Transactions: transactions,
Withdrawals: WithdrawalsFromConsensus(payload.Withdrawals),
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
}, nil
}
func (e *ExecutionPayloadDeneb) ToConsensus() (*enginev1.ExecutionPayloadDeneb, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayload")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlockHash")
}
err = slice.VerifyMaxLength(e.Transactions, fieldparams.MaxTxsPerPayloadLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Transactions")
}
txs := make([][]byte, len(e.Transactions))
for i, tx := range e.Transactions {
txs[i], err = bytesutil.DecodeHexWithMaxLength(tx, fieldparams.MaxBytesPerTxLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Transactions[%d]", i))
}
}
err = slice.VerifyMaxLength(e.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.Withdrawals")
}
withdrawals := make([]*enginev1.Withdrawal, len(e.Withdrawals))
for i, w := range e.Withdrawals {
withdrawalIndex, err := strconv.ParseUint(w.WithdrawalIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].WithdrawalIndex", i))
}
validatorIndex, err := strconv.ParseUint(w.ValidatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ValidatorIndex", i))
}
address, err := bytesutil.DecodeHexWithLength(w.ExecutionAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].ExecutionAddress", i))
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionPayload.Withdrawals[%d].Amount", i))
}
withdrawals[i] = &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(validatorIndex),
Address: address,
Amount: amount,
}
}
payloadBlobGasUsed, err := strconv.ParseUint(e.BlobGasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.BlobGasUsed")
}
payloadExcessBlobGas, err := strconv.ParseUint(e.ExcessBlobGas, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayload.ExcessBlobGas")
}
return &enginev1.ExecutionPayloadDeneb{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
Transactions: txs,
Withdrawals: withdrawals,
BlobGasUsed: payloadBlobGasUsed,
ExcessBlobGas: payloadExcessBlobGas,
}, nil
}
func ExecutionPayloadHeaderDenebFromConsensus(payload *enginev1.ExecutionPayloadHeaderDeneb) (*ExecutionPayloadHeaderDeneb, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
return nil, err
}
return &ExecutionPayloadHeaderDeneb{
ParentHash: hexutil.Encode(payload.ParentHash),
FeeRecipient: hexutil.Encode(payload.FeeRecipient),
StateRoot: hexutil.Encode(payload.StateRoot),
ReceiptsRoot: hexutil.Encode(payload.ReceiptsRoot),
LogsBloom: hexutil.Encode(payload.LogsBloom),
PrevRandao: hexutil.Encode(payload.PrevRandao),
BlockNumber: fmt.Sprintf("%d", payload.BlockNumber),
GasLimit: fmt.Sprintf("%d", payload.GasLimit),
GasUsed: fmt.Sprintf("%d", payload.GasUsed),
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
WithdrawalsRoot: hexutil.Encode(payload.WithdrawalsRoot),
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
}, nil
}
func (e *ExecutionPayloadHeaderDeneb) ToConsensus() (*enginev1.ExecutionPayloadHeaderDeneb, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadHeader")
}
payloadParentHash, err := bytesutil.DecodeHexWithLength(e.ParentHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ParentHash")
}
payloadFeeRecipient, err := bytesutil.DecodeHexWithLength(e.FeeRecipient, fieldparams.FeeRecipientLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.FeeRecipient")
}
payloadStateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.StateRoot")
}
payloadReceiptsRoot, err := bytesutil.DecodeHexWithLength(e.ReceiptsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ReceiptsRoot")
}
payloadLogsBloom, err := bytesutil.DecodeHexWithLength(e.LogsBloom, fieldparams.LogsBloomLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.LogsBloom")
}
payloadPrevRandao, err := bytesutil.DecodeHexWithLength(e.PrevRandao, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.PrevRandao")
}
payloadBlockNumber, err := strconv.ParseUint(e.BlockNumber, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockNumber")
}
payloadGasLimit, err := strconv.ParseUint(e.GasLimit, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasLimit")
}
payloadGasUsed, err := strconv.ParseUint(e.GasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.GasUsed")
}
payloadTimestamp, err := strconv.ParseUint(e.Timestamp, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.Timestamp")
}
payloadExtraData, err := bytesutil.DecodeHexWithMaxLength(e.ExtraData, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExtraData")
}
payloadBaseFeePerGas, err := bytesutil.Uint256ToSSZBytes(e.BaseFeePerGas)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BaseFeePerGas")
}
payloadBlockHash, err := bytesutil.DecodeHexWithLength(e.BlockHash, common.HashLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlockHash")
}
payloadTxsRoot, err := bytesutil.DecodeHexWithLength(e.TransactionsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.TransactionsRoot")
}
payloadWithdrawalsRoot, err := bytesutil.DecodeHexWithLength(e.WithdrawalsRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.WithdrawalsRoot")
}
payloadBlobGasUsed, err := strconv.ParseUint(e.BlobGasUsed, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.BlobGasUsed")
}
payloadExcessBlobGas, err := strconv.ParseUint(e.ExcessBlobGas, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayloadHeader.ExcessBlobGas")
}
return &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadParentHash,
FeeRecipient: payloadFeeRecipient,
StateRoot: payloadStateRoot,
ReceiptsRoot: payloadReceiptsRoot,
LogsBloom: payloadLogsBloom,
PrevRandao: payloadPrevRandao,
BlockNumber: payloadBlockNumber,
GasLimit: payloadGasLimit,
GasUsed: payloadGasUsed,
Timestamp: payloadTimestamp,
ExtraData: payloadExtraData,
BaseFeePerGas: payloadBaseFeePerGas,
BlockHash: payloadBlockHash,
TransactionsRoot: payloadTxsRoot,
WithdrawalsRoot: payloadWithdrawalsRoot,
BlobGasUsed: payloadBlobGasUsed,
ExcessBlobGas: payloadExcessBlobGas,
}, nil
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
var (
ExecutionPayloadElectraFromConsensus = ExecutionPayloadDenebFromConsensus
ExecutionPayloadHeaderElectraFromConsensus = ExecutionPayloadHeaderDenebFromConsensus
)
func WithdrawalRequestsFromConsensus(ws []*enginev1.WithdrawalRequest) []*WithdrawalRequest {
result := make([]*WithdrawalRequest, len(ws))
for i, w := range ws {
result[i] = WithdrawalRequestFromConsensus(w)
}
return result
}
func WithdrawalRequestFromConsensus(w *enginev1.WithdrawalRequest) *WithdrawalRequest {
return &WithdrawalRequest{
SourceAddress: hexutil.Encode(w.SourceAddress),
ValidatorPubkey: hexutil.Encode(w.ValidatorPubkey),
Amount: fmt.Sprintf("%d", w.Amount),
}
}
func (w *WithdrawalRequest) ToConsensus() (*enginev1.WithdrawalRequest, error) {
if w == nil {
return nil, server.NewDecodeError(errNilValue, "WithdrawalRequest")
}
src, err := bytesutil.DecodeHexWithLength(w.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
pubkey, err := bytesutil.DecodeHexWithLength(w.ValidatorPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "ValidatorPubkey")
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
return &enginev1.WithdrawalRequest{
SourceAddress: src,
ValidatorPubkey: pubkey,
Amount: amount,
}, nil
}
func ConsolidationRequestsFromConsensus(cs []*enginev1.ConsolidationRequest) []*ConsolidationRequest {
result := make([]*ConsolidationRequest, len(cs))
for i, c := range cs {
result[i] = ConsolidationRequestFromConsensus(c)
}
return result
}
func ConsolidationRequestFromConsensus(c *enginev1.ConsolidationRequest) *ConsolidationRequest {
return &ConsolidationRequest{
SourceAddress: hexutil.Encode(c.SourceAddress),
SourcePubkey: hexutil.Encode(c.SourcePubkey),
TargetPubkey: hexutil.Encode(c.TargetPubkey),
}
}
func (c *ConsolidationRequest) ToConsensus() (*enginev1.ConsolidationRequest, error) {
if c == nil {
return nil, server.NewDecodeError(errNilValue, "ConsolidationRequest")
}
srcAddress, err := bytesutil.DecodeHexWithLength(c.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
srcPubkey, err := bytesutil.DecodeHexWithLength(c.SourcePubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourcePubkey")
}
targetPubkey, err := bytesutil.DecodeHexWithLength(c.TargetPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "TargetPubkey")
}
return &enginev1.ConsolidationRequest{
SourceAddress: srcAddress,
SourcePubkey: srcPubkey,
TargetPubkey: targetPubkey,
}, nil
}
func DepositRequestsFromConsensus(ds []*enginev1.DepositRequest) []*DepositRequest {
result := make([]*DepositRequest, len(ds))
for i, d := range ds {
result[i] = DepositRequestFromConsensus(d)
}
return result
}
func DepositRequestFromConsensus(d *enginev1.DepositRequest) *DepositRequest {
return &DepositRequest{
Pubkey: hexutil.Encode(d.Pubkey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Index: fmt.Sprintf("%d", d.Index),
}
}
func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
if d == nil {
return nil, server.NewDecodeError(errNilValue, "DepositRequest")
}
pubkey, err := bytesutil.DecodeHexWithLength(d.Pubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "Pubkey")
}
withdrawalCredentials, err := bytesutil.DecodeHexWithLength(d.WithdrawalCredentials, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "WithdrawalCredentials")
}
amount, err := strconv.ParseUint(d.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
sig, err := bytesutil.DecodeHexWithLength(d.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
index, err := strconv.ParseUint(d.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
return &enginev1.DepositRequest{
Pubkey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: sig,
Index: index,
}, nil
}
func ExecutionRequestsFromConsensus(er *enginev1.ExecutionRequests) *ExecutionRequests {
return &ExecutionRequests{
Deposits: DepositRequestsFromConsensus(er.Deposits),
Withdrawals: WithdrawalRequestsFromConsensus(er.Withdrawals),
Consolidations: ConsolidationRequestsFromConsensus(er.Consolidations),
}
}
func (e *ExecutionRequests) ToConsensus() (*enginev1.ExecutionRequests, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionRequests")
}
var err error
if err = slice.VerifyMaxLength(e.Deposits, params.BeaconConfig().MaxDepositRequestsPerPayload); err != nil {
return nil, err
}
depositRequests := make([]*enginev1.DepositRequest, len(e.Deposits))
for i, d := range e.Deposits {
depositRequests[i], err = d.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionRequests.Deposits[%d]", i))
}
}
if err = slice.VerifyMaxLength(e.Withdrawals, params.BeaconConfig().MaxWithdrawalRequestsPerPayload); err != nil {
return nil, err
}
withdrawalRequests := make([]*enginev1.WithdrawalRequest, len(e.Withdrawals))
for i, w := range e.Withdrawals {
withdrawalRequests[i], err = w.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionRequests.Withdrawals[%d]", i))
}
}
if err = slice.VerifyMaxLength(e.Consolidations, params.BeaconConfig().MaxConsolidationsRequestsPerPayload); err != nil {
return nil, err
}
consolidationRequests := make([]*enginev1.ConsolidationRequest, len(e.Consolidations))
for i, c := range e.Consolidations {
consolidationRequests[i], err = c.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("ExecutionRequests.Consolidations[%d]", i))
}
}
return &enginev1.ExecutionRequests{
Deposits: depositRequests,
Withdrawals: withdrawalRequests,
Consolidations: consolidationRequests,
}, nil
}
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------
var (
ExecutionPayloadFuluFromConsensus = ExecutionPayloadDenebFromConsensus
ExecutionPayloadHeaderFuluFromConsensus = ExecutionPayloadHeaderDenebFromConsensus
BeaconBlockFuluFromConsensus = BeaconBlockElectraFromConsensus
)


@@ -1,563 +0,0 @@
package structs
import (
"fmt"
"testing"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
)
func fillByteSlice(sliceLength int, value byte) []byte {
bytes := make([]byte, sliceLength)
for index := range bytes {
bytes[index] = value
}
return bytes
}
// TestExecutionPayloadFromConsensus_HappyPath checks the
// ExecutionPayloadFromConsensus function under normal conditions.
func TestExecutionPayloadFromConsensus_HappyPath(t *testing.T) {
consensusPayload := &enginev1.ExecutionPayload{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 12345,
GasLimit: 15000000,
GasUsed: 8000000,
Timestamp: 1680000000,
ExtraData: fillByteSlice(8, 0x11),
BaseFeePerGas: fillByteSlice(32, 0x01),
BlockHash: fillByteSlice(common.HashLength, 0x22),
Transactions: [][]byte{
fillByteSlice(10, 0x33),
fillByteSlice(10, 0x44),
},
}
result, err := ExecutionPayloadFromConsensus(consensusPayload)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, hexutil.Encode(consensusPayload.ParentHash), result.ParentHash)
require.Equal(t, hexutil.Encode(consensusPayload.FeeRecipient), result.FeeRecipient)
require.Equal(t, hexutil.Encode(consensusPayload.StateRoot), result.StateRoot)
require.Equal(t, hexutil.Encode(consensusPayload.ReceiptsRoot), result.ReceiptsRoot)
require.Equal(t, fmt.Sprintf("%d", consensusPayload.BlockNumber), result.BlockNumber)
}
// TestExecutionPayload_ToConsensus_HappyPath checks the
// (*ExecutionPayload).ToConsensus function under normal conditions.
func TestExecutionPayload_ToConsensus_HappyPath(t *testing.T) {
payload := &ExecutionPayload{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "12345",
GasLimit: "15000000",
GasUsed: "8000000",
Timestamp: "1680000000",
ExtraData: "0x11111111",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x22)),
Transactions: []string{
hexutil.Encode(fillByteSlice(10, 0x33)),
hexutil.Encode(fillByteSlice(10, 0x44)),
},
}
result, err := payload.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, result.ParentHash, fillByteSlice(common.HashLength, 0xaa))
require.DeepEqual(t, result.FeeRecipient, fillByteSlice(20, 0xbb))
require.DeepEqual(t, result.StateRoot, fillByteSlice(32, 0xcc))
}
// TestExecutionPayloadHeaderFromConsensus_HappyPath checks the
// ExecutionPayloadHeaderFromConsensus function under normal conditions.
func TestExecutionPayloadHeaderFromConsensus_HappyPath(t *testing.T) {
consensusHeader := &enginev1.ExecutionPayloadHeader{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 9999,
GasLimit: 5000000,
GasUsed: 2500000,
Timestamp: 1111111111,
ExtraData: fillByteSlice(4, 0x12),
BaseFeePerGas: fillByteSlice(32, 0x34),
BlockHash: fillByteSlice(common.HashLength, 0x56),
TransactionsRoot: fillByteSlice(32, 0x78),
}
result, err := ExecutionPayloadHeaderFromConsensus(consensusHeader)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, hexutil.Encode(consensusHeader.ParentHash), result.ParentHash)
require.Equal(t, fmt.Sprintf("%d", consensusHeader.BlockNumber), result.BlockNumber)
}
// TestExecutionPayloadHeader_ToConsensus_HappyPath checks the
// (*ExecutionPayloadHeader).ToConsensus function under normal conditions.
func TestExecutionPayloadHeader_ToConsensus_HappyPath(t *testing.T) {
header := &ExecutionPayloadHeader{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "9999",
GasLimit: "5000000",
GasUsed: "2500000",
Timestamp: "1111111111",
ExtraData: "0x1234abcd",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x56)),
TransactionsRoot: hexutil.Encode(fillByteSlice(32, 0x78)),
}
result, err := header.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), header.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), header.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(result.StateRoot), header.StateRoot)
}
// TestExecutionPayloadCapellaFromConsensus_HappyPath checks the
// ExecutionPayloadCapellaFromConsensus function under normal conditions.
func TestExecutionPayloadCapellaFromConsensus_HappyPath(t *testing.T) {
capellaPayload := &enginev1.ExecutionPayloadCapella{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 123,
GasLimit: 9876543,
GasUsed: 1234567,
Timestamp: 5555555,
ExtraData: fillByteSlice(6, 0x11),
BaseFeePerGas: fillByteSlice(32, 0x22),
BlockHash: fillByteSlice(common.HashLength, 0x33),
Transactions: [][]byte{
fillByteSlice(5, 0x44),
},
Withdrawals: []*enginev1.Withdrawal{
{
Index: 1,
ValidatorIndex: 2,
Address: fillByteSlice(20, 0xaa),
Amount: 100,
},
},
}
result, err := ExecutionPayloadCapellaFromConsensus(capellaPayload)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, hexutil.Encode(capellaPayload.ParentHash), result.ParentHash)
require.Equal(t, len(capellaPayload.Transactions), len(result.Transactions))
require.Equal(t, len(capellaPayload.Withdrawals), len(result.Withdrawals))
}
// TestExecutionPayloadCapella_ToConsensus_HappyPath checks the
// (*ExecutionPayloadCapella).ToConsensus function under normal conditions.
func TestExecutionPayloadCapella_ToConsensus_HappyPath(t *testing.T) {
capella := &ExecutionPayloadCapella{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "123",
GasLimit: "9876543",
GasUsed: "1234567",
Timestamp: "5555555",
ExtraData: hexutil.Encode(fillByteSlice(6, 0x11)),
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x33)),
Transactions: []string{
hexutil.Encode(fillByteSlice(5, 0x44)),
},
Withdrawals: []*Withdrawal{
{
WithdrawalIndex: "1",
ValidatorIndex: "2",
ExecutionAddress: hexutil.Encode(fillByteSlice(20, 0xaa)),
Amount: "100",
},
},
}
result, err := capella.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), capella.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), capella.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(result.StateRoot), capella.StateRoot)
}
// TestExecutionPayloadDenebFromConsensus_HappyPath checks the
// ExecutionPayloadDenebFromConsensus function under normal conditions.
func TestExecutionPayloadDenebFromConsensus_HappyPath(t *testing.T) {
denebPayload := &enginev1.ExecutionPayloadDeneb{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 999,
GasLimit: 2222222,
GasUsed: 1111111,
Timestamp: 666666,
ExtraData: fillByteSlice(6, 0x11),
BaseFeePerGas: fillByteSlice(32, 0x22),
BlockHash: fillByteSlice(common.HashLength, 0x33),
Transactions: [][]byte{
fillByteSlice(5, 0x44),
},
Withdrawals: []*enginev1.Withdrawal{
{
Index: 1,
ValidatorIndex: 2,
Address: fillByteSlice(20, 0xaa),
Amount: 100,
},
},
BlobGasUsed: 1234,
ExcessBlobGas: 5678,
}
result, err := ExecutionPayloadDenebFromConsensus(denebPayload)
require.NoError(t, err)
require.Equal(t, hexutil.Encode(denebPayload.ParentHash), result.ParentHash)
require.Equal(t, len(denebPayload.Transactions), len(result.Transactions))
require.Equal(t, len(denebPayload.Withdrawals), len(result.Withdrawals))
require.Equal(t, "1234", result.BlobGasUsed)
require.Equal(t, fmt.Sprintf("%d", denebPayload.BlockNumber), result.BlockNumber)
}
// TestExecutionPayloadDeneb_ToConsensus_HappyPath checks the
// (*ExecutionPayloadDeneb).ToConsensus function under normal conditions.
func TestExecutionPayloadDeneb_ToConsensus_HappyPath(t *testing.T) {
deneb := &ExecutionPayloadDeneb{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "999",
GasLimit: "2222222",
GasUsed: "1111111",
Timestamp: "666666",
ExtraData: hexutil.Encode(fillByteSlice(6, 0x11)),
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x33)),
Transactions: []string{
hexutil.Encode(fillByteSlice(5, 0x44)),
},
Withdrawals: []*Withdrawal{
{
WithdrawalIndex: "1",
ValidatorIndex: "2",
ExecutionAddress: hexutil.Encode(fillByteSlice(20, 0xaa)),
Amount: "100",
},
},
BlobGasUsed: "1234",
ExcessBlobGas: "5678",
}
result, err := deneb.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), deneb.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), deneb.FeeRecipient)
require.Equal(t, result.BlockNumber, uint64(999))
}
func TestExecutionPayloadHeaderCapellaFromConsensus_HappyPath(t *testing.T) {
capellaHeader := &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 555,
GasLimit: 1111111,
GasUsed: 222222,
Timestamp: 3333333333,
ExtraData: fillByteSlice(4, 0x12),
BaseFeePerGas: fillByteSlice(32, 0x34),
BlockHash: fillByteSlice(common.HashLength, 0x56),
TransactionsRoot: fillByteSlice(32, 0x78),
WithdrawalsRoot: fillByteSlice(32, 0x99),
}
result, err := ExecutionPayloadHeaderCapellaFromConsensus(capellaHeader)
require.NoError(t, err)
require.Equal(t, hexutil.Encode(capellaHeader.ParentHash), result.ParentHash)
require.DeepEqual(t, hexutil.Encode(capellaHeader.WithdrawalsRoot), result.WithdrawalsRoot)
}
func TestExecutionPayloadHeaderCapella_ToConsensus_HappyPath(t *testing.T) {
header := &ExecutionPayloadHeaderCapella{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "555",
GasLimit: "1111111",
GasUsed: "222222",
Timestamp: "3333333333",
ExtraData: "0x1234abcd",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x56)),
TransactionsRoot: hexutil.Encode(fillByteSlice(32, 0x78)),
WithdrawalsRoot: hexutil.Encode(fillByteSlice(32, 0x99)),
}
result, err := header.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), header.ParentHash)
require.DeepEqual(t, hexutil.Encode(result.FeeRecipient), header.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(result.StateRoot), header.StateRoot)
require.DeepEqual(t, hexutil.Encode(result.ReceiptsRoot), header.ReceiptsRoot)
require.DeepEqual(t, hexutil.Encode(result.WithdrawalsRoot), header.WithdrawalsRoot)
}
func TestExecutionPayloadHeaderDenebFromConsensus_HappyPath(t *testing.T) {
denebHeader := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BlockNumber: 999,
GasLimit: 5000000,
GasUsed: 2500000,
Timestamp: 4444444444,
ExtraData: fillByteSlice(4, 0x12),
BaseFeePerGas: fillByteSlice(32, 0x34),
BlockHash: fillByteSlice(common.HashLength, 0x56),
TransactionsRoot: fillByteSlice(32, 0x78),
WithdrawalsRoot: fillByteSlice(32, 0x99),
BlobGasUsed: 1234,
ExcessBlobGas: 5678,
}
result, err := ExecutionPayloadHeaderDenebFromConsensus(denebHeader)
require.NoError(t, err)
require.Equal(t, hexutil.Encode(denebHeader.ParentHash), result.ParentHash)
require.DeepEqual(t, hexutil.Encode(denebHeader.FeeRecipient), result.FeeRecipient)
require.DeepEqual(t, hexutil.Encode(denebHeader.StateRoot), result.StateRoot)
require.DeepEqual(t, fmt.Sprintf("%d", denebHeader.BlobGasUsed), result.BlobGasUsed)
}
func TestExecutionPayloadHeaderDeneb_ToConsensus_HappyPath(t *testing.T) {
header := &ExecutionPayloadHeaderDeneb{
ParentHash: hexutil.Encode(fillByteSlice(common.HashLength, 0xaa)),
FeeRecipient: hexutil.Encode(fillByteSlice(20, 0xbb)),
StateRoot: hexutil.Encode(fillByteSlice(32, 0xcc)),
ReceiptsRoot: hexutil.Encode(fillByteSlice(32, 0xdd)),
LogsBloom: hexutil.Encode(fillByteSlice(256, 0xee)),
PrevRandao: hexutil.Encode(fillByteSlice(32, 0xff)),
BlockNumber: "999",
GasLimit: "5000000",
GasUsed: "2500000",
Timestamp: "4444444444",
ExtraData: "0x1234abcd",
BaseFeePerGas: "1234",
BlockHash: hexutil.Encode(fillByteSlice(common.HashLength, 0x56)),
TransactionsRoot: hexutil.Encode(fillByteSlice(32, 0x78)),
WithdrawalsRoot: hexutil.Encode(fillByteSlice(32, 0x99)),
BlobGasUsed: "1234",
ExcessBlobGas: "5678",
}
result, err := header.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, hexutil.Encode(result.ParentHash), header.ParentHash)
require.DeepEqual(t, result.BlobGasUsed, uint64(1234))
require.DeepEqual(t, result.ExcessBlobGas, uint64(5678))
require.DeepEqual(t, result.BlockNumber, uint64(999))
}
func TestWithdrawalRequestsFromConsensus_HappyPath(t *testing.T) {
consensusRequests := []*enginev1.WithdrawalRequest{
{
SourceAddress: fillByteSlice(20, 0xbb),
ValidatorPubkey: fillByteSlice(48, 0xbb),
Amount: 12345,
},
{
SourceAddress: fillByteSlice(20, 0xcc),
ValidatorPubkey: fillByteSlice(48, 0xcc),
Amount: 54321,
},
}
result := WithdrawalRequestsFromConsensus(consensusRequests)
require.DeepEqual(t, len(result), len(consensusRequests))
require.DeepEqual(t, result[0].Amount, fmt.Sprintf("%d", consensusRequests[0].Amount))
}
func TestWithdrawalRequestFromConsensus_HappyPath(t *testing.T) {
req := &enginev1.WithdrawalRequest{
SourceAddress: fillByteSlice(20, 0xbb),
ValidatorPubkey: fillByteSlice(48, 0xbb),
Amount: 42,
}
result := WithdrawalRequestFromConsensus(req)
require.NotNil(t, result)
require.DeepEqual(t, result.SourceAddress, hexutil.Encode(fillByteSlice(20, 0xbb)))
}
func TestWithdrawalRequest_ToConsensus_HappyPath(t *testing.T) {
withdrawalReq := &WithdrawalRequest{
SourceAddress: hexutil.Encode(fillByteSlice(20, 111)),
ValidatorPubkey: hexutil.Encode(fillByteSlice(48, 123)),
Amount: "12345",
}
result, err := withdrawalReq.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, result.Amount, uint64(12345))
}
func TestConsolidationRequestsFromConsensus_HappyPath(t *testing.T) {
consensusRequests := []*enginev1.ConsolidationRequest{
{
SourceAddress: fillByteSlice(20, 111),
SourcePubkey: fillByteSlice(48, 112),
TargetPubkey: fillByteSlice(48, 113),
},
}
result := ConsolidationRequestsFromConsensus(consensusRequests)
require.DeepEqual(t, len(result), len(consensusRequests))
require.DeepEqual(t, result[0].SourceAddress, "0x6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f6f")
}
func TestDepositRequestsFromConsensus_HappyPath(t *testing.T) {
ds := []*enginev1.DepositRequest{
{
Pubkey: fillByteSlice(48, 0xbb),
WithdrawalCredentials: fillByteSlice(32, 0xdd),
Amount: 98765,
Signature: fillByteSlice(96, 0xff),
Index: 111,
},
}
result := DepositRequestsFromConsensus(ds)
require.DeepEqual(t, len(result), len(ds))
require.DeepEqual(t, result[0].Amount, "98765")
}
func TestDepositRequest_ToConsensus_HappyPath(t *testing.T) {
req := &DepositRequest{
Pubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
WithdrawalCredentials: hexutil.Encode(fillByteSlice(32, 0xaa)),
Amount: "123",
Signature: hexutil.Encode(fillByteSlice(96, 0xdd)),
Index: "456",
}
result, err := req.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, result.Amount, uint64(123))
require.DeepEqual(t, result.Signature, fillByteSlice(96, 0xdd))
}
func TestExecutionRequestsFromConsensus_HappyPath(t *testing.T) {
er := &enginev1.ExecutionRequests{
Deposits: []*enginev1.DepositRequest{
{
Pubkey: fillByteSlice(48, 0xba),
WithdrawalCredentials: fillByteSlice(32, 0xaa),
Amount: 33,
Signature: fillByteSlice(96, 0xff),
Index: 44,
},
},
Withdrawals: []*enginev1.WithdrawalRequest{
{
SourceAddress: fillByteSlice(20, 0xaa),
ValidatorPubkey: fillByteSlice(48, 0xba),
Amount: 555,
},
},
Consolidations: []*enginev1.ConsolidationRequest{
{
SourceAddress: fillByteSlice(20, 0xdd),
SourcePubkey: fillByteSlice(48, 0xdd),
TargetPubkey: fillByteSlice(48, 0xcc),
},
},
}
result := ExecutionRequestsFromConsensus(er)
require.NotNil(t, result)
require.Equal(t, 1, len(result.Deposits))
require.Equal(t, "33", result.Deposits[0].Amount)
require.Equal(t, 1, len(result.Withdrawals))
require.Equal(t, "555", result.Withdrawals[0].Amount)
require.Equal(t, 1, len(result.Consolidations))
require.Equal(t, "0xcccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc", result.Consolidations[0].TargetPubkey)
}
func TestExecutionRequests_ToConsensus_HappyPath(t *testing.T) {
execReq := &ExecutionRequests{
Deposits: []*DepositRequest{
{
Pubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
WithdrawalCredentials: hexutil.Encode(fillByteSlice(32, 0xaa)),
Amount: "33",
Signature: hexutil.Encode(fillByteSlice(96, 0xff)),
Index: "44",
},
},
Withdrawals: []*WithdrawalRequest{
{
SourceAddress: hexutil.Encode(fillByteSlice(20, 0xdd)),
ValidatorPubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
Amount: "555",
},
},
Consolidations: []*ConsolidationRequest{
{
SourceAddress: hexutil.Encode(fillByteSlice(20, 0xcc)),
SourcePubkey: hexutil.Encode(fillByteSlice(48, 0xbb)),
TargetPubkey: hexutil.Encode(fillByteSlice(48, 0xcc)),
},
},
}
result, err := execReq.ToConsensus()
require.NoError(t, err)
require.Equal(t, 1, len(result.Deposits))
require.Equal(t, uint64(33), result.Deposits[0].Amount)
require.Equal(t, 1, len(result.Withdrawals))
require.Equal(t, uint64(555), result.Withdrawals[0].Amount)
require.Equal(t, 1, len(result.Consolidations))
require.DeepEqual(t, fillByteSlice(48, 0xcc), result.Consolidations[0].TargetPubkey)
}

View File

@@ -4,11 +4,11 @@ import (
"encoding/json"
"fmt"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
func LightClientUpdateFromConsensus(update interfaces.LightClientUpdate) (*LightClientUpdate, error) {

View File

@@ -4,9 +4,9 @@ import (
"errors"
"fmt"
beaconState "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/ethereum/go-ethereum/common/hexutil"
beaconState "github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
var errPayloadHeaderNotFound = errors.New("expected payload header not found")
@@ -26,7 +26,10 @@ func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error)
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -74,7 +77,7 @@ func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error)
}
return &BeaconState{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -113,7 +116,10 @@ func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAlt
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -177,7 +183,7 @@ func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAlt
}
return &BeaconStateAltair{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -219,7 +225,10 @@ func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconState
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -295,7 +304,7 @@ func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconState
}
return &BeaconStateBellatrix{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -338,7 +347,10 @@ func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCa
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -430,7 +442,7 @@ func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCa
}
return &BeaconStateCapella{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -476,7 +488,10 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -568,7 +583,7 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
}
return &BeaconStateDeneb{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -614,7 +629,10 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -742,7 +760,7 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
}
return &BeaconStateElectra{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -797,7 +815,10 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr := st.HistoricalRoots()
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -923,16 +944,9 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
if err != nil {
return nil, err
}
srcLookahead, err := st.ProposerLookahead()
if err != nil {
return nil, err
}
lookahead := make([]string, len(srcLookahead))
for i, v := range srcLookahead {
lookahead[i] = fmt.Sprintf("%d", uint64(v))
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -969,6 +983,5 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
PendingDeposits: PendingDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
ProposerLookahead: lookahead,
}, nil
}
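The hunks above repeat the same two changes in every fork's converter: HistoricalRoots() switches between an accessor that returns an error and one that does not, and GenesisTime is formatted either from a time.Time via Unix() or directly as a uint64. A minimal, self-contained sketch of the error-returning variant of the roots conversion, with historicalRoots standing in for st.HistoricalRoots():

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/common/hexutil"
)

// historicalRoots stands in for the state accessor; the interface assumed
// here returns the roots together with an error.
func historicalRoots() ([][]byte, error) {
    return [][]byte{make([]byte, 32)}, nil
}

func main() {
    srcHr, err := historicalRoots()
    if err != nil {
        panic(err)
    }
    // Hex-encode each 32-byte root for the JSON-facing struct, as the
    // converters above do.
    hr := make([]string, len(srcHr))
    for i, r := range srcHr {
        hr[i] = hexutil.Encode(r)
    }
    fmt.Println(hr)
}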

View File

@@ -3,10 +3,8 @@ package structs
import (
"testing"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common/hexutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestDepositSnapshotFromConsensus(t *testing.T) {
@@ -104,266 +102,12 @@ func TestProposerSlashing_ToConsensus(t *testing.T) {
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestProposerSlashing_FromConsensus(t *testing.T) {
input := []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 2,
ParentRoot: []byte{3},
StateRoot: []byte{4},
BodyRoot: []byte{5},
},
Signature: []byte{6},
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 7,
ProposerIndex: 8,
ParentRoot: []byte{9},
StateRoot: []byte{10},
BodyRoot: []byte{11},
},
Signature: []byte{12},
},
},
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 13,
ProposerIndex: 14,
ParentRoot: []byte{15},
StateRoot: []byte{16},
BodyRoot: []byte{17},
},
Signature: []byte{18},
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 19,
ProposerIndex: 20,
ParentRoot: []byte{21},
StateRoot: []byte{22},
BodyRoot: []byte{23},
},
Signature: []byte{24},
},
},
}
expectedResult := []*ProposerSlashing{
{
SignedHeader1: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "1",
ProposerIndex: "2",
ParentRoot: hexutil.Encode([]byte{3}),
StateRoot: hexutil.Encode([]byte{4}),
BodyRoot: hexutil.Encode([]byte{5}),
},
Signature: hexutil.Encode([]byte{6}),
},
SignedHeader2: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "7",
ProposerIndex: "8",
ParentRoot: hexutil.Encode([]byte{9}),
StateRoot: hexutil.Encode([]byte{10}),
BodyRoot: hexutil.Encode([]byte{11}),
},
Signature: hexutil.Encode([]byte{12}),
},
},
{
SignedHeader1: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "13",
ProposerIndex: "14",
ParentRoot: hexutil.Encode([]byte{15}),
StateRoot: hexutil.Encode([]byte{16}),
BodyRoot: hexutil.Encode([]byte{17}),
},
Signature: hexutil.Encode([]byte{18}),
},
SignedHeader2: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "19",
ProposerIndex: "20",
ParentRoot: hexutil.Encode([]byte{21}),
StateRoot: hexutil.Encode([]byte{22}),
BodyRoot: hexutil.Encode([]byte{23}),
},
Signature: hexutil.Encode([]byte{24}),
},
},
}
result := ProposerSlashingsFromConsensus(input)
assert.DeepEqual(t, expectedResult, result)
}
func TestAttesterSlashing_ToConsensus(t *testing.T) {
a := &AttesterSlashing{Attestation1: nil, Attestation2: nil}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAttesterSlashing_FromConsensus(t *testing.T) {
input := []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1, 2},
Data: &eth.AttestationData{
Slot: 3,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{5},
Source: &eth.Checkpoint{
Epoch: 6,
Root: []byte{7},
},
Target: &eth.Checkpoint{
Epoch: 8,
Root: []byte{9},
},
},
Signature: []byte{10},
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{11, 12},
Data: &eth.AttestationData{
Slot: 13,
CommitteeIndex: 14,
BeaconBlockRoot: []byte{15},
Source: &eth.Checkpoint{
Epoch: 16,
Root: []byte{17},
},
Target: &eth.Checkpoint{
Epoch: 18,
Root: []byte{19},
},
},
Signature: []byte{20},
},
},
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{21, 22},
Data: &eth.AttestationData{
Slot: 23,
CommitteeIndex: 24,
BeaconBlockRoot: []byte{25},
Source: &eth.Checkpoint{
Epoch: 26,
Root: []byte{27},
},
Target: &eth.Checkpoint{
Epoch: 28,
Root: []byte{29},
},
},
Signature: []byte{30},
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{31, 32},
Data: &eth.AttestationData{
Slot: 33,
CommitteeIndex: 34,
BeaconBlockRoot: []byte{35},
Source: &eth.Checkpoint{
Epoch: 36,
Root: []byte{37},
},
Target: &eth.Checkpoint{
Epoch: 38,
Root: []byte{39},
},
},
Signature: []byte{40},
},
},
}
expectedResult := []*AttesterSlashing{
{
Attestation1: &IndexedAttestation{
AttestingIndices: []string{"1", "2"},
Data: &AttestationData{
Slot: "3",
CommitteeIndex: "4",
BeaconBlockRoot: hexutil.Encode([]byte{5}),
Source: &Checkpoint{
Epoch: "6",
Root: hexutil.Encode([]byte{7}),
},
Target: &Checkpoint{
Epoch: "8",
Root: hexutil.Encode([]byte{9}),
},
},
Signature: hexutil.Encode([]byte{10}),
},
Attestation2: &IndexedAttestation{
AttestingIndices: []string{"11", "12"},
Data: &AttestationData{
Slot: "13",
CommitteeIndex: "14",
BeaconBlockRoot: hexutil.Encode([]byte{15}),
Source: &Checkpoint{
Epoch: "16",
Root: hexutil.Encode([]byte{17}),
},
Target: &Checkpoint{
Epoch: "18",
Root: hexutil.Encode([]byte{19}),
},
},
Signature: hexutil.Encode([]byte{20}),
},
},
{
Attestation1: &IndexedAttestation{
AttestingIndices: []string{"21", "22"},
Data: &AttestationData{
Slot: "23",
CommitteeIndex: "24",
BeaconBlockRoot: hexutil.Encode([]byte{25}),
Source: &Checkpoint{
Epoch: "26",
Root: hexutil.Encode([]byte{27}),
},
Target: &Checkpoint{
Epoch: "28",
Root: hexutil.Encode([]byte{29}),
},
},
Signature: hexutil.Encode([]byte{30}),
},
Attestation2: &IndexedAttestation{
AttestingIndices: []string{"31", "32"},
Data: &AttestationData{
Slot: "33",
CommitteeIndex: "34",
BeaconBlockRoot: hexutil.Encode([]byte{35}),
Source: &Checkpoint{
Epoch: "36",
Root: hexutil.Encode([]byte{37}),
},
Target: &Checkpoint{
Epoch: "38",
Root: hexutil.Encode([]byte{39}),
},
},
Signature: hexutil.Encode([]byte{40}),
},
},
}
result := AttesterSlashingsFromConsensus(input)
assert.DeepEqual(t, expectedResult, result)
}
func TestIndexedAttestation_ToConsensus(t *testing.T) {
a := &IndexedAttestation{
AttestingIndices: []string{"1"},

View File

@@ -78,8 +78,8 @@ type GetBlockHeaderResponse struct {
}
type GetValidatorsRequest struct {
Ids []string `json:"ids,omitempty"`
Statuses []string `json:"statuses,omitempty"`
Ids []string `json:"ids"`
Statuses []string `json:"statuses"`
}
type GetValidatorsResponse struct {
@@ -100,12 +100,6 @@ type GetValidatorBalancesResponse struct {
Data []*ValidatorBalance `json:"data"`
}
type GetValidatorIdentitiesResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*ValidatorIdentity `json:"data"`
}
type ValidatorContainer struct {
Index string `json:"index"`
Balance string `json:"balance"`
@@ -118,12 +112,6 @@ type ValidatorBalance struct {
Balance string `json:"balance"`
}
type ValidatorIdentity struct {
Index string `json:"index"`
Pubkey string `json:"pubkey"`
ActivationEpoch string `json:"activation_epoch"`
}
type GetBlockResponse struct {
Data *SignedBlock `json:"data"`
}
@@ -263,13 +251,6 @@ type ChainHead struct {
OptimisticStatus bool `json:"optimistic_status"`
}
type GetPendingConsolidationsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*PendingConsolidation `json:"data"`
}
type GetPendingDepositsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
@@ -283,10 +264,3 @@ type GetPendingPartialWithdrawalsResponse struct {
Finalized bool `json:"finalized"`
Data []*PendingPartialWithdrawal `json:"data"`
}
type GetProposerLookaheadResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []string `json:"data"` // validator indexes
}
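The GetValidatorsRequest hunk near the top of this file toggles omitempty on the ids and statuses tags; the difference only shows up on the wire. A minimal, self-contained sketch of what each form marshals to for an empty request:

package main

import (
    "encoding/json"
    "fmt"
)

type withOmitEmpty struct {
    Ids      []string `json:"ids,omitempty"`
    Statuses []string `json:"statuses,omitempty"`
}

type withoutOmitEmpty struct {
    Ids      []string `json:"ids"`
    Statuses []string `json:"statuses"`
}

func main() {
    a, _ := json.Marshal(withOmitEmpty{})
    b, _ := json.Marshal(withoutOmitEmpty{})
    fmt.Println(string(a)) // {}
    fmt.Println(string(b)) // {"ids":null,"statuses":null}
}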

View File

@@ -54,5 +54,4 @@ type ForkChoiceNodeExtraData struct {
Balance string `json:"balance"`
ExecutionOptimistic bool `json:"execution_optimistic"`
TimeStamp string `json:"timestamp"`
Target string `json:"target"`
}

View File

@@ -20,18 +20,6 @@ type BlockEvent struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
}
type BlockGossipEvent struct {
Slot string `json:"slot"`
Block string `json:"block"`
}
type DataColumnGossipEvent struct {
Slot string `json:"slot"`
Index string `json:"index"`
BlockRoot string `json:"block_root"`
KzgCommitments []string `json:"kzg_commitments"`
}
type AggregatedAttEventSource struct {
Aggregate *Attestation `json:"aggregate"`
}

View File

@@ -27,22 +27,14 @@ type Identity struct {
type Metadata struct {
SeqNumber string `json:"seq_number"`
Attnets string `json:"attnets"`
Syncnets string `json:"syncnets,omitempty"`
Cgc string `json:"custody_group_count,omitempty"`
}
type GetPeerResponse struct {
Data *Peer `json:"data"`
}
// Added Meta to align with beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
type Meta struct {
Count int `json:"count"`
}
type GetPeersResponse struct {
Data []*Peer `json:"data"`
Meta Meta `json:"meta"`
}
type Peer struct {

View File

@@ -3,7 +3,7 @@ package structs
import (
"encoding/json"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
type AggregateAttestationResponse struct {

View File

@@ -244,6 +244,26 @@ type Withdrawal struct {
Amount string `json:"amount"`
}
type DepositRequest struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Index string `json:"index"`
}
type WithdrawalRequest struct {
SourceAddress string `json:"source_address"`
ValidatorPubkey string `json:"validator_pubkey"`
Amount string `json:"amount"`
}
type ConsolidationRequest struct {
SourceAddress string `json:"source_address"`
SourcePubkey string `json:"source_pubkey"`
TargetPubkey string `json:"target_pubkey"`
}
type PendingDeposit struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
@@ -253,7 +273,7 @@ type PendingDeposit struct {
}
type PendingPartialWithdrawal struct {
Index string `json:"validator_index"`
Index string `json:"index"`
Amount string `json:"amount"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
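The execution-request structs added above carry everything as strings (hex for byte fields, decimal for numbers), matching the converters earlier in the diff. A minimal, self-contained sketch of the resulting JSON for a DepositRequest; the field values are made up for illustration:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"

    "github.com/ethereum/go-ethereum/common/hexutil"
)

type DepositRequest struct {
    Pubkey                string `json:"pubkey"`
    WithdrawalCredentials string `json:"withdrawal_credentials"`
    Amount                string `json:"amount"`
    Signature             string `json:"signature"`
    Index                 string `json:"index"`
}

func main() {
    req := DepositRequest{
        Pubkey:                hexutil.Encode(bytes.Repeat([]byte{0xbb}, 48)),
        WithdrawalCredentials: hexutil.Encode(bytes.Repeat([]byte{0xaa}, 32)),
        Amount:                "32000000000",
        Signature:             hexutil.Encode(bytes.Repeat([]byte{0xdd}, 96)),
        Index:                 "456",
    }
    out, err := json.MarshalIndent(req, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}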

View File

@@ -219,5 +219,4 @@ type BeaconStateFulu struct {
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
ProposerLookahead []string `json:"proposer_lookahead"`
}

View File

@@ -8,7 +8,7 @@ go_library(
"multilock.go",
"scatter.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/async",
importpath = "github.com/prysmaticlabs/prysm/v5/async",
visibility = ["//visibility:public"],
deps = ["@com_github_sirupsen_logrus//:go_default_library"],
)

View File

@@ -3,7 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["abool.go"],
importpath = "github.com/OffchainLabs/prysm/v6/async/abool",
importpath = "github.com/prysmaticlabs/prysm/v5/async/abool",
visibility = ["//visibility:public"],
)

View File

@@ -6,8 +6,8 @@ import (
"sync"
"testing"
"github.com/OffchainLabs/prysm/v6/async"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/testing/require"
log "github.com/sirupsen/logrus"
)

View File

@@ -7,15 +7,15 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestDebounce_NoEvents(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
wg := &sync.WaitGroup{}
@@ -39,7 +39,7 @@ func TestDebounce_NoEvents(t *testing.T) {
func TestDebounce_CtxClosing(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
wg := &sync.WaitGroup{}
@@ -75,7 +75,7 @@ func TestDebounce_CtxClosing(t *testing.T) {
func TestDebounce_SingleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
@@ -93,7 +93,7 @@ func TestDebounce_SingleHandlerInvocation(t *testing.T) {
func TestDebounce_MultipleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {

View File

@@ -7,7 +7,7 @@ go_library(
"interface.go",
"subscription.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/async/event",
importpath = "github.com/prysmaticlabs/prysm/v5/async/event",
visibility = ["//visibility:public"],
deps = [
"//time/mclock:go_default_library",

View File

@@ -20,7 +20,7 @@ import (
"fmt"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/prysmaticlabs/prysm/v5/async/event"
)
// This example demonstrates how SubscriptionScope can be used to control the lifetime of

View File

@@ -19,7 +19,7 @@ package event_test
import (
"fmt"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/prysmaticlabs/prysm/v5/async/event"
)
func ExampleNewSubscription() {

View File

@@ -21,7 +21,7 @@ import (
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/time/mclock"
"github.com/prysmaticlabs/prysm/v5/time/mclock"
)
// waitQuotient is divided against the max backoff time, in order to have N requests based on the full
@@ -154,7 +154,7 @@ retry:
continue retry
}
if sub == nil {
panic("event: ResubscribeFunc returned nil subscription and no error") // lint:nopanic -- This should never happen.
panic("event: ResubscribeFunc returned nil subscription and no error")
}
return sub
case <-s.unsub:

View File

@@ -23,7 +23,7 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
var errInts = errors.New("error in subscribeInts")

View File

@@ -19,10 +19,10 @@ func RunEvery(ctx context.Context, period time.Duration, f func()) {
for {
select {
case <-ticker.C:
log.WithField("function", funcName).Trace("Running")
log.WithField("function", funcName).Trace("running")
f()
case <-ctx.Done():
log.WithField("function", funcName).Debug("Context is closed, exiting")
log.WithField("function", funcName).Debug("context is closed, exiting")
ticker.Stop()
return
}

View File

@@ -6,11 +6,11 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async"
"github.com/prysmaticlabs/prysm/v5/async"
)
func TestEveryRuns(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
i := int32(0)
async.RunEvery(ctx, 100*time.Millisecond, func() {

View File

@@ -125,7 +125,7 @@ func getChan(key string) chan byte {
// Return a new string slice with unique elements.
func unique(arr []string) []string {
if len(arr) <= 1 {
if arr == nil || len(arr) <= 1 {
return arr
}

View File

@@ -5,9 +5,9 @@ import (
"sync"
"testing"
"github.com/OffchainLabs/prysm/v6/async"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestDouble(t *testing.T) {

View File

@@ -4,6 +4,6 @@ This is the main project folder for the beacon chain implementation of Ethereum
You can also read our main [README](https://github.com/prysmaticlabs/prysm/blob/master/README.md) and join our active chat room on Discord.
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysm)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
Also, read the official beacon chain [specification](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md); this design spec serves as a source of truth for the beacon chain implementation we follow at Prysmatic Labs.

Some files were not shown because too many files have changed in this diff.