Compare commits

...

17 Commits

Author SHA1 Message Date
james-prysm
9e448ab3ba Merge branch 'develop' into simplify-parse-beacon-block 2026-02-04 14:10:00 -06:00
james-prysm
17f8e67646 addressing satyajit comment and adding more tests 2026-02-04 13:23:50 -06:00
Justin Traglia
fab687d96d Improve ethspecify integration (#16304)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

* Move the ethspecify config from `/specrefs/.ethspecify` to
`/.ethspecify`.
* This allows developers to use inline specrefs (e.g., spec functions in
godoc comments).
* To do this, simply add a spec tag and run `ethspecify` to populate it.
* Clean up specref exceptions; organize by upgrade & put items in the
correct section.
* Update a few godoc comments to use the new inline specref feature.
* Update the check-specrefs GitHub action so that it enforces up-to-date
godocs.
* Standardize specref naming, requiring a `#fork` tag for everything.
* Add missing specrefs for items which haven't been implemented yet.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-02-04 18:44:01 +00:00
james-prysm
cf94ccbf72 node fallback cleanup (#16316)
**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

Follow-up to https://github.com/OffchainLabs/prysm/pull/16215: this PR
improves logging, fixes stuttering in package naming, adds additional
unit tests, and deduplicates fallback node code.

**Which issues(s) does this PR fix?**

Fixes a potential race where quickly reconnecting to the same host could
reuse a stale connection.
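
A minimal sketch of how a consumer could guard against that race, assuming
the `ConnectionCounter()` behavior documented on the gRPC connection provider
later in this diff (the `connProvider` interface and `connCache` type below
are illustrative only, not code from this PR):

```go
package fallbackexample

import "google.golang.org/grpc"

// connProvider is the subset of the provider behavior this sketch relies on.
type connProvider interface {
	CurrentConn() *grpc.ClientConn
	ConnectionCounter() uint64
}

// connCache keys the cached connection on the connection counter rather than
// the host string, so a fast host0 -> host1 -> host0 switch is still detected
// and a stale, already-closed connection is never reused.
type connCache struct {
	lastCounter uint64
	conn        *grpc.ClientConn
}

func (c *connCache) current(p connProvider) *grpc.ClientConn {
	if counter := p.ConnectionCounter(); c.conn == nil || counter != c.lastCounter {
		c.conn = p.CurrentConn() // the provider swapped connections underneath us
		c.lastCounter = counter
	}
	return c.conn
}
```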

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-04 15:59:42 +00:00
james-prysm
b64c7d57a1 Merge branch 'develop' into simplify-parse-beacon-block 2026-02-03 13:49:05 -08:00
james-prysm
e5e0bf7426 changelog 2026-02-03 15:47:37 -06:00
Aarsh Shah
75895c1e0b fix: Set Beacon Node Options after reading the config file (#16320)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
This PR ensures that we set the beacon node options AFTER reading the
config file (if one is given to override the defaults).

**Which issues(s) does this PR fix?**
It fixes the issue that Barnabas reported around the
"MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS" override not being
respected (and potentially other issues resulting from setting the
options before reading the config).
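
A minimal sketch of the ordering being fixed, with hypothetical helper names
(`loadChainConfigFile` and `beaconNodeOptions` stand in for the real config
loading and option construction):

```go
package nodeexample

import "fmt"

// Hypothetical stand-ins for the real config loading and option building.
func loadChainConfigFile(path string) error      { return nil }
func beaconNodeOptions() ([]func() error, error) { return nil, nil }

// setupNode reads the config file (if one is provided) BEFORE building the
// beacon node options, so overrides such as
// MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS are already in effect when the
// options are computed.
func setupNode(configFile string) ([]func() error, error) {
	if configFile != "" {
		if err := loadChainConfigFile(configFile); err != nil {
			return nil, fmt.Errorf("load config file: %w", err)
		}
	}
	return beaconNodeOptions() // now sees the overridden values, not the defaults
}
```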

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-03 16:51:01 +00:00
Preston Van Loon
d1b9281677 golangci-lint: Remove test exclusion from formatting (#16318)
**What type of PR is this?**

> Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Follow up to #16311

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-02 17:42:05 +00:00
james-prysm
641d90990d grpc fallback improvements (#16215)

**What type of PR is this?**

## Summary

This PR implements gRPC fallback support for the validator client,
allowing it to automatically switch between multiple beacon node
endpoints when the primary node becomes unavailable or unhealthy.

## Changes

- Added `grpcConnectionProvider` to manage multiple gRPC connections
with circular failover
- Validator automatically detects unhealthy beacon nodes and switches to
the next available endpoint
- Health checks verify both node responsiveness AND sync status before
accepting a node
- Improved logging to only show "Found fully synced beacon node" when an
actual switch occurs (reduces log noise)


I removed the old middleware that uses gRPC's built-in load balancer
because:

- gRPC's pick_first load balancer doesn't provide sync-status-aware
failover
- The validator needs to ensure it connects to a fully synced node, not
just a reachable one

## Test Scenario

### Setup
Deployed a 4-node Kurtosis testnet with local validator connecting to 2
beacon nodes:

```yaml
# kurtosis-grpc-fallback-test.yaml
participants:
  - el_type: nethermind
    cl_type: prysm
    validator_count: 128  # Keeps chain advancing
  - el_type: nethermind
    cl_type: prysm
    validator_count: 64
  - el_type: nethermind
    cl_type: prysm
    validator_count: 64   # Keeps chain advancing
  - el_type: nethermind
    cl_type: prysm
    validator_count: 64   # Keeps chain advancing

network_params:
  fulu_fork_epoch: 0
  seconds_per_slot: 6
```

Local validator started with:
```bash
./validator --beacon-rpc-provider=127.0.0.1:33005,127.0.0.1:33012 ...
```

### Test 1: Primary Failover (cl-1 → cl-2)

1. Stopped cl-1 beacon node
2. Validator detected failure and switched to cl-2

**Logs:**
```
WARN  Beacon node is not responding, switching host currentHost=127.0.0.1:33005 nextHost=127.0.0.1:33012
DEBUG Trying gRPC endpoint newHost=127.0.0.1:33012 previousHost=127.0.0.1:33005
INFO  Failover succeeded: connected to healthy beacon node failedAttempts=[127.0.0.1:33005] newHost=127.0.0.1:33012 previousHost=127.0.0.1:33005
```

**Result:** PASSED - Validator continued submitting attestations on cl-2

### Test 2: Circular Failover (cl-2 → cl-1)

1. Restarted cl-1, stopped cl-2
2. Validator detected failure and switched back to cl-1

**Logs:**
```
WARN  Beacon node is not responding, switching host currentHost=127.0.0.1:33012 nextHost=127.0.0.1:33005
DEBUG Trying gRPC endpoint newHost=127.0.0.1:33005 previousHost=127.0.0.1:33012
INFO  Failover succeeded: connected to healthy beacon node failedAttempts=[127.0.0.1:33012] newHost=127.0.0.1:33005 previousHost=127.0.0.1:33012
```

**Result:** PASSED - Circular fallback works correctly

## Key Log Messages

| Log Level | Message | Source |
|-----------|---------|--------|
| WARN | "Beacon node is not responding, switching host" | `changeHost()` in validator.go |
| INFO | "Switched gRPC endpoint" | `SetHost()` in grpc_connection_provider.go |
| INFO | "Found fully synced beacon node" | `FindHealthyHost()` in validator.go (only on actual switch) |

## Test Plan

- [x] Verify primary failover (cl-1 → cl-2)
- [x] Verify circular failover (cl-2 → cl-1)
- [x] Verify validator continues producing attestations after switch
- [x] Verify "Found fully synced beacon node" only logs on actual switch
(not every health check)

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/pull/7133


**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-02-02 14:51:56 +00:00
terence
d2fc250f34 Run go fmt (#16311)
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-02-02 14:19:15 +00:00
Jun Song
571c6f39aa Add docs for SSZ Query package (#16299)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

Although the godoc and comments in the `encoding/ssz/query` package are
well written, we (@rkapka, @fernantho, @syjn99)
[agreed](https://discord.com/channels/476244492043812875/1387734369527136297/1466075406523174944)
that it would be great to have human-readable documentation.

**Which issues(s) does this PR fix?**

Part of  #15587 & #15598 

**Other notes for review**

This documentation was first drafted by Claude Code and then went
through a few rounds of self-review.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: fernantho <fernantho1@gmail.com>
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2026-02-01 03:39:53 +00:00
james-prysm
7bcbc1f8f4 refactor 2026-01-30 15:35:27 -06:00
Justin Traglia
55fe85c887 Add ability to download nightly tests from a specific night (#16298)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

This PR allows devs to test against a specific run of the nightly
reference test generator.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-29 21:38:13 +00:00
Justin Traglia
31f77567dd Add a README for specrefs (#16302)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

This PR adds a basic README for the specrefs.


**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-29 20:36:29 +00:00
terence
a7fdd11777 gloas: sample PTC per committee (#16293)
This PR updates the `get_ptc` construction to sample the PTC
committee by committee instead of concatenating all beacon committees
into a large slice. There are no functional changes to payload
attestation verification.
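
A rough sketch of the committee-by-committee idea only (this is not the
spec's `get_ptc`; the helper below is purely illustrative):

```go
package ptcexample

// samplePTCPerCommittee illustrates the structural change: take members from
// each beacon committee in turn instead of first concatenating every
// committee into one large slice and then sampling from that.
func samplePTCPerCommittee(committees [][]uint64, ptcSize int) []uint64 {
	if len(committees) == 0 || ptcSize <= 0 {
		return nil
	}
	ptc := make([]uint64, 0, ptcSize)
	perCommittee := ptcSize / len(committees) // assumes an even split, for simplicity
	for _, committee := range committees {
		n := min(perCommittee, len(committee))
		ptc = append(ptc, committee[:n]...)
	}
	return ptc
}
```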
2026-01-29 14:21:54 +00:00
james-prysm
919bd5d6aa Update health endpoint to include sync and optimistic checks (#16294)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

A node that is syncing or in optimistic status isn't fully ready yet. We
don't have an "is ready" endpoint, but I think having the gRPC health
check match
[/eth/v1/node/health](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Node/getHealth)
more closely would be good. This endpoint is only used internally as far
as I can tell.

This is a prerequisite to https://github.com/OffchainLabs/prysm/pull/16215.

Tested via grpcurl against a syncing Hoodi node.
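
As a rough illustration of the readiness semantics above (a hypothetical
helper, not the actual gRPC handler): only a node that is neither syncing nor
optimistic counts as fully healthy, which the Beacon API expresses as 200
versus 206:

```go
package healthexample

import "net/http"

// healthStatus mirrors the /eth/v1/node/health idea: a node that is still
// syncing or whose head is optimistic is up but not fully ready, which the
// Beacon API reports as 206 Partial Content instead of 200 OK.
func healthStatus(syncing, optimistic bool) int {
	if syncing || optimistic {
		return http.StatusPartialContent
	}
	return http.StatusOK
}
```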

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-28 21:13:30 +00:00
fernantho
0476eeda57 SSZ-QL: custom Generic Merkle Proofs building the tree and collecting the hashes in one sweep (#16177)

**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR replaces the previous PR
https://github.com/OffchainLabs/prysm/pull/16121, which built the entire
Merkle tree and generated proofs only after the tree was complete. In
this PR, the Merkle proof is produced by collecting hashes while the
Merkle tree is being built. This approach has proven to be more
efficient than the one in
https://github.com/OffchainLabs/prysm/pull/16121.

- **ProofCollector**: 
- New `ProofCollector` type in `encoding/ssz/query/proof_collector.go`:
Collects sibling hashes and leaves needed for Merkle proofs during
merkleization.
- Multiproof-ready design with `requiredSiblings`/`requiredLeaves` maps
for registering target gindices before merkleization.
- Thread-safe: read-only required maps during merkleization,
mutex-protected writes to `siblings`/`leaves`.
- `AddTarget(gindex)` registers a target leaf and computes all required
sibling gindices along the path to root.
- `toProof()` converts collected data into `fastssz.Proof` structure.
- Parallel execution in `merkleizeVectorBody` for composite elements
with worker pool pattern.
- Optimized container hashing: Generalized
`stateutil.OptimizedValidatorRoots` pattern for any SSZ container type:
- `optimizedContainerRoots`: Parallelized field root computation +
level-by-level vectorized hashing via `VectorizedSha256`.
- `hashContainerHelper`: Worker goroutine for processing container
subsets.
- `containerFieldRoots`: Computes field roots for a single container
using reflection and SszInfo metadata.

- **`Prove(gindex)` method** in `encoding/ssz/query/merkle_proof.go`:
Entry point for generating SSZ Merkle proofs for a given generalized
index.

- **Testing**
- Added `merkle_proof_test.go` and `proof_collector_test.go` to test and
benchmark this feature.

The main outcomes of the optimizations are here:
```
❯ go test ./encoding/ssz/query -run=^$ -bench='Benchmark(OptimizedContainerRoots|OptimizedValidatorRoots|ProofCollectorMerkleize)$' -benchmem
goos: darwin
goarch: arm64
pkg: github.com/OffchainLabs/prysm/v7/encoding/ssz/query
cpu: Apple M2 Pro
BenchmarkOptimizedValidatorRoots-10         3237            361029 ns/op          956858 B/op       6024 allocs/op
BenchmarkOptimizedContainerRoots-10         1138            969002 ns/op         3245223 B/op      11024 allocs/op
BenchmarkProofCollectorMerkleize-10          522           2262066 ns/op         3216000 B/op      19000 allocs/op
PASS
ok      github.com/OffchainLabs/prysm/v7/encoding/ssz/query     4.619s
```
Since `OptimizedValidatorRoots` already implements very effective
optimizations, `OptimizedContainerRoots` mimics them.
In the benchmark we can see that `OptimizedValidatorRoots` remains the
most performant and sets the baseline here:
- `ProofCollectorMerkleize` is **~6.3× slower**, uses **~3.4× more
memory** (B/op), and performs **~3.2× more allocations**.
- `OptimizedContainerRoots` sits in between: it’s **~2.7× slower** than
`OptimizedValidatorRoots` (and **~3.4× higher B/op**, **~1.8× more
allocations**), but it is a clear win over `ProofCollectorMerkleize` for
lists/vectors: **~2.3× faster** with **~1.7× fewer allocations** (and
essentially the same memory footprint).

The main drawback is that `OptimizedContainerRoots` can only be applied
to vector/list subtrees where we don’t need to collect any sibling/leaf
data (i.e., no proof targets within that subtree); integrating it into
the recursive merkleize(...) flow when targets are outside the subtree
is expected to land in a follow-up PR.
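
As a small aside on the gindex arithmetic behind `AddTarget`, here is an
illustrative sketch (not the actual `ProofCollector` code) of how the sibling
generalized indices along the path to the root can be derived:

```go
package proofexample

// requiredSiblingGindices: for a target generalized index, a Merkle proof
// needs the sibling node at every level on the way up to the root, which has
// generalized index 1.
func requiredSiblingGindices(target uint64) []uint64 {
	siblings := make([]uint64, 0)
	for g := target; g > 1; g /= 2 {
		// Flipping the lowest bit of a generalized index gives its sibling;
		// dividing by two moves up to the parent.
		siblings = append(siblings, g^1)
	}
	return siblings
}
```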

**Which issues(s) does this PR fix?**
Partially https://github.com/OffchainLabs/prysm/issues/15598

**Other notes for review**
In this [write-up](https://hackmd.io/@fernantho/BJbZ1xmmbg), I describe
the process of arriving at this solution.

Future improvements:
- Defensive check that the gindex is not too big, depicted [here](
https://github.com/OffchainLabs/prysm/pull/16177#discussion_r2671684100).
- Integrate optimizedContainerRoots into the recursive merkleize(...)
flow when proof targets are not within the subtree (skip full traversal
for container lists).
- Add multiproofs.
- Connect `proofCollector` to SSZ-QL endpoints (direct integration of
`proofCollector` for BeaconBlock endpoint and "hybrid" approach for
BeaconState endpoint).

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
2026-01-28 13:01:22 +00:00
162 changed files with 9063 additions and 2607 deletions

View File

@@ -2,24 +2,38 @@ version: v1.7.0-alpha.1
style: full style: full
specrefs: specrefs:
search_root: .. search_root: .
auto_standardize_names: true
auto_add_missing_entries: true
require_exceptions_have_fork: true
files: files:
- configs.yml - specrefs/configs.yml
- constants.yml - specrefs/constants.yml
- containers.yml - specrefs/containers.yml
- dataclasses.yml - specrefs/dataclasses.yml
- functions.yml - specrefs/functions.yml
- presets.yml - specrefs/presets.yml
exceptions: exceptions:
presets: presets:
# Not implemented: gloas (future fork) # gloas
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas - BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
- MAX_PAYLOAD_ATTESTATIONS#gloas - MAX_PAYLOAD_ATTESTATIONS#gloas
- PTC_SIZE#gloas - PTC_SIZE#gloas
constants: constants:
# Constants in the KZG library # phase0
- BASIS_POINTS#phase0
- ENDIANNESS#phase0
- MAX_CONCURRENT_REQUESTS#phase0
- UINT64_MAX#phase0
- UINT64_MAX_SQRT#phase0
# altair
- PARTICIPATION_FLAG_WEIGHTS#altair
# bellatrix
- SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY#bellatrix
# deneb
- BLS_MODULUS#deneb - BLS_MODULUS#deneb
- BYTES_PER_COMMITMENT#deneb - BYTES_PER_COMMITMENT#deneb
- BYTES_PER_FIELD_ELEMENT#deneb - BYTES_PER_FIELD_ELEMENT#deneb
@@ -33,18 +47,9 @@ exceptions:
- PRIMITIVE_ROOT_OF_UNITY#deneb - PRIMITIVE_ROOT_OF_UNITY#deneb
- RANDOM_CHALLENGE_KZG_BATCH_DOMAIN#deneb - RANDOM_CHALLENGE_KZG_BATCH_DOMAIN#deneb
- RANDOM_CHALLENGE_KZG_CELL_BATCH_DOMAIN#fulu - RANDOM_CHALLENGE_KZG_CELL_BATCH_DOMAIN#fulu
# fulu
# Not implemented
- BASIS_POINTS#phase0
- ENDIANNESS#phase0
- MAX_CONCURRENT_REQUESTS#phase0
- PARTICIPATION_FLAG_WEIGHTS#altair
- SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY#bellatrix
- UINT256_MAX#fulu - UINT256_MAX#fulu
- UINT64_MAX#phase0 # gloas
- UINT64_MAX_SQRT#phase0
# Not implemented: gloas (future fork)
- BUILDER_PAYMENT_THRESHOLD_DENOMINATOR#gloas - BUILDER_PAYMENT_THRESHOLD_DENOMINATOR#gloas
- BUILDER_PAYMENT_THRESHOLD_NUMERATOR#gloas - BUILDER_PAYMENT_THRESHOLD_NUMERATOR#gloas
- BUILDER_WITHDRAWAL_PREFIX#gloas - BUILDER_WITHDRAWAL_PREFIX#gloas
@@ -61,61 +66,62 @@ exceptions:
- PTC_TIMELINESS_INDEX#gloas - PTC_TIMELINESS_INDEX#gloas
configs: configs:
# Not implemented: gloas (future fork) # gloas
- AGGREGATE_DUE_BPS_GLOAS#gloas - AGGREGATE_DUE_BPS_GLOAS#gloas
- ATTESTATION_DUE_BPS_GLOAS#gloas - ATTESTATION_DUE_BPS_GLOAS#gloas
- CONTRIBUTION_DUE_BPS_GLOAS#gloas - CONTRIBUTION_DUE_BPS_GLOAS#gloas
- GLOAS_FORK_EPOCH#gloas - GLOAS_FORK_EPOCH#gloas
- GLOAS_FORK_VERSION#gloas - GLOAS_FORK_VERSION#gloas
- MAX_REQUEST_PAYLOADS#gloas - MAX_REQUEST_PAYLOADS#gloas
- MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
- PAYLOAD_ATTESTATION_DUE_BPS#gloas - PAYLOAD_ATTESTATION_DUE_BPS#gloas
- SYNC_MESSAGE_DUE_BPS_GLOAS#gloas - SYNC_MESSAGE_DUE_BPS_GLOAS#gloas
- MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
ssz_objects: ssz_objects:
# Not implemented # phase0
- Eth1Block#phase0 - Eth1Block#phase0
- MatrixEntry#fulu # capella
# Not implemented: capella
- LightClientBootstrap#capella - LightClientBootstrap#capella
- LightClientFinalityUpdate#capella - LightClientFinalityUpdate#capella
- LightClientOptimisticUpdate#capella - LightClientOptimisticUpdate#capella
- LightClientUpdate#capella - LightClientUpdate#capella
# fulu
# Not implemented: gloas (future fork) - MatrixEntry#fulu
# gloas
- BeaconBlockBody#gloas - BeaconBlockBody#gloas
- BeaconState#gloas - BeaconState#gloas
- Builder#gloas
- BuilderPendingPayment#gloas - BuilderPendingPayment#gloas
- BuilderPendingWithdrawal#gloas - BuilderPendingWithdrawal#gloas
- DataColumnSidecar#gloas - DataColumnSidecar#gloas
- ExecutionPayloadEnvelope#gloas
- ExecutionPayloadBid#gloas - ExecutionPayloadBid#gloas
- ExecutionPayloadEnvelope#gloas
- ForkChoiceNode#gloas - ForkChoiceNode#gloas
- IndexedPayloadAttestation#gloas - IndexedPayloadAttestation#gloas
- PayloadAttestation#gloas - PayloadAttestation#gloas
- PayloadAttestationData#gloas - PayloadAttestationData#gloas
- PayloadAttestationMessage#gloas - PayloadAttestationMessage#gloas
- SignedExecutionPayloadEnvelope#gloas
- SignedExecutionPayloadBid#gloas
- Builder#gloas
- ProposerPreferences#gloas - ProposerPreferences#gloas
- SignedExecutionPayloadBid#gloas
- SignedExecutionPayloadEnvelope#gloas
- SignedProposerPreferences#gloas - SignedProposerPreferences#gloas
dataclasses: dataclasses:
# Not implemented # phase0
- BlobParameters#fulu
- ExpectedWithdrawals#capella
- ExpectedWithdrawals#electra
- LatestMessage#phase0 - LatestMessage#phase0
- LightClientStore#altair
- OptimisticStore#bellatrix
- Store#phase0 - Store#phase0
# altair
# Not implemented: capella - LightClientStore#altair
# bellatrix
- OptimisticStore#bellatrix
# capella
- ExpectedWithdrawals#capella
- LightClientStore#capella - LightClientStore#capella
# electra
# Not implemented: gloas (future fork) - ExpectedWithdrawals#electra
# fulu
- BlobParameters#fulu
# gloas
- ExpectedWithdrawals#gloas - ExpectedWithdrawals#gloas
- LatestMessage#gloas - LatestMessage#gloas
- Store#gloas - Store#gloas
@@ -175,7 +181,12 @@ exceptions:
- verify_cell_kzg_proof_batch#fulu - verify_cell_kzg_proof_batch#fulu
- verify_cell_kzg_proof_batch_impl#fulu - verify_cell_kzg_proof_batch_impl#fulu
# Not implemented: phase0 # phase0
- update_proposer_boost_root#phase0
- is_proposer_equivocation#phase0
- record_block_timeliness#phase0
- compute_proposer_score#phase0
- get_attestation_score#phase0
- calculate_committee_fraction#phase0 - calculate_committee_fraction#phase0
- compute_fork_version#phase0 - compute_fork_version#phase0
- compute_pulled_up_tip#phase0 - compute_pulled_up_tip#phase0
@@ -221,8 +232,7 @@ exceptions:
- validate_on_attestation#phase0 - validate_on_attestation#phase0
- validate_target_epoch_against_current_time#phase0 - validate_target_epoch_against_current_time#phase0
- xor#phase0 - xor#phase0
# altair
# Not implemented: altair
- compute_merkle_proof#altair - compute_merkle_proof#altair
- compute_sync_committee_period_at_slot#altair - compute_sync_committee_period_at_slot#altair
- get_contribution_and_proof#altair - get_contribution_and_proof#altair
@@ -244,27 +254,29 @@ exceptions:
- process_sync_committee_contributions#altair - process_sync_committee_contributions#altair
- set_or_append_list#altair - set_or_append_list#altair
- validate_light_client_update#altair - validate_light_client_update#altair
# bellatrix
# Not implemented: bellatrix
- get_execution_payload#bellatrix - get_execution_payload#bellatrix
- is_merge_transition_block#bellatrix - is_merge_transition_block#bellatrix
- is_optimistic_candidate_block#bellatrix - is_optimistic_candidate_block#bellatrix
- latest_verified_ancestor#bellatrix - latest_verified_ancestor#bellatrix
- prepare_execution_payload#bellatrix - prepare_execution_payload#bellatrix
# capella
# Not implemented: capella - apply_withdrawals#capella
- get_balance_after_withdrawals#capella
- get_lc_execution_root#capella - get_lc_execution_root#capella
- get_validators_sweep_withdrawals#capella
- is_valid_light_client_header#capella - is_valid_light_client_header#capella
- prepare_execution_payload#capella - prepare_execution_payload#capella
- process_epoch#capella - process_epoch#capella
- update_next_withdrawal_index#capella
- update_next_withdrawal_validator_index#capella
- upgrade_lc_bootstrap_to_capella#capella - upgrade_lc_bootstrap_to_capella#capella
- upgrade_lc_finality_update_to_capella#capella - upgrade_lc_finality_update_to_capella#capella
- upgrade_lc_header_to_capella#capella - upgrade_lc_header_to_capella#capella
- upgrade_lc_optimistic_update_to_capella#capella - upgrade_lc_optimistic_update_to_capella#capella
- upgrade_lc_store_to_capella#capella - upgrade_lc_store_to_capella#capella
- upgrade_lc_update_to_capella#capella - upgrade_lc_update_to_capella#capella
# deneb
# Not implemented: deneb
- get_lc_execution_root#deneb - get_lc_execution_root#deneb
- is_valid_light_client_header#deneb - is_valid_light_client_header#deneb
- prepare_execution_payload#deneb - prepare_execution_payload#deneb
@@ -274,33 +286,34 @@ exceptions:
- upgrade_lc_optimistic_update_to_deneb#deneb - upgrade_lc_optimistic_update_to_deneb#deneb
- upgrade_lc_store_to_deneb#deneb - upgrade_lc_store_to_deneb#deneb
- upgrade_lc_update_to_deneb#deneb - upgrade_lc_update_to_deneb#deneb
# electra
# Not implemented: electra
- compute_weak_subjectivity_period#electra - compute_weak_subjectivity_period#electra
- current_sync_committee_gindex_at_slot#electra - current_sync_committee_gindex_at_slot#electra
- finalized_root_gindex_at_slot#electra - finalized_root_gindex_at_slot#electra
- get_eth1_vote#electra - get_eth1_vote#electra
- get_lc_execution_root#electra - get_lc_execution_root#electra
- get_pending_partial_withdrawals#electra
- get_validators_sweep_withdrawals#electra
- is_compounding_withdrawal_credential#electra - is_compounding_withdrawal_credential#electra
- is_eligible_for_partial_withdrawals#electra
- is_within_weak_subjectivity_period#electra - is_within_weak_subjectivity_period#electra
- next_sync_committee_gindex_at_slot#electra - next_sync_committee_gindex_at_slot#electra
- normalize_merkle_branch#electra - normalize_merkle_branch#electra
- prepare_execution_payload#electra - prepare_execution_payload#electra
- update_pending_partial_withdrawals#electra
- upgrade_lc_bootstrap_to_electra#electra - upgrade_lc_bootstrap_to_electra#electra
- upgrade_lc_finality_update_to_electra#electra - upgrade_lc_finality_update_to_electra#electra
- upgrade_lc_header_to_electra#electra - upgrade_lc_header_to_electra#electra
- upgrade_lc_optimistic_update_to_electra#electra - upgrade_lc_optimistic_update_to_electra#electra
- upgrade_lc_store_to_electra#electra - upgrade_lc_store_to_electra#electra
- upgrade_lc_update_to_electra#electra - upgrade_lc_update_to_electra#electra
# fulu
# Not implemented: fulu
- compute_matrix#fulu - compute_matrix#fulu
- get_blob_parameters#fulu - get_blob_parameters#fulu
- get_data_column_sidecars_from_block#fulu - get_data_column_sidecars_from_block#fulu
- get_data_column_sidecars_from_column_sidecar#fulu - get_data_column_sidecars_from_column_sidecar#fulu
- recover_matrix#fulu - recover_matrix#fulu
# gloas
# Not implemented: gloas (future fork)
- compute_balance_weighted_acceptance#gloas - compute_balance_weighted_acceptance#gloas
- compute_balance_weighted_selection#gloas - compute_balance_weighted_selection#gloas
- compute_fork_version#gloas - compute_fork_version#gloas
@@ -368,49 +381,36 @@ exceptions:
- verify_execution_payload_bid_signature#gloas - verify_execution_payload_bid_signature#gloas
- add_builder_to_registry#gloas - add_builder_to_registry#gloas
- apply_deposit_for_builder#gloas - apply_deposit_for_builder#gloas
- apply_withdrawals#capella
- apply_withdrawals#gloas - apply_withdrawals#gloas
- can_builder_cover_bid#gloas - can_builder_cover_bid#gloas
- compute_proposer_score#phase0
- convert_builder_index_to_validator_index#gloas - convert_builder_index_to_validator_index#gloas
- convert_validator_index_to_builder_index#gloas - convert_validator_index_to_builder_index#gloas
- get_attestation_score#gloas - get_attestation_score#gloas
- get_attestation_score#phase0
- get_balance_after_withdrawals#capella
- get_builder_from_deposit#gloas - get_builder_from_deposit#gloas
- get_builder_withdrawals#gloas - get_builder_withdrawals#gloas
- get_builders_sweep_withdrawals#gloas - get_builders_sweep_withdrawals#gloas
- get_index_for_new_builder#gloas - get_index_for_new_builder#gloas
- get_pending_balance_to_withdraw_for_builder#gloas - get_pending_balance_to_withdraw_for_builder#gloas
- get_pending_partial_withdrawals#electra
- get_proposer_preferences_signature#gloas - get_proposer_preferences_signature#gloas
- get_upcoming_proposal_slots#gloas - get_upcoming_proposal_slots#gloas
- get_validators_sweep_withdrawals#capella
- get_validators_sweep_withdrawals#electra
- initiate_builder_exit#gloas - initiate_builder_exit#gloas
- is_active_builder#gloas - is_active_builder#gloas
- is_builder_index#gloas - is_builder_index#gloas
- is_eligible_for_partial_withdrawals#electra
- is_head_late#gloas - is_head_late#gloas
- is_head_weak#gloas - is_head_weak#gloas
- is_parent_strong#gloas - is_parent_strong#gloas
- is_proposer_equivocation#phase0
- is_valid_proposal_slot#gloas - is_valid_proposal_slot#gloas
- process_deposit_request#gloas - process_deposit_request#gloas
- process_voluntary_exit#gloas - process_voluntary_exit#gloas
- record_block_timeliness#gloas - record_block_timeliness#gloas
- record_block_timeliness#phase0
- should_apply_proposer_boost#gloas - should_apply_proposer_boost#gloas
- update_builder_pending_withdrawals#gloas - update_builder_pending_withdrawals#gloas
- update_next_withdrawal_builder_index#gloas - update_next_withdrawal_builder_index#gloas
- update_next_withdrawal_index#capella
- update_next_withdrawal_validator_index#capella
- update_payload_expected_withdrawals#gloas - update_payload_expected_withdrawals#gloas
- update_pending_partial_withdrawals#electra
- update_proposer_boost_root#gloas - update_proposer_boost_root#gloas
- update_proposer_boost_root#phase0
presets: presets:
# gloas
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas - BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
- BUILDER_REGISTRY_LIMIT#gloas - BUILDER_REGISTRY_LIMIT#gloas
- MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas - MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas

View File

@@ -12,11 +12,11 @@ jobs:
- name: Check version consistency - name: Check version consistency
run: | run: |
WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/') WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/')
ETHSPECIFY_VERSION=$(grep '^version:' specrefs/.ethspecify.yml | sed 's/version: //') ETHSPECIFY_VERSION=$(grep '^version:' .ethspecify.yml | sed 's/version: //')
if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then
echo "Version mismatch between WORKSPACE and ethspecify" echo "Version mismatch between WORKSPACE and ethspecify"
echo " WORKSPACE: $WORKSPACE_VERSION" echo " WORKSPACE: $WORKSPACE_VERSION"
echo " specrefs/.ethspecify.yml: $ETHSPECIFY_VERSION" echo " .ethspecify.yml: $ETHSPECIFY_VERSION"
exit 1 exit 1
else else
echo "Versions match: $WORKSPACE_VERSION" echo "Versions match: $WORKSPACE_VERSION"
@@ -26,7 +26,7 @@ jobs:
run: python3 -mpip install ethspecify run: python3 -mpip install ethspecify
- name: Update spec references - name: Update spec references
run: ethspecify process --path=specrefs run: ethspecify
- name: Check for differences - name: Check for differences
run: | run: |
@@ -40,4 +40,4 @@ jobs:
fi fi
- name: Check spec references - name: Check spec references
run: ethspecify check --path=specrefs run: ethspecify check

View File

@@ -2,7 +2,7 @@ name: Go
on: on:
push: push:
branches: [ master ] branches: [ master, develop ]
pull_request: pull_request:
branches: [ '*' ] branches: [ '*' ]
merge_group: merge_group:

View File

@@ -33,9 +33,8 @@ formatters:
generated: lax generated: lax
paths: paths:
- validator/web/site_data.go - validator/web/site_data.go
- .*_test.go
- proto - proto
- tools/analyzers - tools/analyzers
- third_party$ - third_party$
- builtin$ - builtin$
- examples$ - examples$

api/fallback/BUILD.bazel Normal file

@@ -0,0 +1,19 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"fallback.go",
"log.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/fallback",
visibility = ["//visibility:public"],
deps = ["@com_github_sirupsen_logrus//:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = ["fallback_test.go"],
embed = [":go_default_library"],
deps = ["//testing/assert:go_default_library"],
)

api/fallback/fallback.go Normal file

@@ -0,0 +1,66 @@
package fallback
import (
"context"
"github.com/sirupsen/logrus"
)
// HostProvider is the subset of connection-provider methods that EnsureReady
// needs. Both grpc.GrpcConnectionProvider and rest.RestConnectionProvider
// satisfy this interface.
type HostProvider interface {
Hosts() []string
CurrentHost() string
SwitchHost(index int) error
}
// ReadyChecker can report whether the current endpoint is ready.
// iface.NodeClient satisfies this implicitly.
type ReadyChecker interface {
IsReady(ctx context.Context) bool
}
// EnsureReady iterates through the configured hosts and returns true as soon as
// one responds as ready. It starts from the provider's current host and wraps
// around using modular arithmetic, performing failover when a host is not ready.
func EnsureReady(ctx context.Context, provider HostProvider, checker ReadyChecker) bool {
hosts := provider.Hosts()
numHosts := len(hosts)
startingHost := provider.CurrentHost()
var attemptedHosts []string
// Find current index
currentIdx := 0
for i, h := range hosts {
if h == startingHost {
currentIdx = i
break
}
}
for i := range numHosts {
if checker.IsReady(ctx) {
if len(attemptedHosts) > 0 {
log.WithFields(logrus.Fields{
"previous": startingHost,
"current": provider.CurrentHost(),
"tried": attemptedHosts,
}).Info("Switched to responsive beacon node")
}
return true
}
attemptedHosts = append(attemptedHosts, provider.CurrentHost())
// Try next host if not the last iteration
if i < numHosts-1 {
nextIdx := (currentIdx + i + 1) % numHosts
if err := provider.SwitchHost(nextIdx); err != nil {
log.WithError(err).Error("Failed to switch host")
}
}
}
log.WithField("tried", attemptedHosts).Warn("No responsive beacon node found")
return false
}

View File

@@ -0,0 +1,94 @@
package fallback
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
)
// mockHostProvider is a minimal HostProvider for unit tests.
type mockHostProvider struct {
hosts []string
hostIndex int
}
func (m *mockHostProvider) Hosts() []string { return m.hosts }
func (m *mockHostProvider) CurrentHost() string {
return m.hosts[m.hostIndex%len(m.hosts)]
}
func (m *mockHostProvider) SwitchHost(index int) error { m.hostIndex = index; return nil }
// mockReadyChecker records per-call IsReady results in sequence.
type mockReadyChecker struct {
results []bool
idx int
}
func (m *mockReadyChecker) IsReady(_ context.Context) bool {
if m.idx >= len(m.results) {
return false
}
r := m.results[m.idx]
m.idx++
return r
}
func TestEnsureReady_SingleHostReady(t *testing.T) {
provider := &mockHostProvider{hosts: []string{"http://host1:3500"}, hostIndex: 0}
checker := &mockReadyChecker{results: []bool{true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 0, provider.hostIndex)
}
func TestEnsureReady_SingleHostNotReady(t *testing.T) {
provider := &mockHostProvider{hosts: []string{"http://host1:3500"}, hostIndex: 0}
checker := &mockReadyChecker{results: []bool{false}}
assert.Equal(t, false, EnsureReady(t.Context(), provider, checker))
}
func TestEnsureReady_SingleHostError(t *testing.T) {
provider := &mockHostProvider{hosts: []string{"http://host1:3500"}, hostIndex: 0}
checker := &mockReadyChecker{results: []bool{false}}
assert.Equal(t, false, EnsureReady(t.Context(), provider, checker))
}
func TestEnsureReady_MultipleHostsFirstReady(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host1:3500", "http://host2:3500"},
hostIndex: 0,
}
checker := &mockReadyChecker{results: []bool{true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 0, provider.hostIndex)
}
func TestEnsureReady_MultipleHostsFailoverToSecond(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host1:3500", "http://host2:3500"},
hostIndex: 0,
}
checker := &mockReadyChecker{results: []bool{false, true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 1, provider.hostIndex)
}
func TestEnsureReady_MultipleHostsNoneReady(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host1:3500", "http://host2:3500", "http://host3:3500"},
hostIndex: 0,
}
checker := &mockReadyChecker{results: []bool{false, false, false}}
assert.Equal(t, false, EnsureReady(t.Context(), provider, checker))
}
func TestEnsureReady_WrapAroundFromNonZeroIndex(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host0:3500", "http://host1:3500", "http://host2:3500"},
hostIndex: 1,
}
// host1 (start) fails, host2 fails, host0 succeeds
checker := &mockReadyChecker{results: []bool{false, false, true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 0, provider.hostIndex)
}

api/fallback/log.go Normal file

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package fallback
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/fallback")

View File

@@ -3,13 +3,16 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library( go_library(
name = "go_default_library", name = "go_default_library",
srcs = [ srcs = [
"grpc_connection_provider.go",
"grpcutils.go", "grpcutils.go",
"log.go", "log.go",
"mock_grpc_provider.go",
"parameters.go", "parameters.go",
], ],
importpath = "github.com/OffchainLabs/prysm/v7/api/grpc", importpath = "github.com/OffchainLabs/prysm/v7/api/grpc",
visibility = ["//visibility:public"], visibility = ["//visibility:public"],
deps = [ deps = [
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library", "@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library", "@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library", "@org_golang_google_grpc//metadata:go_default_library",
@@ -18,12 +21,17 @@ go_library(
go_test( go_test(
name = "go_default_test", name = "go_default_test",
srcs = ["grpcutils_test.go"], srcs = [
"grpc_connection_provider_test.go",
"grpcutils_test.go",
],
embed = [":go_default_library"], embed = [":go_default_library"],
deps = [ deps = [
"//testing/assert:go_default_library", "//testing/assert:go_default_library",
"//testing/require:go_default_library", "//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library", "@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//credentials/insecure:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library", "@org_golang_google_grpc//metadata:go_default_library",
], ],
) )

View File

@@ -0,0 +1,186 @@
package grpc
import (
"context"
"strings"
"sync"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
)
// GrpcConnectionProvider manages gRPC connections for failover support.
// It allows switching between different beacon node endpoints when the current one becomes unavailable.
// Only one connection is maintained at a time - when switching hosts, the old connection is closed.
type GrpcConnectionProvider interface {
// CurrentConn returns the currently active gRPC connection.
// The connection is created lazily on first call.
// Returns nil if the provider has been closed.
CurrentConn() *grpc.ClientConn
// CurrentHost returns the address of the currently active endpoint.
CurrentHost() string
// Hosts returns all configured endpoint addresses.
Hosts() []string
// SwitchHost switches to the endpoint at the given index.
// The new connection is created lazily on next CurrentConn() call.
SwitchHost(index int) error
// ConnectionCounter returns a monotonically increasing counter that increments
// each time SwitchHost changes the active endpoint. This allows consumers to
// detect connection changes even when the host string returns to a previous value
// (e.g., host0 → host1 → host0).
ConnectionCounter() uint64
// Close closes the current connection.
Close()
}
type grpcConnectionProvider struct {
// Immutable after construction - no lock needed for reads
endpoints []string
ctx context.Context
dialOpts []grpc.DialOption
// Current connection state (protected by mutex)
currentIndex uint64
conn *grpc.ClientConn
connCounter uint64
mu sync.Mutex
closed bool
}
// NewGrpcConnectionProvider creates a new connection provider that manages gRPC connections.
// The endpoint parameter can be a comma-separated list of addresses (e.g., "host1:4000,host2:4000").
// Only one connection is maintained at a time, created lazily on first use.
func NewGrpcConnectionProvider(
ctx context.Context,
endpoint string,
dialOpts []grpc.DialOption,
) (GrpcConnectionProvider, error) {
endpoints := parseEndpoints(endpoint)
if len(endpoints) == 0 {
return nil, errors.New("no gRPC endpoints provided")
}
log.WithFields(logrus.Fields{
"endpoints": endpoints,
"count": len(endpoints),
}).Info("Initialized gRPC connection provider")
return &grpcConnectionProvider{
endpoints: endpoints,
ctx: ctx,
dialOpts: dialOpts,
}, nil
}
// parseEndpoints splits a comma-separated endpoint string into individual endpoints.
func parseEndpoints(endpoint string) []string {
if endpoint == "" {
return nil
}
endpoints := make([]string, 0, 1)
for p := range strings.SplitSeq(endpoint, ",") {
if p = strings.TrimSpace(p); p != "" {
endpoints = append(endpoints, p)
}
}
return endpoints
}
func (p *grpcConnectionProvider) CurrentConn() *grpc.ClientConn {
p.mu.Lock()
defer p.mu.Unlock()
if p.closed {
return nil
}
// Return existing connection if available
if p.conn != nil {
return p.conn
}
// Create connection lazily
ep := p.endpoints[p.currentIndex]
conn, err := grpc.DialContext(p.ctx, ep, p.dialOpts...)
if err != nil {
log.WithError(err).WithField("endpoint", ep).Error("Failed to create gRPC connection")
return nil
}
p.conn = conn
log.WithField("endpoint", ep).Debug("Created gRPC connection")
return conn
}
func (p *grpcConnectionProvider) CurrentHost() string {
p.mu.Lock()
defer p.mu.Unlock()
return p.endpoints[p.currentIndex]
}
func (p *grpcConnectionProvider) Hosts() []string {
// Return a copy to maintain immutability
hosts := make([]string, len(p.endpoints))
copy(hosts, p.endpoints)
return hosts
}
func (p *grpcConnectionProvider) SwitchHost(index int) error {
if index < 0 || index >= len(p.endpoints) {
return errors.Errorf("invalid host index %d, must be between 0 and %d", index, len(p.endpoints)-1)
}
p.mu.Lock()
defer p.mu.Unlock()
if uint64(index) == p.currentIndex {
return nil // Already on this host
}
oldHost := p.endpoints[p.currentIndex]
oldConn := p.conn
p.conn = nil // Clear immediately - new connection created lazily
p.currentIndex = uint64(index)
p.connCounter++
// Close old connection asynchronously to avoid blocking the caller
if oldConn != nil {
go func() {
if err := oldConn.Close(); err != nil {
log.WithError(err).WithField("endpoint", oldHost).Debug("Failed to close previous connection")
}
}()
}
log.WithFields(logrus.Fields{
"previousHost": oldHost,
"newHost": p.endpoints[index],
}).Debug("Switched gRPC endpoint")
return nil
}
func (p *grpcConnectionProvider) ConnectionCounter() uint64 {
p.mu.Lock()
defer p.mu.Unlock()
return p.connCounter
}
func (p *grpcConnectionProvider) Close() {
p.mu.Lock()
defer p.mu.Unlock()
if p.closed {
return
}
p.closed = true
if p.conn != nil {
if err := p.conn.Close(); err != nil {
log.WithError(err).WithField("endpoint", p.endpoints[p.currentIndex]).Debug("Failed to close gRPC connection")
}
p.conn = nil
}
}

View File

@@ -0,0 +1,207 @@
package grpc
import (
"context"
"net"
"reflect"
"strings"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
func TestParseEndpoints(t *testing.T) {
tests := []struct {
name string
input string
expected []string
}{
{"single endpoint", "localhost:4000", []string{"localhost:4000"}},
{"multiple endpoints", "host1:4000,host2:4000,host3:4000", []string{"host1:4000", "host2:4000", "host3:4000"}},
{"endpoints with spaces", "host1:4000, host2:4000 , host3:4000", []string{"host1:4000", "host2:4000", "host3:4000"}},
{"empty string", "", nil},
{"only commas", ",,,", []string{}},
{"trailing comma", "host1:4000,host2:4000,", []string{"host1:4000", "host2:4000"}},
{"leading comma", ",host1:4000,host2:4000", []string{"host1:4000", "host2:4000"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := parseEndpoints(tt.input)
if !reflect.DeepEqual(tt.expected, got) {
t.Errorf("parseEndpoints(%q) = %v, want %v", tt.input, got, tt.expected)
}
})
}
}
func TestNewGrpcConnectionProvider_Errors(t *testing.T) {
t.Run("no endpoints", func(t *testing.T) {
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
_, err := NewGrpcConnectionProvider(context.Background(), "", dialOpts)
require.ErrorContains(t, "no gRPC endpoints provided", err)
})
}
func TestGrpcConnectionProvider_LazyConnection(t *testing.T) {
// Start only one server but configure provider with two endpoints
lis, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
server := grpc.NewServer()
go func() { _ = server.Serve(lis) }()
defer server.Stop()
validAddr := lis.Addr().String()
invalidAddr := "127.0.0.1:1" // Port 1 is unlikely to be listening
// Provider should succeed even though second endpoint is invalid (lazy connections)
endpoint := validAddr + "," + invalidAddr
ctx := context.Background()
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
provider, err := NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
require.NoError(t, err, "Provider creation should succeed with lazy connections")
defer func() { provider.Close() }()
// First endpoint should work
conn := provider.CurrentConn()
assert.NotNil(t, conn, "First connection should be created lazily")
}
func TestGrpcConnectionProvider_SingleConnectionModel(t *testing.T) {
// Create provider with 3 endpoints
var addrs []string
var servers []*grpc.Server
for range 3 {
lis, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
server := grpc.NewServer()
go func() { _ = server.Serve(lis) }()
addrs = append(addrs, lis.Addr().String())
servers = append(servers, server)
}
defer func() {
for _, s := range servers {
s.Stop()
}
}()
endpoint := strings.Join(addrs, ",")
ctx := context.Background()
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
provider, err := NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
require.NoError(t, err)
defer func() { provider.Close() }()
// Access the internal state to verify single connection behavior
p := provider.(*grpcConnectionProvider)
// Initially no connection
p.mu.Lock()
assert.Equal(t, (*grpc.ClientConn)(nil), p.conn, "Connection should be nil before access")
p.mu.Unlock()
// Access connection - should create one
conn0 := provider.CurrentConn()
assert.NotNil(t, conn0)
p.mu.Lock()
assert.NotNil(t, p.conn, "Connection should be created after CurrentConn()")
firstConn := p.conn
p.mu.Unlock()
// Call CurrentConn again - should return same connection
conn0Again := provider.CurrentConn()
assert.Equal(t, conn0, conn0Again, "Should return same connection")
// Switch to different host - old connection should be closed, new one created lazily
require.NoError(t, provider.SwitchHost(1))
p.mu.Lock()
assert.Equal(t, (*grpc.ClientConn)(nil), p.conn, "Connection should be nil after SwitchHost (lazy)")
p.mu.Unlock()
// Get new connection
conn1 := provider.CurrentConn()
assert.NotNil(t, conn1)
assert.NotEqual(t, firstConn, conn1, "Should be a different connection after switching hosts")
}
// testProvider creates a provider with n test servers and returns cleanup function.
func testProvider(t *testing.T, n int) (GrpcConnectionProvider, []string, func()) {
var addrs []string
var cleanups []func()
for range n {
lis, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
server := grpc.NewServer()
go func() { _ = server.Serve(lis) }()
addrs = append(addrs, lis.Addr().String())
cleanups = append(cleanups, server.Stop)
}
endpoint := strings.Join(addrs, ",")
ctx := context.Background()
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
provider, err := NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
require.NoError(t, err)
cleanup := func() {
provider.Close()
for _, c := range cleanups {
c()
}
}
return provider, addrs, cleanup
}
func TestGrpcConnectionProvider(t *testing.T) {
provider, addrs, cleanup := testProvider(t, 3)
defer cleanup()
t.Run("initial state", func(t *testing.T) {
assert.Equal(t, 3, len(provider.Hosts()))
assert.Equal(t, addrs[0], provider.CurrentHost())
assert.NotNil(t, provider.CurrentConn())
})
t.Run("SwitchHost", func(t *testing.T) {
require.NoError(t, provider.SwitchHost(1))
assert.Equal(t, addrs[1], provider.CurrentHost())
assert.NotNil(t, provider.CurrentConn()) // New connection created lazily
require.NoError(t, provider.SwitchHost(0))
assert.Equal(t, addrs[0], provider.CurrentHost())
require.ErrorContains(t, "invalid host index", provider.SwitchHost(-1))
require.ErrorContains(t, "invalid host index", provider.SwitchHost(3))
})
t.Run("SwitchHost circular", func(t *testing.T) {
// Test round-robin style switching using SwitchHost with manual index
indices := []int{1, 2, 0, 1} // Simulate circular switching
for i, idx := range indices {
require.NoError(t, provider.SwitchHost(idx))
assert.Equal(t, addrs[idx], provider.CurrentHost(), "iteration %d", i)
}
})
t.Run("Hosts returns copy", func(t *testing.T) {
hosts := provider.Hosts()
original := hosts[0]
hosts[0] = "modified"
assert.Equal(t, original, provider.Hosts()[0])
})
}
func TestGrpcConnectionProvider_Close(t *testing.T) {
provider, _, cleanup := testProvider(t, 1)
defer cleanup()
assert.NotNil(t, provider.CurrentConn())
provider.Close()
assert.Equal(t, (*grpc.ClientConn)(nil), provider.CurrentConn())
provider.Close() // Double close is safe
}

View File

@@ -0,0 +1,27 @@
package grpc
import "google.golang.org/grpc"
// MockGrpcProvider implements GrpcConnectionProvider for testing.
type MockGrpcProvider struct {
MockConn *grpc.ClientConn
MockHosts []string
CurrentIndex int
ConnCounter uint64
}
func (m *MockGrpcProvider) CurrentConn() *grpc.ClientConn { return m.MockConn }
func (m *MockGrpcProvider) CurrentHost() string {
if len(m.MockHosts) > 0 {
return m.MockHosts[m.CurrentIndex]
}
return ""
}
func (m *MockGrpcProvider) Hosts() []string { return m.MockHosts }
func (m *MockGrpcProvider) SwitchHost(idx int) error {
m.CurrentIndex = idx
m.ConnCounter++
return nil
}
func (m *MockGrpcProvider) ConnectionCounter() uint64 { return m.ConnCounter }
func (m *MockGrpcProvider) Close() {}

api/rest/BUILD.bazel Normal file

@@ -0,0 +1,34 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"mock_rest_provider.go",
"rest_connection_provider.go",
"rest_handler.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/rest",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/apiutil:go_default_library",
"//api/client:go_default_library",
"//config/params:go_default_library",
"//network/httputil:go_default_library",
"//runtime/version:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["rest_connection_provider_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

api/rest/log.go Normal file

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package rest
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/rest")

View File

@@ -0,0 +1,46 @@
package rest
import (
"bytes"
"context"
"net/http"
)
// MockRestProvider implements RestConnectionProvider for testing.
type MockRestProvider struct {
MockClient *http.Client
MockHandler Handler
MockHosts []string
HostIndex int
}
func (m *MockRestProvider) HttpClient() *http.Client { return m.MockClient }
func (m *MockRestProvider) Handler() Handler { return m.MockHandler }
func (m *MockRestProvider) CurrentHost() string {
if len(m.MockHosts) > 0 {
return m.MockHosts[m.HostIndex%len(m.MockHosts)]
}
return ""
}
func (m *MockRestProvider) Hosts() []string { return m.MockHosts }
func (m *MockRestProvider) SwitchHost(index int) error { m.HostIndex = index; return nil }
// MockHandler implements Handler for testing.
type MockHandler struct {
MockHost string
}
func (m *MockHandler) Get(_ context.Context, _ string, _ any) error { return nil }
func (m *MockHandler) GetStatusCode(_ context.Context, _ string) (int, error) {
return http.StatusOK, nil
}
func (m *MockHandler) GetSSZ(_ context.Context, _ string) ([]byte, http.Header, error) {
return nil, nil, nil
}
func (m *MockHandler) Post(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer, _ any) error {
return nil
}
func (m *MockHandler) PostSSZ(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer) ([]byte, http.Header, error) {
return nil, nil, nil
}
func (m *MockHandler) Host() string { return m.MockHost }
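A brief sketch of how a test elsewhere in the package might stub the provider with these mocks (hypothetical test in the same package; the mocks simply return whatever they are configured with):
func TestConsumerUsesMockProvider(t *testing.T) {
    var p RestConnectionProvider = &MockRestProvider{
        MockClient:  &http.Client{},
        MockHandler: &MockHandler{MockHost: "http://host1:3500"},
        MockHosts:   []string{"http://host1:3500", "http://host2:3500"},
    }
    if err := p.SwitchHost(1); err != nil {
        t.Fatal(err)
    }
    if got, want := p.CurrentHost(), "http://host2:3500"; got != want {
        t.Fatalf("got %q, want %q", got, want)
    }
}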

View File

@@ -0,0 +1,158 @@
package rest
import (
"net/http"
"strings"
"sync/atomic"
"time"
"github.com/OffchainLabs/prysm/v7/api/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
// RestConnectionProvider manages HTTP client configuration for REST API with failover support.
// It allows switching between different beacon node REST endpoints when the current one becomes unavailable.
type RestConnectionProvider interface {
// HttpClient returns the configured HTTP client with headers, timeout, and optional tracing.
HttpClient() *http.Client
// Handler returns the REST handler for making API requests.
Handler() Handler
// CurrentHost returns the current REST API endpoint URL.
CurrentHost() string
// Hosts returns all configured REST API endpoint URLs.
Hosts() []string
// SwitchHost switches to the endpoint at the given index.
SwitchHost(index int) error
}
// RestConnectionProviderOption is a functional option for configuring the REST connection provider.
type RestConnectionProviderOption func(*restConnectionProvider)
// WithHttpTimeout sets the HTTP client timeout.
func WithHttpTimeout(timeout time.Duration) RestConnectionProviderOption {
return func(p *restConnectionProvider) {
p.timeout = timeout
}
}
// WithHttpHeaders sets custom HTTP headers to include in all requests.
func WithHttpHeaders(headers map[string][]string) RestConnectionProviderOption {
return func(p *restConnectionProvider) {
p.headers = headers
}
}
// WithTracing enables OpenTelemetry tracing for HTTP requests.
func WithTracing() RestConnectionProviderOption {
return func(p *restConnectionProvider) {
p.enableTracing = true
}
}
type restConnectionProvider struct {
endpoints []string
httpClient *http.Client
restHandler *handler
currentIndex atomic.Uint64
timeout time.Duration
headers map[string][]string
enableTracing bool
}
// NewRestConnectionProvider creates a new REST connection provider that manages HTTP client configuration.
// The endpoint parameter can be a comma-separated list of URLs (e.g., "http://host1:3500,http://host2:3500").
func NewRestConnectionProvider(endpoint string, opts ...RestConnectionProviderOption) (RestConnectionProvider, error) {
endpoints := parseEndpoints(endpoint)
if len(endpoints) == 0 {
return nil, errors.New("no REST API endpoints provided")
}
p := &restConnectionProvider{
endpoints: endpoints,
}
for _, opt := range opts {
opt(p)
}
// Build the HTTP transport chain
var transport http.RoundTripper = http.DefaultTransport
// Add custom headers if configured
if len(p.headers) > 0 {
transport = client.NewCustomHeadersTransport(transport, p.headers)
}
// Add tracing if enabled
if p.enableTracing {
transport = otelhttp.NewTransport(transport)
}
p.httpClient = &http.Client{
Timeout: p.timeout,
Transport: transport,
}
// Create the REST handler with the HTTP client and initial host
p.restHandler = newHandler(*p.httpClient, endpoints[0])
log.WithFields(logrus.Fields{
"endpoints": endpoints,
"count": len(endpoints),
}).Info("Initialized REST connection provider")
return p, nil
}
// parseEndpoints splits a comma-separated endpoint string into individual endpoints.
func parseEndpoints(endpoint string) []string {
if endpoint == "" {
return nil
}
endpoints := make([]string, 0, 1)
for p := range strings.SplitSeq(endpoint, ",") {
if p = strings.TrimSpace(p); p != "" {
endpoints = append(endpoints, p)
}
}
return endpoints
}
func (p *restConnectionProvider) HttpClient() *http.Client {
return p.httpClient
}
func (p *restConnectionProvider) Handler() Handler {
return p.restHandler
}
func (p *restConnectionProvider) CurrentHost() string {
return p.endpoints[p.currentIndex.Load()]
}
func (p *restConnectionProvider) Hosts() []string {
// Return a copy to maintain immutability
hosts := make([]string, len(p.endpoints))
copy(hosts, p.endpoints)
return hosts
}
func (p *restConnectionProvider) SwitchHost(index int) error {
if index < 0 || index >= len(p.endpoints) {
return errors.Errorf("invalid host index %d, must be between 0 and %d", index, len(p.endpoints)-1)
}
oldIdx := p.currentIndex.Load()
p.currentIndex.Store(uint64(index))
// Update the rest handler's host
p.restHandler.SwitchHost(p.endpoints[index])
log.WithFields(logrus.Fields{
"previousHost": p.endpoints[oldIdx],
"newHost": p.endpoints[index],
}).Debug("Switched REST endpoint")
return nil
}
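A minimal usage sketch for the provider above (hypothetical endpoints and caller; only the happy path plus a single illustrative failover). It relies on the fact that Handler() returns the same handler whose host SwitchHost updates:
// buildProvider wires a two-endpoint provider with a timeout and an auth header.
func buildProvider() (RestConnectionProvider, error) {
    return NewRestConnectionProvider(
        "http://host1:3500,http://host2:3500",
        WithHttpTimeout(10*time.Second),
        WithHttpHeaders(map[string][]string{"Authorization": {"Bearer token"}}),
    )
}
// fetchVersion queries /eth/v1/node/version and, on error, retries once against
// the second configured endpoint. The failover policy here is illustrative only.
func fetchVersion(ctx context.Context, p RestConnectionProvider) (string, error) {
    var resp struct {
        Data struct {
            Version string `json:"version"`
        } `json:"data"`
    }
    if err := p.Handler().Get(ctx, "/eth/v1/node/version", &resp); err == nil {
        return resp.Data.Version, nil
    }
    if err := p.SwitchHost(1); err != nil {
        return "", err
    }
    if err := p.Handler().Get(ctx, "/eth/v1/node/version", &resp); err != nil {
        return "", err
    }
    return resp.Data.Version, nil
}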

View File

@@ -0,0 +1,80 @@
package rest
import (
"reflect"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestParseEndpoints(t *testing.T) {
tests := []struct {
name string
input string
expected []string
}{
{"single endpoint", "http://localhost:3500", []string{"http://localhost:3500"}},
{"multiple endpoints", "http://host1:3500,http://host2:3500,http://host3:3500", []string{"http://host1:3500", "http://host2:3500", "http://host3:3500"}},
{"endpoints with spaces", "http://host1:3500, http://host2:3500 , http://host3:3500", []string{"http://host1:3500", "http://host2:3500", "http://host3:3500"}},
{"empty string", "", nil},
{"only commas", ",,,", []string{}},
{"trailing comma", "http://host1:3500,http://host2:3500,", []string{"http://host1:3500", "http://host2:3500"}},
{"leading comma", ",http://host1:3500,http://host2:3500", []string{"http://host1:3500", "http://host2:3500"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := parseEndpoints(tt.input)
if !reflect.DeepEqual(tt.expected, got) {
t.Errorf("parseEndpoints(%q) = %v, want %v", tt.input, got, tt.expected)
}
})
}
}
func TestNewRestConnectionProvider_Errors(t *testing.T) {
t.Run("no endpoints", func(t *testing.T) {
_, err := NewRestConnectionProvider("")
require.ErrorContains(t, "no REST API endpoints provided", err)
})
}
func TestRestConnectionProvider(t *testing.T) {
provider, err := NewRestConnectionProvider("http://host1:3500,http://host2:3500,http://host3:3500")
require.NoError(t, err)
t.Run("initial state", func(t *testing.T) {
assert.Equal(t, 3, len(provider.Hosts()))
assert.Equal(t, "http://host1:3500", provider.CurrentHost())
assert.NotNil(t, provider.HttpClient())
})
t.Run("SwitchHost", func(t *testing.T) {
require.NoError(t, provider.SwitchHost(1))
assert.Equal(t, "http://host2:3500", provider.CurrentHost())
require.NoError(t, provider.SwitchHost(0))
assert.Equal(t, "http://host1:3500", provider.CurrentHost())
require.ErrorContains(t, "invalid host index", provider.SwitchHost(-1))
require.ErrorContains(t, "invalid host index", provider.SwitchHost(3))
})
t.Run("Hosts returns copy", func(t *testing.T) {
hosts := provider.Hosts()
original := hosts[0]
hosts[0] = "modified"
assert.Equal(t, original, provider.Hosts()[0])
})
}
func TestRestConnectionProvider_WithOptions(t *testing.T) {
headers := map[string][]string{"Authorization": {"Bearer token"}}
provider, err := NewRestConnectionProvider(
"http://localhost:3500",
WithHttpHeaders(headers),
WithHttpTimeout(30000000000), // 30 seconds in nanoseconds
WithTracing(),
)
require.NoError(t, err)
assert.NotNil(t, provider.HttpClient())
assert.Equal(t, "http://localhost:3500", provider.CurrentHost())
}

View File

@@ -1,4 +1,4 @@
package beacon_api package rest
import ( import (
"bytes" "bytes"
@@ -21,37 +21,46 @@ import (
type reqOption func(*http.Request) type reqOption func(*http.Request)
type RestHandler interface { // Handler defines the interface for making REST API requests.
type Handler interface {
Get(ctx context.Context, endpoint string, resp any) error Get(ctx context.Context, endpoint string, resp any) error
GetStatusCode(ctx context.Context, endpoint string) (int, error) GetStatusCode(ctx context.Context, endpoint string) (int, error)
GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error)
Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error
PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error) PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error)
HttpClient() *http.Client
Host() string Host() string
SetHost(host string)
} }
type BeaconApiRestHandler struct { type handler struct {
client http.Client client http.Client
host string host string
reqOverrides []reqOption reqOverrides []reqOption
} }
// NewBeaconApiRestHandler returns a RestHandler // newHandler returns a *handler for internal use within the rest package.
func NewBeaconApiRestHandler(client http.Client, host string) RestHandler { func newHandler(client http.Client, host string) *handler {
brh := &BeaconApiRestHandler{ rh := &handler{
client: client, client: client,
host: host, host: host,
} }
brh.appendAcceptOverride() rh.appendAcceptOverride()
return brh return rh
}
// NewHandler returns a Handler
func NewHandler(client http.Client, host string) Handler {
rh := &handler{
client: client,
host: host,
}
rh.appendAcceptOverride()
return rh
} }
// appendAcceptOverride enables the Accept header to be customized at runtime via an environment variable. // appendAcceptOverride enables the Accept header to be customized at runtime via an environment variable.
// This is specified as an env var because it is a niche option that prysm may use for performance testing or debugging // This is specified as an env var because it is a niche option that prysm may use for performance testing or debugging
// but which users are unlikely to need. Using an env var keeps the set of user-facing flags cleaner. // but which users are unlikely to need. Using an env var keeps the set of user-facing flags cleaner.
func (c *BeaconApiRestHandler) appendAcceptOverride() { func (c *handler) appendAcceptOverride() {
if accept := os.Getenv(params.EnvNameOverrideAccept); accept != "" { if accept := os.Getenv(params.EnvNameOverrideAccept); accept != "" {
c.reqOverrides = append(c.reqOverrides, func(req *http.Request) { c.reqOverrides = append(c.reqOverrides, func(req *http.Request) {
req.Header.Set("Accept", accept) req.Header.Set("Accept", accept)
@@ -60,18 +69,18 @@ func (c *BeaconApiRestHandler) appendAcceptOverride() {
} }
// HttpClient returns the underlying HTTP client of the handler // HttpClient returns the underlying HTTP client of the handler
func (c *BeaconApiRestHandler) HttpClient() *http.Client { func (c *handler) HttpClient() *http.Client {
return &c.client return &c.client
} }
// Host returns the underlying HTTP host // Host returns the underlying HTTP host
func (c *BeaconApiRestHandler) Host() string { func (c *handler) Host() string {
return c.host return c.host
} }
// Get sends a GET request and decodes the response body as a JSON object into the passed in object. // Get sends a GET request and decodes the response body as a JSON object into the passed in object.
// If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value. // If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value.
func (c *BeaconApiRestHandler) Get(ctx context.Context, endpoint string, resp any) error { func (c *handler) Get(ctx context.Context, endpoint string, resp any) error {
url := c.host + endpoint url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil { if err != nil {
@@ -94,7 +103,7 @@ func (c *BeaconApiRestHandler) Get(ctx context.Context, endpoint string, resp an
// GetStatusCode sends a GET request and returns only the HTTP status code. // GetStatusCode sends a GET request and returns only the HTTP status code.
// This is useful for endpoints like /eth/v1/node/health that communicate status via HTTP codes // This is useful for endpoints like /eth/v1/node/health that communicate status via HTTP codes
// (200 = ready, 206 = syncing, 503 = unavailable) rather than response bodies. // (200 = ready, 206 = syncing, 503 = unavailable) rather than response bodies.
func (c *BeaconApiRestHandler) GetStatusCode(ctx context.Context, endpoint string) (int, error) { func (c *handler) GetStatusCode(ctx context.Context, endpoint string) (int, error) {
url := c.host + endpoint url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil { if err != nil {
@@ -113,7 +122,7 @@ func (c *BeaconApiRestHandler) GetStatusCode(ctx context.Context, endpoint strin
return httpResp.StatusCode, nil return httpResp.StatusCode, nil
} }
func (c *BeaconApiRestHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) { func (c *handler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) {
url := c.host + endpoint url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil { if err != nil {
@@ -168,7 +177,7 @@ func (c *BeaconApiRestHandler) GetSSZ(ctx context.Context, endpoint string) ([]b
// Post sends a POST request and decodes the response body as a JSON object into the passed in object. // Post sends a POST request and decodes the response body as a JSON object into the passed in object.
// If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value. // If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value.
func (c *BeaconApiRestHandler) Post( func (c *handler) Post(
ctx context.Context, ctx context.Context,
apiEndpoint string, apiEndpoint string,
headers map[string]string, headers map[string]string,
@@ -204,7 +213,7 @@ func (c *BeaconApiRestHandler) Post(
} }
// PostSSZ sends a POST request and prefers an SSZ (application/octet-stream) response body. // PostSSZ sends a POST request and prefers an SSZ (application/octet-stream) response body.
func (c *BeaconApiRestHandler) PostSSZ( func (c *handler) PostSSZ(
ctx context.Context, ctx context.Context,
apiEndpoint string, apiEndpoint string,
headers map[string]string, headers map[string]string,
@@ -305,6 +314,6 @@ func decodeResp(httpResp *http.Response, resp any) error {
return nil return nil
} }
func (c *BeaconApiRestHandler) SetHost(host string) { func (c *handler) SwitchHost(host string) {
c.host = host c.host = host
} }
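A small sketch of the exported constructor in its new home, using the GetStatusCode semantics documented above for /eth/v1/node/health (hypothetical caller; the status-code meanings are taken from the godoc):
// beaconHealthy reports whether the beacon node at host is fully ready.
func beaconHealthy(ctx context.Context, host string) (bool, error) {
    h := NewHandler(http.Client{Timeout: 5 * time.Second}, host)
    code, err := h.GetStatusCode(ctx, "/eth/v1/node/health")
    if err != nil {
        return false, err
    }
    switch code {
    case http.StatusOK: // 200: ready
        return true, nil
    case http.StatusPartialContent: // 206: syncing
        return false, nil
    default: // 503 or anything else: unavailable
        return false, nil
    }
}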

View File

@@ -17,27 +17,50 @@ import (
) )
// ProcessExecutionPayloadBid processes a signed execution payload bid in the Gloas fork. // ProcessExecutionPayloadBid processes a signed execution payload bid in the Gloas fork.
// Spec v1.7.0-alpha.0 (pseudocode):
// process_execution_payload_bid(state: BeaconState, block: BeaconBlock):
// //
// signed_bid = block.body.signed_execution_payload_bid // <spec fn="process_execution_payload_bid" fork="gloas" hash="6dc696bb">
// bid = signed_bid.message // def process_execution_payload_bid(state: BeaconState, block: BeaconBlock) -> None:
// builder_index = bid.builder_index // signed_bid = block.body.signed_execution_payload_bid
// amount = bid.value // bid = signed_bid.message
// if builder_index == BUILDER_INDEX_SELF_BUILD: // builder_index = bid.builder_index
// assert amount == 0 // amount = bid.value
// assert signed_bid.signature == G2_POINT_AT_INFINITY //
// else: // # For self-builds, amount must be zero regardless of withdrawal credential prefix
// assert is_active_builder(state, builder_index) // if builder_index == BUILDER_INDEX_SELF_BUILD:
// assert can_builder_cover_bid(state, builder_index, amount) // assert amount == 0
// assert verify_execution_payload_bid_signature(state, signed_bid) // assert signed_bid.signature == bls.G2_POINT_AT_INFINITY
// assert bid.slot == block.slot // else:
// assert bid.parent_block_hash == state.latest_block_hash // # Verify that the builder is active
// assert bid.parent_block_root == block.parent_root // assert is_active_builder(state, builder_index)
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state)) // # Verify that the builder has funds to cover the bid
// if amount > 0: // assert can_builder_cover_bid(state, builder_index, amount)
// state.builder_pending_payments[...] = BuilderPendingPayment(weight=0, withdrawal=BuilderPendingWithdrawal(fee_recipient=bid.fee_recipient, amount=amount, builder_index=builder_index)) // # Verify that the bid signature is valid
// state.latest_execution_payload_bid = bid // assert verify_execution_payload_bid_signature(state, signed_bid)
//
// # Verify that the bid is for the current slot
// assert bid.slot == block.slot
// # Verify that the bid is for the right parent block
// assert bid.parent_block_hash == state.latest_block_hash
// assert bid.parent_block_root == block.parent_root
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state))
//
// # Record the pending payment if there is some payment
// if amount > 0:
// pending_payment = BuilderPendingPayment(
// weight=0,
// withdrawal=BuilderPendingWithdrawal(
// fee_recipient=bid.fee_recipient,
// amount=amount,
// builder_index=builder_index,
// ),
// )
// state.builder_pending_payments[SLOTS_PER_EPOCH + bid.slot % SLOTS_PER_EPOCH] = (
// pending_payment
// )
//
// # Cache the signed execution payload bid
// state.latest_execution_payload_bid = bid
// </spec>
func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) error { func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) error {
signedBid, err := block.Body().SignedExecutionPayloadBid() signedBid, err := block.Body().SignedExecutionPayloadBid()
if err != nil { if err != nil {

View File

@@ -24,14 +24,21 @@ import (
) )
// ProcessPayloadAttestations validates payload attestations in a block body. // ProcessPayloadAttestations validates payload attestations in a block body.
// Spec v1.7.0-alpha.0 (pseudocode):
// process_payload_attestation(state: BeaconState, payload_attestation: PayloadAttestation):
// //
// data = payload_attestation.data // <spec fn="process_payload_attestation" fork="gloas" hash="f46bf0b0">
// assert data.beacon_block_root == state.latest_block_header.parent_root // def process_payload_attestation(
// assert data.slot + 1 == state.slot // state: BeaconState, payload_attestation: PayloadAttestation
// indexed = get_indexed_payload_attestation(state, data.slot, payload_attestation) // ) -> None:
// assert is_valid_indexed_payload_attestation(state, indexed) // data = payload_attestation.data
//
// # Check that the attestation is for the parent beacon block
// assert data.beacon_block_root == state.latest_block_header.parent_root
// # Check that the attestation is for the previous slot
// assert data.slot + 1 == state.slot
// # Verify signature
// indexed_payload_attestation = get_indexed_payload_attestation(state, payload_attestation)
// assert is_valid_indexed_payload_attestation(state, indexed_payload_attestation)
// </spec>
func ProcessPayloadAttestations(ctx context.Context, st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error { func ProcessPayloadAttestations(ctx context.Context, st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error {
atts, err := body.PayloadAttestations() atts, err := body.PayloadAttestations()
if err != nil { if err != nil {
@@ -90,17 +97,24 @@ func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState
} }
// payloadCommittee returns the payload timeliness committee for a given slot for the state. // payloadCommittee returns the payload timeliness committee for a given slot for the state.
// Spec v1.7.0-alpha.0 (pseudocode):
// get_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// //
// epoch = compute_epoch_at_slot(slot) // <spec fn="get_ptc" fork="gloas" hash="ae15f761">
// seed = hash(get_seed(state, epoch, DOMAIN_PTC_ATTESTER) + uint_to_bytes(slot)) // def get_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// indices = [] // """
// committees_per_slot = get_committee_count_per_slot(state, epoch) // Get the payload timeliness committee for the given ``slot``.
// for i in range(committees_per_slot): // """
// committee = get_beacon_committee(state, slot, CommitteeIndex(i)) // epoch = compute_epoch_at_slot(slot)
// indices.extend(committee) // seed = hash(get_seed(state, epoch, DOMAIN_PTC_ATTESTER) + uint_to_bytes(slot))
// return compute_balance_weighted_selection(state, indices, seed, size=PTC_SIZE, shuffle_indices=False) // indices: List[ValidatorIndex] = []
// # Concatenate all committees for this slot in order
// committees_per_slot = get_committee_count_per_slot(state, epoch)
// for i in range(committees_per_slot):
// committee = get_beacon_committee(state, slot, CommitteeIndex(i))
// indices.extend(committee)
// return compute_balance_weighted_selection(
// state, indices, seed, size=PTC_SIZE, shuffle_indices=False
// )
// </spec>
func payloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) { func payloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
epoch := slots.ToEpoch(slot) epoch := slots.ToEpoch(slot)
seed, err := ptcSeed(st, epoch, slot) seed, err := ptcSeed(st, epoch, slot)
@@ -114,17 +128,32 @@ func payloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot pr
} }
committeesPerSlot := helpers.SlotCommitteeCount(activeCount) committeesPerSlot := helpers.SlotCommitteeCount(activeCount)
out := make([]primitives.ValidatorIndex, 0, activeCount/uint64(params.BeaconConfig().SlotsPerEpoch))
for i := primitives.CommitteeIndex(0); i < primitives.CommitteeIndex(committeesPerSlot); i++ { selected := make([]primitives.ValidatorIndex, 0, fieldparams.PTCSize)
committee, err := helpers.BeaconCommitteeFromState(ctx, st, slot, i) var i uint64
if err != nil { for uint64(len(selected)) < fieldparams.PTCSize {
return nil, errors.Wrapf(err, "failed to get beacon committee %d", i) if ctx.Err() != nil {
return nil, ctx.Err()
}
for committeeIndex := primitives.CommitteeIndex(0); committeeIndex < primitives.CommitteeIndex(committeesPerSlot); committeeIndex++ {
if uint64(len(selected)) >= fieldparams.PTCSize {
break
}
committee, err := helpers.BeaconCommitteeFromState(ctx, st, slot, committeeIndex)
if err != nil {
return nil, errors.Wrapf(err, "failed to get beacon committee %d", committeeIndex)
}
selected, i, err = selectByBalanceFill(ctx, st, committee, seed, selected, i)
if err != nil {
return nil, errors.Wrapf(err, "failed to sample beacon committee %d", committeeIndex)
}
} }
out = append(out, committee...)
} }
return selectByBalance(ctx, st, out, seed, fieldparams.PTCSize) return selected, nil
} }
// ptcSeed computes the seed for the payload timeliness committee. // ptcSeed computes the seed for the payload timeliness committee.
@@ -137,56 +166,87 @@ func ptcSeed(st state.ReadOnlyBeaconState, epoch primitives.Epoch, slot primitiv
} }
// selectByBalance selects a balance-weighted subset of input candidates. // selectByBalance selects a balance-weighted subset of input candidates.
// Spec v1.7.0-alpha.0 (pseudocode):
// compute_balance_weighted_selection(state, indices, seed, size, shuffle_indices):
// Note: shuffle_indices is false for PTC.
// //
// total = len(indices); selected = []; i = 0 // <spec fn="compute_balance_weighted_selection" fork="gloas" hash="2c9f1c23">
// while len(selected) < size: // def compute_balance_weighted_selection(
// next = i % total // state: BeaconState,
// if shuffle_indices: next = compute_shuffled_index(next, total, seed) // indices: Sequence[ValidatorIndex],
// if compute_balance_weighted_acceptance(state, indices[next], seed, i): // seed: Bytes32,
// selected.append(indices[next]) // size: uint64,
// i += 1 // shuffle_indices: bool,
func selectByBalance(ctx context.Context, st state.ReadOnlyBeaconState, candidates []primitives.ValidatorIndex, seed [32]byte, count uint64) ([]primitives.ValidatorIndex, error) { // ) -> Sequence[ValidatorIndex]:
if len(candidates) == 0 { // """
return nil, errors.New("no candidates for balance weighted selection") // Return ``size`` indices sampled by effective balance, using ``indices``
} // as candidates. If ``shuffle_indices`` is ``True``, candidate indices
// are themselves sampled from ``indices`` by shuffling it, otherwise
// ``indices`` is traversed in order.
// """
// total = uint64(len(indices))
// assert total > 0
// selected: List[ValidatorIndex] = []
// i = uint64(0)
// while len(selected) < size:
// next_index = i % total
// if shuffle_indices:
// next_index = compute_shuffled_index(next_index, total, seed)
// candidate_index = indices[next_index]
// if compute_balance_weighted_acceptance(state, candidate_index, seed, i):
// selected.append(candidate_index)
// i += 1
// return selected
// </spec>
func selectByBalanceFill(
ctx context.Context,
st state.ReadOnlyBeaconState,
candidates []primitives.ValidatorIndex,
seed [32]byte,
selected []primitives.ValidatorIndex,
i uint64,
) ([]primitives.ValidatorIndex, uint64, error) {
hashFunc := hash.CustomSHA256Hasher() hashFunc := hash.CustomSHA256Hasher()
// Pre-allocate buffer for hash input: seed (32 bytes) + round counter (8 bytes). // Pre-allocate buffer for hash input: seed (32 bytes) + round counter (8 bytes).
var buf [40]byte var buf [40]byte
copy(buf[:], seed[:]) copy(buf[:], seed[:])
maxBalance := params.BeaconConfig().MaxEffectiveBalanceElectra maxBalance := params.BeaconConfig().MaxEffectiveBalanceElectra
selected := make([]primitives.ValidatorIndex, 0, count) for _, idx := range candidates {
total := uint64(len(candidates))
for i := uint64(0); uint64(len(selected)) < count; i++ {
if ctx.Err() != nil { if ctx.Err() != nil {
return nil, ctx.Err() return nil, i, ctx.Err()
} }
idx := candidates[i%total]
ok, err := acceptByBalance(st, idx, buf[:], hashFunc, maxBalance, i) ok, err := acceptByBalance(st, idx, buf[:], hashFunc, maxBalance, i)
if err != nil { if err != nil {
return nil, err return nil, i, err
} }
if ok { if ok {
selected = append(selected, idx) selected = append(selected, idx)
} }
if uint64(len(selected)) == fieldparams.PTCSize {
break
}
i++
} }
return selected, nil
return selected, i, nil
} }
// acceptByBalance determines if a validator is accepted based on its effective balance. // acceptByBalance determines if a validator is accepted based on its effective balance.
// Spec v1.7.0-alpha.0 (pseudocode):
// compute_balance_weighted_acceptance(state, index, seed, i):
// //
// MAX_RANDOM_VALUE = 2**16 - 1 // <spec fn="compute_balance_weighted_acceptance" fork="gloas" hash="9954dcd0">
// random_bytes = hash(seed + uint_to_bytes(i // 16)) // def compute_balance_weighted_acceptance(
// offset = i % 16 * 2 // state: BeaconState, index: ValidatorIndex, seed: Bytes32, i: uint64
// random_value = bytes_to_uint64(random_bytes[offset:offset+2]) // ) -> bool:
// effective_balance = state.validators[index].effective_balance // """
// return effective_balance * MAX_RANDOM_VALUE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_value // Return whether to accept the selection of the validator ``index``, with probability
// proportional to its ``effective_balance``, and randomness given by ``seed`` and ``i``.
// """
// MAX_RANDOM_VALUE = 2**16 - 1
// random_bytes = hash(seed + uint_to_bytes(i // 16))
// offset = i % 16 * 2
// random_value = bytes_to_uint64(random_bytes[offset : offset + 2])
// effective_balance = state.validators[index].effective_balance
// return effective_balance * MAX_RANDOM_VALUE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_value
// </spec>
func acceptByBalance(st state.ReadOnlyBeaconState, idx primitives.ValidatorIndex, seedBuf []byte, hashFunc func([]byte) [32]byte, maxBalance uint64, round uint64) (bool, error) { func acceptByBalance(st state.ReadOnlyBeaconState, idx primitives.ValidatorIndex, seedBuf []byte, hashFunc func([]byte) [32]byte, maxBalance uint64, round uint64) (bool, error) {
// Reuse the seed buffer by overwriting the last 8 bytes with the round counter. // Reuse the seed buffer by overwriting the last 8 bytes with the round counter.
binary.LittleEndian.PutUint64(seedBuf[len(seedBuf)-8:], round/16) binary.LittleEndian.PutUint64(seedBuf[len(seedBuf)-8:], round/16)
@@ -203,16 +263,26 @@ func acceptByBalance(st state.ReadOnlyBeaconState, idx primitives.ValidatorIndex
} }
// validIndexedPayloadAttestation verifies the signature of an indexed payload attestation. // validIndexedPayloadAttestation verifies the signature of an indexed payload attestation.
// Spec v1.7.0-alpha.0 (pseudocode):
// is_valid_indexed_payload_attestation(state: BeaconState, indexed_payload_attestation: IndexedPayloadAttestation) -> bool:
// //
// indices = indexed_payload_attestation.attesting_indices // <spec fn="is_valid_indexed_payload_attestation" fork="gloas" hash="cf1e65b5">
// return len(indices) > 0 and indices == sorted(indices) and // def is_valid_indexed_payload_attestation(
// bls.FastAggregateVerify( // state: BeaconState, indexed_payload_attestation: IndexedPayloadAttestation
// [state.validators[i].pubkey for i in indices], // ) -> bool:
// compute_signing_root(indexed_payload_attestation.data, get_domain(state, DOMAIN_PTC_ATTESTER, compute_epoch_at_slot(attestation.data.slot)), // """
// indexed_payload_attestation.signature, // Check if ``indexed_payload_attestation`` is non-empty, has sorted indices, and has
// ) // a valid aggregate signature.
// """
// # Verify indices are non-empty and sorted
// indices = indexed_payload_attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(indices):
// return False
//
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_PTC_ATTESTER, None)
// signing_root = compute_signing_root(indexed_payload_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_payload_attestation.signature)
// </spec>
func validIndexedPayloadAttestation(st state.ReadOnlyBeaconState, att *consensus_types.IndexedPayloadAttestation) error { func validIndexedPayloadAttestation(st state.ReadOnlyBeaconState, att *consensus_types.IndexedPayloadAttestation) error {
indices := att.AttestingIndices indices := att.AttestingIndices
if len(indices) == 0 || !slices.IsSorted(indices) { if len(indices) == 0 || !slices.IsSorted(indices) {
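The acceptance rule in compute_balance_weighted_acceptance / acceptByBalance makes the per-round selection probability proportional to effective balance: with MAX_EFFECTIVE_BALANCE_ELECTRA at 2048 ETH, a 32 ETH validator passes roughly 32/2048 ≈ 1.6% of rounds. A standalone sketch of just that comparison (assumed mainnet constant; not the Prysm implementation):
// acceptedByBalance reports whether a candidate with the given effective
// balance (in Gwei) is accepted for a random value drawn uniformly from
// [0, 2^16-1], mirroring the spec comparison above.
func acceptedByBalance(effectiveBalanceGwei, randomValue uint64) bool {
    const maxRandomValue uint64 = 1<<16 - 1              // MAX_RANDOM_VALUE
    const maxEffectiveBalance uint64 = 2048_000_000_000  // MAX_EFFECTIVE_BALANCE_ELECTRA (assumed 2048 ETH in Gwei)
    return effectiveBalanceGwei*maxRandomValue >= maxEffectiveBalance*randomValue
}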

View File

@@ -10,17 +10,21 @@ import (
) )
// ProcessBuilderPendingPayments processes the builder pending payments from the previous epoch. // ProcessBuilderPendingPayments processes the builder pending payments from the previous epoch.
// Spec v1.7.0-alpha.0 (pseudocode):
// def process_builder_pending_payments(state: BeaconState) -> None:
// //
// quorum = get_builder_payment_quorum_threshold(state) // <spec fn="process_builder_pending_payments" fork="gloas" hash="10da48dd">
// for payment in state.builder_pending_payments[:SLOTS_PER_EPOCH]: // def process_builder_pending_payments(state: BeaconState) -> None:
// if payment.weight >= quorum: // """
// state.builder_pending_withdrawals.append(payment.withdrawal) // Processes the builder pending payments from the previous epoch.
// """
// quorum = get_builder_payment_quorum_threshold(state)
// for payment in state.builder_pending_payments[:SLOTS_PER_EPOCH]:
// if payment.weight >= quorum:
// state.builder_pending_withdrawals.append(payment.withdrawal)
// //
// old_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:] // old_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:]
// new_payments = [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)] // new_payments = [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
// state.builder_pending_payments = old_payments + new_payments // state.builder_pending_payments = old_payments + new_payments
// </spec>
func ProcessBuilderPendingPayments(state state.BeaconState) error { func ProcessBuilderPendingPayments(state state.BeaconState) error {
quorum, err := builderQuorumThreshold(state) quorum, err := builderQuorumThreshold(state)
if err != nil { if err != nil {
@@ -53,12 +57,16 @@ func ProcessBuilderPendingPayments(state state.BeaconState) error {
} }
// builderQuorumThreshold calculates the quorum threshold for builder payments. // builderQuorumThreshold calculates the quorum threshold for builder payments.
// Spec v1.7.0-alpha.0 (pseudocode):
// def get_builder_payment_quorum_threshold(state: BeaconState) -> uint64:
// //
// per_slot_balance = get_total_active_balance(state) // SLOTS_PER_EPOCH // <spec fn="get_builder_payment_quorum_threshold" fork="gloas" hash="a64b7ffb">
// quorum = per_slot_balance * BUILDER_PAYMENT_THRESHOLD_NUMERATOR // def get_builder_payment_quorum_threshold(state: BeaconState) -> uint64:
// return uint64(quorum // BUILDER_PAYMENT_THRESHOLD_DENOMINATOR) // """
// Calculate the quorum threshold for builder payments.
// """
// per_slot_balance = get_total_active_balance(state) // SLOTS_PER_EPOCH
// quorum = per_slot_balance * BUILDER_PAYMENT_THRESHOLD_NUMERATOR
// return uint64(quorum // BUILDER_PAYMENT_THRESHOLD_DENOMINATOR)
// </spec>
func builderQuorumThreshold(state state.ReadOnlyBeaconState) (primitives.Gwei, error) { func builderQuorumThreshold(state state.ReadOnlyBeaconState) (primitives.Gwei, error) {
activeBalance, err := helpers.TotalActiveBalance(state) activeBalance, err := helpers.TotalActiveBalance(state)
if err != nil { if err != nil {

View File

@@ -11,16 +11,20 @@ import (
) )
// RemoveBuilderPendingPayment removes the pending builder payment for the proposal slot. // RemoveBuilderPendingPayment removes the pending builder payment for the proposal slot.
// Spec v1.7.0 (pseudocode):
// //
// <spec fn="process_proposer_slashing" fork="gloas" lines="22-32" hash="4da721ef">
// # [New in Gloas:EIP7732]
// # Remove the BuilderPendingPayment corresponding to
// # this proposal if it is still in the 2-epoch window.
// slot = header_1.slot // slot = header_1.slot
// proposal_epoch = compute_epoch_at_slot(slot) // proposal_epoch = compute_epoch_at_slot(slot)
// if proposal_epoch == get_current_epoch(state): // if proposal_epoch == get_current_epoch(state):
// payment_index = SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH // payment_index = SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment() // state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// elif proposal_epoch == get_previous_epoch(state): // elif proposal_epoch == get_previous_epoch(state):
// payment_index = slot % SLOTS_PER_EPOCH // payment_index = slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment() // state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// </spec>
func RemoveBuilderPendingPayment(st state.BeaconState, header *eth.BeaconBlockHeader) error { func RemoveBuilderPendingPayment(st state.BeaconState, header *eth.BeaconBlockHeader) error {
proposalEpoch := slots.ToEpoch(header.Slot) proposalEpoch := slots.ToEpoch(header.Slot)
currentEpoch := time.CurrentEpoch(st) currentEpoch := time.CurrentEpoch(st)

View File

@@ -143,10 +143,11 @@ func ProcessSlot(ctx context.Context, state state.BeaconState) (state.BeaconStat
return nil, err return nil, err
} }
// Spec v1.6.1 (pseudocode): // <spec fn="process_slot" fork="gloas" lines="11-13" hash="62b28839">
// # [New in Gloas:EIP7732] // # [New in Gloas:EIP7732]
// # Unset the next payload availability // # Unset the next payload availability
// state.execution_payload_availability[(state.slot + 1) % SLOTS_PER_HISTORICAL_ROOT] = 0b0 // state.execution_payload_availability[(state.slot + 1) % SLOTS_PER_HISTORICAL_ROOT] = 0b0
// </spec>
if state.Version() >= version.Gloas { if state.Version() >= version.Gloas {
index := uint64((state.Slot() + 1) % params.BeaconConfig().SlotsPerHistoricalRoot) index := uint64((state.Slot() + 1) % params.BeaconConfig().SlotsPerHistoricalRoot)
if err := state.UpdateExecutionPayloadAvailabilityAtIndex(index, 0x0); err != nil { if err := state.UpdateExecutionPayloadAvailabilityAtIndex(index, 0x0); err != nil {

View File

@@ -67,7 +67,6 @@ func getSubscriptionStatusFromDB(t *testing.T, db *Store) bool {
return subscribed return subscribed
} }
func TestUpdateCustodyInfo(t *testing.T) { func TestUpdateCustodyInfo(t *testing.T) {
ctx := t.Context() ctx := t.Context()

View File

@@ -134,10 +134,20 @@ type BeaconNode struct {
// New creates a new node instance, sets up configuration options, and registers // New creates a new node instance, sets up configuration options, and registers
// every required service to the node. // every required service to the node.
func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*BeaconNode, error) { func New(cliCtx *cli.Context, cancel context.CancelFunc, optFuncs []func(*cli.Context) ([]Option, error), opts ...Option) (*BeaconNode, error) {
if err := configureBeacon(cliCtx); err != nil { if err := configureBeacon(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not set beacon configuration options") return nil, errors.Wrap(err, "could not set beacon configuration options")
} }
for _, of := range optFuncs {
ofo, err := of(cliCtx)
if err != nil {
return nil, err
}
if ofo != nil {
opts = append(opts, ofo...)
}
}
ctx := cliCtx.Context ctx := cliCtx.Context
beacon := &BeaconNode{ beacon := &BeaconNode{

View File

@@ -59,7 +59,7 @@ func TestNodeClose_OK(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
} }
node, err := New(ctx, cancel, options...) node, err := New(ctx, cancel, nil, options...)
require.NoError(t, err) require.NoError(t, err)
node.Close() node.Close()
@@ -87,7 +87,7 @@ func TestNodeStart_Ok(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
} }
node, err := New(ctx, cancel, options...) node, err := New(ctx, cancel, nil, options...)
require.NoError(t, err) require.NoError(t, err)
require.NotNil(t, node.lcStore) require.NotNil(t, node.lcStore)
node.services = &runtime.ServiceRegistry{} node.services = &runtime.ServiceRegistry{}
@@ -116,7 +116,7 @@ func TestNodeStart_SyncChecker(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
} }
node, err := New(ctx, cancel, options...) node, err := New(ctx, cancel, nil, options...)
require.NoError(t, err) require.NoError(t, err)
go func() { go func() {
node.Start() node.Start()
@@ -151,7 +151,7 @@ func TestClearDB(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)), WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
} }
_, err = New(context, cancel, options...) _, err = New(context, cancel, nil, options...)
require.NoError(t, err) require.NoError(t, err)
require.LogsContain(t, hook, "Removing database") require.LogsContain(t, hook, "Removing database")
} }

View File

@@ -26,8 +26,8 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/voluntaryexits/mock" "github.com/OffchainLabs/prysm/v7/beacon-chain/operations/voluntaryexits/mock"
p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing" p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core" "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native" state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v7/config/params" "github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives" "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls" "github.com/OffchainLabs/prysm/v7/crypto/bls"

View File

@@ -48,6 +48,7 @@ go_test(
"@com_github_ethereum_go_ethereum//crypto:go_default_library", "@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library", "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@org_golang_google_grpc//:go_default_library", "@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
"@org_golang_google_grpc//reflection:go_default_library", "@org_golang_google_grpc//reflection:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library", "@org_golang_google_protobuf//types/known/emptypb:go_default_library",
"@org_golang_google_protobuf//types/known/timestamppb:go_default_library", "@org_golang_google_protobuf//types/known/timestamppb:go_default_library",

View File

@@ -35,18 +35,19 @@ import (
// providing RPC endpoints for verifying a beacon node's sync status, genesis and // providing RPC endpoints for verifying a beacon node's sync status, genesis and
// version information, and services the node implements and runs. // version information, and services the node implements and runs.
type Server struct { type Server struct {
LogsStreamer logs.Streamer LogsStreamer logs.Streamer
StreamLogsBufferSize int StreamLogsBufferSize int
SyncChecker sync.Checker SyncChecker sync.Checker
Server *grpc.Server Server *grpc.Server
BeaconDB db.ReadOnlyDatabase BeaconDB db.ReadOnlyDatabase
PeersFetcher p2p.PeersProvider PeersFetcher p2p.PeersProvider
PeerManager p2p.PeerManager PeerManager p2p.PeerManager
GenesisTimeFetcher blockchain.TimeFetcher GenesisTimeFetcher blockchain.TimeFetcher
GenesisFetcher blockchain.GenesisFetcher GenesisFetcher blockchain.GenesisFetcher
POWChainInfoFetcher execution.ChainInfoFetcher POWChainInfoFetcher execution.ChainInfoFetcher
BeaconMonitoringHost string BeaconMonitoringHost string
BeaconMonitoringPort int BeaconMonitoringPort int
OptimisticModeFetcher blockchain.OptimisticModeFetcher
} }
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API. // Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
@@ -61,21 +62,28 @@ func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (
ctx, cancel := context.WithTimeout(ctx, timeoutDuration) ctx, cancel := context.WithTimeout(ctx, timeoutDuration)
defer cancel() // Important to avoid a context leak defer cancel() // Important to avoid a context leak
if ns.SyncChecker.Synced() { // Check optimistic status - validators should not participate when optimistic
isOptimistic, err := ns.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not check optimistic status: %v", err)
}
if ns.SyncChecker.Synced() && !isOptimistic {
return &empty.Empty{}, nil return &empty.Empty{}, nil
} }
if ns.SyncChecker.Syncing() || ns.SyncChecker.Initialized() { if ns.SyncChecker.Syncing() || ns.SyncChecker.Initialized() {
if request.SyncingStatus != 0 { // Set header for REST API clients (via gRPC-gateway)
// override the 200 success with the provided request status
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(request.SyncingStatus, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set custom success code header: %v", err)
}
return &empty.Empty{}, nil
}
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil { if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set custom success code header: %v", err) return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set status code header: %v", err)
} }
return &empty.Empty{}, nil return &empty.Empty{}, status.Error(codes.Unavailable, "node is syncing")
}
if isOptimistic {
// Set header for REST API clients (via gRPC-gateway)
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set status code header: %v", err)
}
return &empty.Empty{}, status.Error(codes.Unavailable, "node is optimistic")
} }
return &empty.Empty{}, status.Errorf(codes.Unavailable, "service unavailable") return &empty.Empty{}, status.Errorf(codes.Unavailable, "service unavailable")
} }
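A sketch of a gRPC client reacting to the new behavior (assuming the generated ethpb.NewNodeClient; under this change codes.Unavailable covers "service unavailable", "node is syncing", and "node is optimistic", while the x-http-code header only matters for REST clients going through the gateway):
// beaconReady reports whether the beacon node is synced and not optimistic.
func beaconReady(ctx context.Context, conn *grpc.ClientConn) (bool, error) {
    _, err := ethpb.NewNodeClient(conn).GetHealth(ctx, &ethpb.HealthRequest{})
    if err == nil {
        return true, nil
    }
    if status.Code(err) == codes.Unavailable {
        return false, nil // syncing, optimistic, or otherwise unavailable
    }
    return false, err
}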

View File

@@ -2,6 +2,7 @@ package node
import ( import (
"errors" "errors"
"maps"
"testing" "testing"
"time" "time"
@@ -21,6 +22,7 @@ import (
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enode"
"google.golang.org/grpc" "google.golang.org/grpc"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/reflection" "google.golang.org/grpc/reflection"
"google.golang.org/protobuf/types/known/emptypb" "google.golang.org/protobuf/types/known/emptypb"
"google.golang.org/protobuf/types/known/timestamppb" "google.golang.org/protobuf/types/known/timestamppb"
@@ -187,32 +189,71 @@ func TestNodeServer_GetETH1ConnectionStatus(t *testing.T) {
assert.Equal(t, errStr, res.CurrentConnectionError) assert.Equal(t, errStr, res.CurrentConnectionError)
} }
// mockServerTransportStream implements grpc.ServerTransportStream for testing
type mockServerTransportStream struct {
headers map[string][]string
}
func (m *mockServerTransportStream) Method() string { return "" }
func (m *mockServerTransportStream) SetHeader(md metadata.MD) error {
maps.Copy(m.headers, md)
return nil
}
func (m *mockServerTransportStream) SendHeader(metadata.MD) error { return nil }
func (m *mockServerTransportStream) SetTrailer(metadata.MD) error { return nil }
func TestNodeServer_GetHealth(t *testing.T) { func TestNodeServer_GetHealth(t *testing.T) {
tests := []struct { tests := []struct {
name string name string
input *mockSync.Sync input *mockSync.Sync
customStatus uint64 isOptimistic bool
wantedErr string wantedErr string
}{ }{
{ {
name: "happy path", name: "happy path - synced and not optimistic",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true}, input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
isOptimistic: false,
}, },
{ {
name: "syncing", name: "returns error when not synced and not syncing",
input: &mockSync.Sync{IsSyncing: false}, input: &mockSync.Sync{IsSyncing: false, IsSynced: false},
wantedErr: "service unavailable", isOptimistic: false,
wantedErr: "service unavailable",
},
{
name: "returns error when syncing",
input: &mockSync.Sync{IsSyncing: true, IsSynced: false},
isOptimistic: false,
wantedErr: "node is syncing",
},
{
name: "returns error when synced but optimistic",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
isOptimistic: true,
wantedErr: "node is optimistic",
},
{
name: "returns error when syncing and optimistic",
input: &mockSync.Sync{IsSyncing: true, IsSynced: false},
isOptimistic: true,
wantedErr: "node is syncing",
}, },
} }
for _, tt := range tests { for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) { t.Run(tt.name, func(t *testing.T) {
server := grpc.NewServer() server := grpc.NewServer()
ns := &Server{ ns := &Server{
SyncChecker: tt.input, SyncChecker: tt.input,
OptimisticModeFetcher: &mock.ChainService{Optimistic: tt.isOptimistic},
} }
ethpb.RegisterNodeServer(server, ns) ethpb.RegisterNodeServer(server, ns)
reflection.Register(server) reflection.Register(server)
_, err := ns.GetHealth(t.Context(), &ethpb.HealthRequest{SyncingStatus: tt.customStatus})
// Create context with mock transport stream so grpc.SetHeader works
stream := &mockServerTransportStream{headers: make(map[string][]string)}
ctx := grpc.NewContextWithServerTransportStream(t.Context(), stream)
_, err := ns.GetHealth(ctx, &ethpb.HealthRequest{})
if tt.wantedErr == "" { if tt.wantedErr == "" {
require.NoError(t, err) require.NoError(t, err)
return return

View File

@@ -259,18 +259,19 @@ func NewService(ctx context.Context, cfg *Config) *Service {
} }
s.validatorServer = validatorServer s.validatorServer = validatorServer
nodeServer := &nodev1alpha1.Server{ nodeServer := &nodev1alpha1.Server{
LogsStreamer: logs.NewStreamServer(), LogsStreamer: logs.NewStreamServer(),
StreamLogsBufferSize: 1000, // Enough to handle bursts of beacon node logs for gRPC streaming. StreamLogsBufferSize: 1000, // Enough to handle bursts of beacon node logs for gRPC streaming.
BeaconDB: s.cfg.BeaconDB, BeaconDB: s.cfg.BeaconDB,
Server: s.grpcServer, Server: s.grpcServer,
SyncChecker: s.cfg.SyncService, SyncChecker: s.cfg.SyncService,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher, GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
PeersFetcher: s.cfg.PeersFetcher, PeersFetcher: s.cfg.PeersFetcher,
PeerManager: s.cfg.PeerManager, PeerManager: s.cfg.PeerManager,
GenesisFetcher: s.cfg.GenesisFetcher, GenesisFetcher: s.cfg.GenesisFetcher,
POWChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher, POWChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
BeaconMonitoringHost: s.cfg.BeaconMonitoringHost, BeaconMonitoringHost: s.cfg.BeaconMonitoringHost,
BeaconMonitoringPort: s.cfg.BeaconMonitoringPort, BeaconMonitoringPort: s.cfg.BeaconMonitoringPort,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
} }
beaconChainServer := &beaconv1alpha1.Server{ beaconChainServer := &beaconv1alpha1.Server{
Ctx: s.ctx, Ctx: s.ctx,

View File

@@ -46,14 +46,20 @@ func (b *BeaconState) BuilderPubkey(builderIndex primitives.BuilderIndex) ([fiel
} }
// IsActiveBuilder returns true if the builder placement is finalized and it has not initiated exit. // IsActiveBuilder returns true if the builder placement is finalized and it has not initiated exit.
// Spec v1.7.0-alpha.0 (pseudocode):
// def is_active_builder(state: BeaconState, builder_index: BuilderIndex) -> bool:
// //
// builder = state.builders[builder_index] // <spec fn="is_active_builder" fork="gloas" hash="1a599fb2">
// return ( // def is_active_builder(state: BeaconState, builder_index: BuilderIndex) -> bool:
// builder.deposit_epoch < state.finalized_checkpoint.epoch // """
// and builder.withdrawable_epoch == FAR_FUTURE_EPOCH // Check if the builder at ``builder_index`` is active for the given ``state``.
// ) // """
// builder = state.builders[builder_index]
// return (
// # Placement in builder list is finalized
// builder.deposit_epoch < state.finalized_checkpoint.epoch
// # Has not initiated exit
// and builder.withdrawable_epoch == FAR_FUTURE_EPOCH
// )
// </spec>
func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (bool, error) { func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (bool, error) {
if b.version < version.Gloas { if b.version < version.Gloas {
return false, errNotSupported("IsActiveBuilder", b.version) return false, errNotSupported("IsActiveBuilder", b.version)
@@ -72,15 +78,18 @@ func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (boo
} }
// CanBuilderCoverBid returns true if the builder has enough balance to cover the given bid amount. // CanBuilderCoverBid returns true if the builder has enough balance to cover the given bid amount.
// Spec v1.7.0-alpha.0 (pseudocode):
// def can_builder_cover_bid(state: BeaconState, builder_index: BuilderIndex, bid_amount: Gwei) -> bool:
// //
// builder_balance = state.builders[builder_index].balance // <spec fn="can_builder_cover_bid" fork="gloas" hash="9e3f2d7c">
// pending_withdrawals_amount = get_pending_balance_to_withdraw_for_builder(state, builder_index) // def can_builder_cover_bid(
// min_balance = MIN_DEPOSIT_AMOUNT + pending_withdrawals_amount // state: BeaconState, builder_index: BuilderIndex, bid_amount: Gwei
// if builder_balance < min_balance: // ) -> bool:
// return False // builder_balance = state.builders[builder_index].balance
// return builder_balance - min_balance >= bid_amount // pending_withdrawals_amount = get_pending_balance_to_withdraw_for_builder(state, builder_index)
// min_balance = MIN_DEPOSIT_AMOUNT + pending_withdrawals_amount
// if builder_balance < min_balance:
// return False
// return builder_balance - min_balance >= bid_amount
// </spec>
func (b *BeaconState) CanBuilderCoverBid(builderIndex primitives.BuilderIndex, bidAmount primitives.Gwei) (bool, error) { func (b *BeaconState) CanBuilderCoverBid(builderIndex primitives.BuilderIndex, bidAmount primitives.Gwei) (bool, error) {
if b.version < version.Gloas { if b.version < version.Gloas {
return false, errNotSupported("CanBuilderCoverBid", b.version) return false, errNotSupported("CanBuilderCoverBid", b.version)

View File

@@ -1027,10 +1027,10 @@ func TestGetVerifyingStateEdgeCases(t *testing.T) {
sc: signatureCache, sc: signatureCache,
sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}, // Should not be called sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}, // Should not be called
hsp: &mockHeadStateProvider{ hsp: &mockHeadStateProvider{
headRoot: parentRoot[:], // Same as parent headRoot: parentRoot[:], // Same as parent
headSlot: 32, // Epoch 1 headSlot: 32, // Epoch 1
headState: fuluState.Copy(), // HeadState (not ReadOnly) for ProcessSlots headState: fuluState.Copy(), // HeadState (not ReadOnly) for ProcessSlots
headStateReadOnly: nil, // Should not use ReadOnly path headStateReadOnly: nil, // Should not use ReadOnly path
}, },
fc: &mockForkchoicer{ fc: &mockForkchoicer{
// Return same root for both to simulate same chain // Return same root for both to simulate same chain
@@ -1045,8 +1045,8 @@ func TestGetVerifyingStateEdgeCases(t *testing.T) {
// Wrap to detect HeadState call // Wrap to detect HeadState call
originalHsp := initializer.shared.hsp.(*mockHeadStateProvider) originalHsp := initializer.shared.hsp.(*mockHeadStateProvider)
wrappedHsp := &mockHeadStateProvider{ wrappedHsp := &mockHeadStateProvider{
headRoot: originalHsp.headRoot, headRoot: originalHsp.headRoot,
headSlot: originalHsp.headSlot, headSlot: originalHsp.headSlot,
headState: originalHsp.headState, headState: originalHsp.headState,
} }
initializer.shared.hsp = &headStateCallTracker{ initializer.shared.hsp = &headStateCallTracker{

View File

@@ -0,0 +1,3 @@
### Added
- Set beacon node options after reading the config file.

View File

@@ -0,0 +1,6 @@
### Added
- Added new proofCollector type to ssz-query
### Ignored
- Added tests covering Merkle proof production from a Phase0 beacon state, benchmarked against a real Hoodi beacon state (Fulu version)

View File

@@ -0,0 +1,7 @@
### Changed
- gRPC fallback now matches the REST API implementation and also checks sync status, connecting only to synced nodes.
### Removed
- gRPC resolver for load balancing; the new implementation matches the REST API's, so the resolver is removed and fallback is handled the same way for consistency.

View File

@@ -0,0 +1,11 @@
### Ignored
- Moved the healthy-node discovery logic to the connection provider and did various naming cleanup.
### Changed
- Improved node fallback logs.
### Fixed
- A potential race condition when switching hosts quickly and reconnecting to the same host over a stale connection.

View File

@@ -0,0 +1,3 @@
### Ignored
- Improved maintainability and reduced duplication in get and post block parsing.

View File

@@ -0,0 +1,3 @@
### Changed
- The gRPC health endpoint now returns an error when the node is syncing or optimistic, indicating that it is unavailable.

View File

@@ -0,0 +1,3 @@
### Added
- Added a README for maintaining specrefs.

View File

@@ -0,0 +1,3 @@
### Changed
- Improved integration with ethspecify so specrefs can be used throughout the codebase.

View File

@@ -0,0 +1,3 @@
### Added
- The ability to download the nightly reference tests from a specific day.

View File

@@ -0,0 +1,3 @@
### Ignored
- Updated golangci to run lint on tests too.

View File

@@ -0,0 +1,3 @@
### Ignored
- Add handy documentation for SSZ Query package (`encoding/ssz/query`).

View File

@@ -0,0 +1,2 @@
### Changed
- Sample PTC per committee to reduce allocations.

View File

@@ -0,0 +1,2 @@
### Ignored
- Run go fmt

View File

@@ -367,17 +367,8 @@ func startNode(ctx *cli.Context, cancel context.CancelFunc) error {
backfill.BeaconNodeOptions, backfill.BeaconNodeOptions,
das.BeaconNodeOptions, das.BeaconNodeOptions,
} }
for _, of := range optFuncs {
ofo, err := of(ctx)
if err != nil {
return err
}
if ofo != nil {
opts = append(opts, ofo...)
}
}
beacon, err := node.New(ctx, cancel, opts...) beacon, err := node.New(ctx, cancel, optFuncs, opts...)
if err != nil { if err != nil {
return fmt.Errorf("unable to start beacon node: %w", err) return fmt.Errorf("unable to start beacon node: %w", err)
} }

View File

@@ -163,3 +163,18 @@ func Uint256ToSSZBytes(num string) ([]byte, error) {
} }
return PadTo(ReverseByteOrder(uint256.Bytes()), 32), nil return PadTo(ReverseByteOrder(uint256.Bytes()), 32), nil
} }
// PutLittleEndian writes an unsigned integer value in little-endian format.
// Supports sizes 1, 2, 4, or 8 bytes for uint8/16/32/64 respectively.
func PutLittleEndian(dst []byte, val uint64, size int) {
switch size {
case 1:
dst[0] = byte(val)
case 2:
binary.LittleEndian.PutUint16(dst, uint16(val))
case 4:
binary.LittleEndian.PutUint32(dst, uint32(val))
case 8:
binary.LittleEndian.PutUint64(dst, val)
}
}

View File

@@ -9,7 +9,9 @@ go_library(
"container.go", "container.go",
"generalized_index.go", "generalized_index.go",
"list.go", "list.go",
"merkle_proof.go",
"path.go", "path.go",
"proof_collector.go",
"query.go", "query.go",
"ssz_info.go", "ssz_info.go",
"ssz_object.go", "ssz_object.go",
@@ -20,7 +22,12 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v7/encoding/ssz/query", importpath = "github.com/OffchainLabs/prysm/v7/encoding/ssz/query",
visibility = ["//visibility:public"], visibility = ["//visibility:public"],
deps = [ deps = [
"//container/trie:go_default_library",
"//crypto/hash/htr:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library", "//encoding/ssz:go_default_library",
"//math:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library", "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
], ],
) )
@@ -29,15 +36,24 @@ go_test(
name = "go_default_test", name = "go_default_test",
srcs = [ srcs = [
"generalized_index_test.go", "generalized_index_test.go",
"merkle_proof_test.go",
"path_test.go", "path_test.go",
"proof_collector_test.go",
"query_test.go", "query_test.go",
"tag_parser_test.go", "tag_parser_test.go",
], ],
embed = [":go_default_library"],
deps = [ deps = [
":go_default_library", "//beacon-chain/state/stateutil:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//encoding/ssz/query/testutil:go_default_library", "//encoding/ssz/query/testutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query/testing:go_default_library", "//proto/ssz_query/testing:go_default_library",
"//testing/require:go_default_library", "//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library", "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
], ],
) )

encoding/ssz/query/doc.md Normal file
View File

@@ -0,0 +1,190 @@
# SSZ Query Package
The `encoding/ssz/query` package provides a system for analyzing and querying SSZ ([Simple Serialize](https://github.com/ethereum/consensus-specs/blob/master/ssz/simple-serialize.md)) data structures, as well as generating Merkle proofs from them. It enables runtime analysis of SSZ-serialized Go objects with reflection, path-based queries through nested structures, generalized index calculation, and Merkle proof generation.
This package is designed to be generic. It operates on arbitrary SSZ-serialized Go values at runtime, so the same query/proof machinery applies equally to any SSZ type, including the BeaconState/BeaconBlock.
## Usage Example
```go
// 1. Analyze an SSZ object
block := &ethpb.BeaconBlock{...}
info, err := query.AnalyzeObject(block)
// 2. Parse a path
path, err := query.ParsePath(".body.attestations[0].data.slot")
// 3. Get the generalized index
gindex, err := query.GetGeneralizedIndexFromPath(info, path)
// 4. Generate a Merkle proof
proof, err := info.Prove(gindex)
// 5. Get offset and length to slice the SSZ-encoded bytes
sszBytes, _ := block.MarshalSSZ()
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
// slotBytes contains the SSZ-encoded value at the queried path
slotBytes := sszBytes[offset : offset+length]
```
## Exported API
The main exported API consists of:
```go
// AnalyzeObject analyzes an SSZ object and returns its structural information
func AnalyzeObject(obj SSZObject) (*SszInfo, error)
// ParsePath parses a path string like ".field1.field2[0].field3"
func ParsePath(rawPath string) (Path, error)
// CalculateOffsetAndLength computes byte offset and length for a path within an SSZ object
func CalculateOffsetAndLength(sszInfo *SszInfo, path Path) (*SszInfo, uint64, uint64, error)
// GetGeneralizedIndexFromPath calculates the generalized index for a given path
func GetGeneralizedIndexFromPath(info *SszInfo, path Path) (uint64, error)
// Prove generates a Merkle proof for a target generalized index
func (s *SszInfo) Prove(gindex uint64) (*fastssz.Proof, error)
```
## Type System
### SSZ Types
The package now supports [all standard SSZ types](https://github.com/ethereum/consensus-specs/blob/master/ssz/simple-serialize.md#typing) except `ProgressiveList`, `ProgressiveContainer`, `ProgressiveBitlist`, `Union`, and `CompatibleUnion`.
### Core Data Structures
#### `SszInfo`
The `SszInfo` structure contains complete structural metadata for an SSZ type:
```go
type SszInfo struct {
sszType SSZType // SSZ type classification
typ reflect.Type // Go reflect.Type
source SSZObject // Original SSZObject reference. Mostly used for reusing SSZ methods like `HashTreeRoot`.
isVariable bool // True if contains variable-size fields
// Composite types carry type-specific metadata; only the field matching the current type is non-nil.
containerInfo *containerInfo
listInfo *listInfo
vectorInfo *vectorInfo
bitlistInfo *bitlistInfo
bitvectorInfo *bitvectorInfo
}
```
#### `Path`
The `Path` structure represents navigation paths through SSZ structures. It supports accessing a field by name, accessing an element by index (for list/vector types), and finding the length of homogeneous collection types. The `ParsePath` function parses a raw string into a `Path` instance, which is commonly used in other APIs like `CalculateOffsetAndLength` and `GetGeneralizedIndexFromPath`. A short parsing sketch follows the type definition below.
```go
type Path struct {
Length bool // Flag for length queries (e.g., len(.field))
Elements []PathElement // Sequence of field accesses and indices
}
type PathElement struct {
Name string // Field name
Index *uint64 // list/vector index (nil if not an index access)
}
```
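As a rough, non-authoritative sketch of how parsing behaves, the snippet below feeds two path strings taken from the package's tests into `ParsePath`; the values noted in the comments are expectations inferred from the rules described here rather than guaranteed output.
```go
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/encoding/ssz/query"
)

func main() {
	// Field access with an index: Elements should carry "validators"
	// (with Index set to 0) followed by "effective_balance".
	indexed, err := query.ParsePath(".validators[0].effective_balance")
	if err != nil {
		panic(err)
	}
	fmt.Println(indexed.Length, len(indexed.Elements)) // expected: false 2

	// Length query: the Length flag should be set instead of an element index.
	length, err := query.ParsePath("len(validators)")
	if err != nil {
		panic(err)
	}
	fmt.Println(length.Length) // expected: true
}
```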
## Implementation Details
### Type Analysis (`analyzer.go`)
The `AnalyzeObject` function performs recursive type introspection using Go reflection:
1. **Type Inspection** - Examines Go `reflect.Value` to determine SSZ type
- Basic types (`uint8`, `uint16`, `uint32`, `uint64`, `bool`): `SSZType` constants
- Slices: Determined from struct tags (`ssz-size` for vectors, `ssz-max` for lists); see the sketch after this list. There is a related [write-up](https://hackmd.io/@junsong/H101DKnwxl) regarding struct tags.
- Structs: Analyzed as Containers with field ordering from JSON tags
- Pointers: Dereferenced automatically
2. **Variable-Length Population** - Determines actual sizes at runtime
- For lists: Iterates elements, caches sizes for variable-element lists
- For containers: Recursively populates variable fields, adjusts offsets
- For bitlists: Decodes bit length from bitvector
3. **Offset Calculation** - Computes byte positions within serialized data
- Fixed-size fields: Offset = sum of preceding field sizes
- Variable-size fields: Offset stored as 4-byte pointer entries
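To make the tag-driven classification in step 1 concrete, here is a minimal, hypothetical container definition (the type name and tag values are illustrative assumptions, not taken from Prysm's generated protobufs):
```go
package example

// ExampleContainer sketches the struct-tag conventions the analyzer reads:
// `ssz-size` marks a fixed-size vector, `ssz-max` marks a list, JSON tags
// supply field names, and plain uint64/bool fields are SSZ basic types.
type ExampleContainer struct {
	Slot     uint64   `json:"slot"`                    // basic type (Uint64)
	Root     []byte   `json:"root" ssz-size:"32"`      // vector of 32 bytes
	Balances []uint64 `json:"balances" ssz-max:"2048"` // list with a limit of 2048
}
```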
### Path Parsing (`path.go`)
The `ParsePath` function parses path strings with the following rules:
- **Dot notation**: `.field1.field2` for field access
- **Array indexing**: `[0]`, `[42]` for element access
- **Length queries**: `len(.field)` for list/vector lengths
- **Character set**: Only `[A-Za-z0-9._\[\]\(\)]` allowed
Example:
```go
path, _ := ParsePath(".nested.array_field[5].inner_field")
// Returns: Path{
// Elements: [
// PathElement{Name: "nested"},
// PathElement{Name: "array_field", Index: <Pointer to uint64(5)>},
// PathElement{Name: "inner_field"}
// ]
// }
```
### Generalized Index Calculation (`generalized_index.go`)
The generalized index is a tree position identifier. This package follows the [Ethereum consensus-specs](https://github.com/ethereum/consensus-specs/blob/master/ssz/merkle-proofs.md#generalized-merkle-tree-index) to calculate the generalized index.
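For a rough feel of the arithmetic (a sketch only; the package relies on helpers such as `ssz.Depth` rather than this exact code), container fields and list children get their gindices as follows:
```go
package main

import (
	"fmt"
	"math/bits"
)

// containerFieldGindex mirrors the container rule used by this package:
// with N fields the subtree depth is ceil(log2(N)), and field i lives at
// (parent << depth) + i.
func containerFieldGindex(parent, numFields, i uint64) uint64 {
	depth := bits.Len64(numFields - 1) // ceil(log2(N))
	return parent<<depth + i
}

func main() {
	// A 5-field container (e.g. a phase0 BeaconBlock) rooted at gindex 1:
	// depth = 3, so field 0 (".slot") is at gindex 8 and field 4 (".body") at 12.
	fmt.Println(containerFieldGindex(1, 5, 0), containerFieldGindex(1, 5, 4)) // 8 12

	// For a list rooted at gindex g, the data subtree is the left child (2g)
	// and the length mix-in is the right child (2g+1).
	g := uint64(12)
	fmt.Println(2*g, 2*g+1) // 24 25
}
```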
### Merkle Proof Generation (`merkle_proof.go`, `proof_collector.go`)
The `Prove` method generates Merkle proofs using a single-sweep merkleization algorithm:
#### Algorithm Overview
**Key Terms:**
- **Target gindex** (generalized index): The position of the SSZ element you want to prove, expressed as a generalized Merkle tree index. Stored in `Proof.Index`.
- Note: The generalized index of the root is 1.
- **Registered gindices**: The set of tree positions whose node hashes must be captured during merkleization in order to later assemble the proof.
- **Sibling node**: The node that shares the same parent as another node.
- **Leaf value**: The 32-byte hash of the target node (the node being proven). Stored in `Proof.Leaf`.
**Phases:**
1. **Registration Phase** (`addTarget`)
> Goal: determine exactly which sibling hashes are needed for the proof.
- Record the target gindex as the proof target.
- Starting from the target node, walk the Merkle tree from the leaf (target gindex) to the root (gindex = 1).
- At each step:
- Compute and register the sibling gindex (`i XOR 1`) as “must collect”.
- Move to the parent (`i = i/2`).
- This produces the full set of registered gindices (the sibling nodes on the target-to-root path).
2. **Merkleization Phase** (`merkleize`)
> Goal: recursively merkleize the tree and capture the needed hashes.
- Recursively traverse the SSZ structure and compute Merkle tree node hashes from leaves to root.
- Whenever the traversal computes a node whose gindex is in the registered gindices, store that node's hash for later proof construction.
3. **Proof Assembly Phase** (`toProof`)
> Goal: create the final `fastssz.Proof` object in the correct format and order.
```go
// Proof represents a merkle proof against a general index.
type Proof struct {
Index int
Leaf []byte
Hashes [][]byte
}
```
- Set `Proof.Index` to the target gindex.
- Set `Proof.Leaf` to the 32-byte hash of the target node.
- Build `Proof.Hashes` by walking from the target node up to (but not including) the root:
- At node `i`, append the stored hash for the sibling (`i XOR 1`).
- Move to the parent (`i = i/2`).
- The resulting `Proof.Hashes` is ordered from the target level upward, containing one sibling hash per tree level on the path to the root.
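Putting the three phases together, here is a minimal end-to-end sketch (closely mirroring the package's tests) that proves a path on any object implementing `query.SSZObject` and verifies the result with `fastssz`:
```go
package example

import (
	fastssz "github.com/prysmaticlabs/fastssz"

	"github.com/OffchainLabs/prysm/v7/encoding/ssz/query"
)

// ProveAndVerify resolves a path to a generalized index, builds the Merkle
// proof, and checks it against the object's hash tree root.
func ProveAndVerify(obj query.SSZObject, pathStr string) (bool, error) {
	info, err := query.AnalyzeObject(obj)
	if err != nil {
		return false, err
	}
	path, err := query.ParsePath(pathStr)
	if err != nil {
		return false, err
	}
	gindex, err := query.GetGeneralizedIndexFromPath(info, path)
	if err != nil {
		return false, err
	}
	proof, err := info.Prove(gindex) // proof.Index == gindex; Hashes ordered leaf-to-root
	if err != nil {
		return false, err
	}
	root, err := obj.HashTreeRoot()
	if err != nil {
		return false, err
	}
	return fastssz.VerifyProof(root[:], proof)
}
```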

View File

@@ -0,0 +1,34 @@
package query
import (
"fmt"
"reflect"
fastssz "github.com/prysmaticlabs/fastssz"
)
// Prove is the entrypoint to generate an SSZ Merkle proof for the given generalized index.
// Parameters:
// - gindex: the generalized index of the node to prove inclusion for.
// Returns:
// - fastssz.Proof: the Merkle proof containing the leaf, index, and sibling hashes.
// - error: any error encountered during proof generation.
func (info *SszInfo) Prove(gindex uint64) (*fastssz.Proof, error) {
if info == nil {
return nil, fmt.Errorf("nil SszInfo")
}
collector := newProofCollector()
collector.addTarget(gindex)
// info.source is guaranteed to be valid and dereferenced by AnalyzeObject
v := reflect.ValueOf(info.source).Elem()
// Start the merkleization and proof collection process.
// In SSZ generalized indices, the root is always at index 1.
if _, err := collector.merkleize(info, v, 1); err != nil {
return nil, err
}
return collector.toProof()
}

View File

@@ -0,0 +1,163 @@
package query_test
import (
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/ssz/query"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
ssz "github.com/prysmaticlabs/fastssz"
)
func TestProve_FixedTestContainer(t *testing.T) {
obj := createFixedTestContainer()
tests := []string{
".field_uint32",
".nested.value2",
".vector_field[3]",
".bitvector64_field",
".trailing_field",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_VariableTestContainer(t *testing.T) {
obj := createVariableTestContainer()
tests := []string{
".leading_field",
".field_list_uint64[2]",
"len(field_list_uint64)",
".nested.nested_list_field[1]",
".variable_container_list[0].inner_1.field_list_uint64[1]",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_BeaconBlock(t *testing.T) {
randaoReveal := make([]byte, 96)
for i := range randaoReveal {
randaoReveal[i] = 0x42
}
root32 := make([]byte, 32)
for i := range root32 {
root32[i] = 0x24
}
sig := make([]byte, 96)
for i := range sig {
sig[i] = 0x99
}
att := &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root32,
Source: &eth.Checkpoint{
Epoch: 1,
Root: root32,
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: root32,
},
},
Signature: sig,
}
b := util.NewBeaconBlock()
b.Block.Slot = 123
b.Block.Body.RandaoReveal = randaoReveal
b.Block.Body.Attestations = []*eth.Attestation{att}
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
protoBlock, err := sb.Block().Proto()
require.NoError(t, err)
obj, ok := protoBlock.(query.SSZObject)
require.Equal(t, true, ok, "block proto does not implement query.SSZObject")
tests := []string{
".slot",
".body.randao_reveal",
".body.attestations[0].data.slot",
"len(body.attestations)",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_BeaconState(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
sszObj, ok := st.ToProtoUnsafe().(query.SSZObject)
require.Equal(t, true, ok, "state proto does not implement query.SSZObject")
tests := []string{
".slot",
".latest_block_header",
".validators[0].effective_balance",
"len(validators)",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, sszObj, tc)
})
}
}
// proveAndVerify helper to analyze an object, generate a merkle proof for the given path,
// and verify the proof against the object's root.
func proveAndVerify(t *testing.T, obj query.SSZObject, pathStr string) {
t.Helper()
info, err := query.AnalyzeObject(obj)
require.NoError(t, err)
path, err := query.ParsePath(pathStr)
require.NoError(t, err)
gi, err := query.GetGeneralizedIndexFromPath(info, path)
require.NoError(t, err)
proof, err := info.Prove(gi)
require.NoError(t, err)
require.Equal(t, int(gi), proof.Index)
root, err := obj.HashTreeRoot()
require.NoError(t, err)
ok, err := ssz.VerifyProof(root[:], proof)
require.NoError(t, err)
require.Equal(t, true, ok, "merkle proof verification failed")
require.Equal(t, 32, len(proof.Leaf))
for i, h := range proof.Hashes {
require.Equal(t, 32, len(h), "proof hash %d is not 32 bytes", i)
}
}

View File

@@ -0,0 +1,672 @@
package query
import (
"encoding/binary"
"errors"
"fmt"
"math/bits"
"reflect"
"runtime"
"slices"
"sync"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/container/trie"
"github.com/OffchainLabs/prysm/v7/crypto/hash/htr"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ssz "github.com/OffchainLabs/prysm/v7/encoding/ssz"
"github.com/OffchainLabs/prysm/v7/math"
fastssz "github.com/prysmaticlabs/fastssz"
)
// proofCollector collects sibling hashes and leaves needed for Merkle proofs.
//
// Multiproof-ready design:
// - requiredSiblings/requiredLeaves store which gindices we want to collect (registered before merkleization).
// - siblings/leaves store the actual collected hashes.
//
// Concurrency:
// - required* maps are read-only during merkleization.
// - siblings/leaves writes are protected by mutex.
type proofCollector struct {
sync.Mutex
// Required gindices (registered before merkleization)
requiredSiblings map[uint64]struct{}
requiredLeaves map[uint64]struct{}
// Collected hashes
siblings map[uint64][32]byte
leaves map[uint64][32]byte
}
func newProofCollector() *proofCollector {
return &proofCollector{
requiredSiblings: make(map[uint64]struct{}),
requiredLeaves: make(map[uint64]struct{}),
siblings: make(map[uint64][32]byte),
leaves: make(map[uint64][32]byte),
}
}
func (pc *proofCollector) reset() {
pc.Lock()
defer pc.Unlock()
pc.requiredSiblings = make(map[uint64]struct{})
pc.requiredLeaves = make(map[uint64]struct{})
pc.siblings = make(map[uint64][32]byte)
pc.leaves = make(map[uint64][32]byte)
}
// addTarget registers the target leaf and its required sibling nodes for proof construction.
// Registration should happen before merkleization begins.
func (pc *proofCollector) addTarget(gindex uint64) {
pc.Lock()
defer pc.Unlock()
pc.requiredLeaves[gindex] = struct{}{}
// Walk from the target leaf up to (but not including) the root (gindex=1).
// At each step, register the sibling node required to prove inclusion.
nodeGindex := gindex
for nodeGindex > 1 {
siblingGindex := nodeGindex ^ 1 // flip the last bit: left<->right sibling
pc.requiredSiblings[siblingGindex] = struct{}{}
// Move to parent
nodeGindex /= 2
}
}
// toProof converts the collected siblings and leaves into a fastssz.Proof structure.
// Current behavior expects a single target leaf (single proof).
func (pc *proofCollector) toProof() (*fastssz.Proof, error) {
pc.Lock()
defer pc.Unlock()
proof := &fastssz.Proof{}
if len(pc.leaves) == 0 {
return nil, errors.New("no leaves collected: add target leaves before merkleization")
}
leafGindices := make([]uint64, 0, len(pc.leaves))
for g := range pc.leaves {
leafGindices = append(leafGindices, g)
}
slices.Sort(leafGindices)
// single proof resides in leafGindices[0]
targetGindex := leafGindices[0]
proofIndex, err := math.Int(targetGindex)
if err != nil {
return nil, fmt.Errorf("gindex %d overflows int: %w", targetGindex, err)
}
proof.Index = proofIndex
// store the leaf
leaf := pc.leaves[targetGindex]
leafBuf := make([]byte, 32)
copy(leafBuf, leaf[:])
proof.Leaf = leafBuf
// Walk from target up to root, collecting siblings.
steps := bits.Len64(targetGindex) - 1
proof.Hashes = make([][]byte, 0, steps)
for targetGindex > 1 {
sib := targetGindex ^ 1
h, ok := pc.siblings[sib]
if !ok {
return nil, fmt.Errorf("missing sibling hash for gindex %d", sib)
}
proof.Hashes = append(proof.Hashes, h[:])
targetGindex /= 2
}
return proof, nil
}
// collectLeaf checks if the given gindex is a required leaf for the proof,
// and if so, stores the provided leaf hash in the collector.
func (pc *proofCollector) collectLeaf(gindex uint64, leaf [32]byte) {
if _, ok := pc.requiredLeaves[gindex]; !ok {
return
}
pc.Lock()
pc.leaves[gindex] = leaf
pc.Unlock()
}
// collectSibling stores the hash for a sibling node identified by gindex.
// It only stores the hash if gindex was pre-registered via addTarget (present in requiredSiblings).
// Writes to the collected siblings map are protected by the collector mutex.
func (pc *proofCollector) collectSibling(gindex uint64, hash [32]byte) {
if _, ok := pc.requiredSiblings[gindex]; !ok {
return
}
pc.Lock()
pc.siblings[gindex] = hash
pc.Unlock()
}
// Merkleizers and proof collection methods
// merkleize recursively traverses an SSZ info and computes the Merkle root of the subtree.
//
// Proof collection:
// - During traversal it calls collectLeaf/collectSibling with the SSZ generalized indices (gindices)
// of visited nodes.
// - The collector only stores hashes for gindices that were pre-registered via addTarget
// (requiredLeaves/requiredSiblings). This makes the traversal multiproof-ready: you can register
// multiple targets before calling merkleize.
//
// SSZ types handled: basic types, containers, lists, vectors, bitlists, and bitvectors.
//
// Parameters:
// - info: SSZ type metadata for the current value.
// - v: reflect.Value of the current value.
// - currentGindex: generalized index of the current subtree root.
//
// Returns:
// - [32]byte: Merkle root of the current subtree.
// - error: any error encountered during traversal/merkleization.
func (pc *proofCollector) merkleize(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
if info.sszType.isBasic() {
return pc.merkleizeBasicType(info.sszType, v, currentGindex)
}
switch info.sszType {
case Container:
return pc.merkleizeContainer(info, v, currentGindex)
case List:
return pc.merkleizeList(info, v, currentGindex)
case Vector:
return pc.merkleizeVector(info, v, currentGindex)
case Bitlist:
return pc.merkleizeBitlist(info, v, currentGindex)
case Bitvector:
return pc.merkleizeBitvector(info, v, currentGindex)
default:
return [32]byte{}, fmt.Errorf("unsupported SSZ type: %v", info.sszType)
}
}
// merkleizeBasicType serializes a basic SSZ value into a 32-byte leaf chunk (little-endian, zero-padded).
//
// Proof collection:
// - It calls collectLeaf(currentGindex, leaf) and stores the leaf if currentGindex was pre-registered via addTarget.
//
// Parameters:
// - t: the SSZType (basic).
// - v: the reflect.Value of the basic value.
// - currentGindex: the generalized index (gindex) of this leaf.
//
// Returns:
// - [32]byte: the 32-byte SSZ leaf chunk.
// - error: if the SSZType is not a supported basic type.
func (pc *proofCollector) merkleizeBasicType(t SSZType, v reflect.Value, currentGindex uint64) ([32]byte, error) {
var leaf [32]byte
// Serialize the value into a 32-byte chunk (little-endian, zero-padded)
switch t {
case Uint8:
leaf[0] = uint8(v.Uint())
case Uint16:
binary.LittleEndian.PutUint16(leaf[:2], uint16(v.Uint()))
case Uint32:
binary.LittleEndian.PutUint32(leaf[:4], uint32(v.Uint()))
case Uint64:
binary.LittleEndian.PutUint64(leaf[:8], v.Uint())
case Boolean:
if v.Bool() {
leaf[0] = 1
}
default:
return [32]byte{}, fmt.Errorf("unexpected basic type: %v", t)
}
pc.collectLeaf(currentGindex, leaf)
return leaf, nil
}
// merkleizeContainer computes the Merkle root of an SSZ container by:
// 1. Merkleizing each field into a 32-byte subtree root
// 2. Merkleizing the field roots into the container root (padding to the next power-of-2)
//
// Generalized indices (gindices): depth = ssz.Depth(uint64(N)) and field i has gindex = (currentGindex << depth) + uint64(i).
// Proof collection: merkleize() computes each field root, merkleizeVectorAndCollect collects required siblings, and collectLeaf stores the container root if registered.
//
// Parameters:
// - info: SSZ type metadata for the container.
// - v: reflect.Value of the container value.
// - currentGindex: generalized index (gindex) of the container root.
//
// Returns:
// - [32]byte: Merkle root of the container.
// - error: any error encountered while merkleizing fields.
func (pc *proofCollector) merkleizeContainer(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
// If the container root itself is the target, compute directly and return early.
// This avoids full subtree merkleization when we only need the root.
if _, ok := pc.requiredLeaves[currentGindex]; ok {
root, err := info.HashTreeRoot()
if err != nil {
return [32]byte{}, err
}
pc.collectLeaf(currentGindex, root)
return root, nil
}
ci, err := info.ContainerInfo()
if err != nil {
return [32]byte{}, err
}
v = dereferencePointer(v)
// Calculate depth: how many levels from container root to field leaves
numFields := len(ci.order)
depth := ssz.Depth(uint64(numFields))
// Step 1: Compute HTR for each subtree (field)
fieldRoots := make([][32]byte, numFields)
for i, name := range ci.order {
fieldInfo := ci.fields[name]
fieldVal := v.FieldByName(fieldInfo.goFieldName)
// Field i's gindex: shift currentGindex left by depth, then OR with field index
fieldGindex := currentGindex<<depth + uint64(i)
htr, err := pc.merkleize(fieldInfo.sszInfo, fieldVal, fieldGindex)
if err != nil {
return [32]byte{}, fmt.Errorf("field %s: %w", name, err)
}
fieldRoots[i] = htr
}
// Step 2: Merkleize the field hashes into the container root,
// collecting sibling hashes if target is within this subtree
root := pc.merkleizeVectorAndCollect(fieldRoots, currentGindex, uint64(depth))
return root, nil
}
// merkleizeVectorBody computes the Merkle root of the "data" subtree for vector-like SSZ types
// (vectors and the data-part of lists/bitlists).
//
// Generalized indices (gindices): depth = ssz.Depth(limit); leafBase = subtreeRootGindex << depth; element/chunk i gindex = leafBase + uint64(i).
// Proof collection: merkleize() is called for composite elements; merkleizeVectorAndCollect collects required siblings at this layer.
// Padding: merkleizeVectorAndCollect uses trie.ZeroHashes as needed.
//
// Parameters:
// - elemInfo: SSZ type metadata for the element.
// - v: reflect.Value of the vector/list data.
// - length: number of actual elements present.
// - limit: virtual leaf capacity used for padding/Depth (fixed length for vectors, limit for lists).
// - subtreeRootGindex: gindex of the data subtree root.
//
// Returns:
// - [32]byte: Merkle root of the data subtree.
// - error: any error encountered while merkleizing composite elements.
func (pc *proofCollector) merkleizeVectorBody(elemInfo *SszInfo, v reflect.Value, length int, limit uint64, subtreeRootGindex uint64) ([32]byte, error) {
depth := uint64(ssz.Depth(limit))
var chunks [][32]byte
if elemInfo.sszType.isBasic() {
// Serialize basic elements and pack into 32-byte chunks using ssz.PackByChunk.
elemSize, err := math.Int(itemLength(elemInfo))
if err != nil {
return [32]byte{}, fmt.Errorf("element size %d overflows int: %w", itemLength(elemInfo), err)
}
serialized := make([][]byte, length)
// Single contiguous allocation for all element data
allData := make([]byte, length*elemSize)
for i := range length {
buf := allData[i*elemSize : (i+1)*elemSize]
elem := v.Index(i)
if elemInfo.sszType == Boolean && elem.Bool() {
buf[0] = 1
} else {
bytesutil.PutLittleEndian(buf, elem.Uint(), elemSize)
}
serialized[i] = buf
}
chunks, err = ssz.PackByChunk(serialized)
if err != nil {
return [32]byte{}, err
}
} else {
// Composite elements: compute each element root (no padding here; merkleizeVectorAndCollect pads).
chunks = make([][32]byte, length)
// Fall back to per-element merkleization with proper gindices for proof collection.
// Parallel execution
workerCount := min(runtime.GOMAXPROCS(0), length)
jobs := make(chan int, workerCount*16)
errCh := make(chan error, 1) // only need the first error
stopCh := make(chan struct{})
var stopOnce sync.Once
var wg sync.WaitGroup
worker := func() {
defer wg.Done()
for idx := range jobs {
select {
case <-stopCh:
return
default:
}
elemGindex := subtreeRootGindex<<depth + uint64(idx)
htr, err := pc.merkleize(elemInfo, v.Index(idx), elemGindex)
if err != nil {
stopOnce.Do(func() { close(stopCh) })
select {
case errCh <- fmt.Errorf("index %d: %w", idx, err):
default:
}
return
}
chunks[idx] = htr
}
}
wg.Add(workerCount)
for range workerCount {
go worker()
}
// Enqueue jobs; stop early if any worker reports an error.
enqueue:
for i := range length {
select {
case <-stopCh:
break enqueue
case jobs <- i:
}
}
close(jobs)
wg.Wait()
select {
case err := <-errCh:
return [32]byte{}, err
default:
}
}
root := pc.merkleizeVectorAndCollect(chunks, subtreeRootGindex, depth)
return root, nil
}
// merkleizeVector computes the Merkle root of an SSZ vector (fixed-length).
//
// Generalized indices (gindices): currentGindex is the gindex of the vector root; element/chunk gindices are derived
// inside merkleizeVectorBody using leafBase = currentGindex << ssz.Depth(leaves).
//
// Proof collection: merkleizeVectorBody performs element/chunk merkleization and collects required siblings at the
// vector layer; collectLeaf stores the vector root if currentGindex was registered via addTarget.
//
// Parameters:
// - info: SSZ type metadata for the vector.
// - v: reflect.Value of the vector value.
// - currentGindex: generalized index (gindex) of the vector root.
//
// Returns:
// - [32]byte: Merkle root of the vector.
// - error: any error encountered while merkleizing composite elements.
func (pc *proofCollector) merkleizeVector(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
vi, err := info.VectorInfo()
if err != nil {
return [32]byte{}, err
}
length, err := math.Int(vi.Length())
if err != nil {
return [32]byte{}, fmt.Errorf("vector length %d overflows int: %w", vi.Length(), err)
}
elemInfo := vi.element
// Determine the virtual leaf capacity for the vector.
leaves, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
root, err := pc.merkleizeVectorBody(elemInfo, v, length, leaves, currentGindex)
if err != nil {
return [32]byte{}, err
}
// If the vector root itself is the target
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeList computes the Merkle root of an SSZ list by merkleizing its data subtree and mixing in the length.
//
// Generalized indices (gindices): dataRoot is the left child of the list root (dataRootGindex = currentGindex*2); the length mixin is the right child (currentGindex*2+1).
// Proof collection: merkleizeVectorBody computes the data root (collecting required siblings in the data subtree), and mixinLengthAndCollect collects required siblings at the length-mixin level; collectLeaf stores the list root if registered.
//
// Parameters:
// - info: SSZ type metadata for the list.
// - v: reflect.Value of the list value.
// - currentGindex: generalized index (gindex) of the list root.
//
// Returns:
// - [32]byte: Merkle root of the list.
// - error: any error encountered while merkleizing the data subtree.
func (pc *proofCollector) merkleizeList(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
li, err := info.ListInfo()
if err != nil {
return [32]byte{}, err
}
length := v.Len()
elemInfo := li.element
chunks := make([][32]byte, 2)
// Compute the length hash (little-endian uint256)
binary.LittleEndian.PutUint64(chunks[1][:8], uint64(length))
// Data subtree root is the left child of the list root.
dataRootGindex := currentGindex * 2
// Compute virtual leaf capacity for the data subtree.
leaves, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
chunks[0], err = pc.merkleizeVectorBody(elemInfo, v, length, leaves, dataRootGindex)
if err != nil {
return [32]byte{}, err
}
// Handle the length mixin level (and proof bookkeeping at this level).
// Compute the final list root: hash(dataRoot || lengthHash)
root := pc.mixinLengthAndCollect(currentGindex, chunks)
// If the list root itself is the target
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeBitvectorBody computes the Merkle root of a bitvector-like byte sequence by packing it into 32-byte chunks
// and merkleizing those chunks as a fixed-capacity vector (padding with trie.ZeroHashes as needed).
//
// Generalized indices (gindices): depth = ssz.Depth(chunkLimit); leafBase = subtreeRootGindex << depth; chunk i uses gindex = leafBase + uint64(i).
// Proof collection: merkleizeVectorAndCollect collects required sibling hashes at the chunk-merkleization layer.
//
// Parameters:
// - data: raw byte sequence representing the bitvector payload.
// - chunkLimit: fixed/limit number of 32-byte chunks (used for padding/Depth).
// - subtreeRootGindex: gindex of the bitvector data subtree root.
//
// Returns:
// - [32]byte: Merkle root of the bitvector data subtree.
// - error: any error encountered while packing data into chunks.
func (pc *proofCollector) merkleizeBitvectorBody(data []byte, chunkLimit uint64, subtreeRootGindex uint64) ([32]byte, error) {
depth := ssz.Depth(chunkLimit)
chunks, err := ssz.PackByChunk([][]byte{data})
if err != nil {
return [32]byte{}, err
}
root := pc.merkleizeVectorAndCollect(chunks, subtreeRootGindex, uint64(depth))
return root, nil
}
// merkleizeBitvector computes the Merkle root of a fixed-length SSZ bitvector and collects proof nodes for targets.
//
// Parameters:
// - info: SSZ type metadata for the bitvector.
// - v: reflect.Value of the bitvector value.
// - currentGindex: generalized index (gindex) of the bitvector root.
//
// Returns:
// - [32]byte: Merkle root of the bitvector.
// - error: any error encountered during packing or merkleization.
func (pc *proofCollector) merkleizeBitvector(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
bitvectorBytes := v.Bytes()
if len(bitvectorBytes) == 0 {
return [32]byte{}, fmt.Errorf("bitvector field is uninitialized (nil or empty slice)")
}
// Compute virtual leaf capacity for the bitvector.
numChunks, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
root, err := pc.merkleizeBitvectorBody(bitvectorBytes, numChunks, currentGindex)
if err != nil {
return [32]byte{}, err
}
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeBitlist computes the Merkle root of an SSZ bitlist by merkleizing its data chunks and mixing in the bit length.
//
// Generalized indices (gindices): dataRoot is the left child (dataRootGindex = currentGindex*2) and the length mixin is the right child (currentGindex*2+1).
// Proof collection: merkleizeBitvectorBody computes the data root (collecting required siblings under dataRootGindex), and mixinLengthAndCollect collects required siblings at the length-mixin level; collectLeaf stores the bitlist root if registered.
//
// Parameters:
// - info: SSZ type metadata for the bitlist.
// - v: reflect.Value of the bitlist value.
// - currentGindex: generalized index (gindex) of the bitlist root.
//
// Returns:
// - [32]byte: Merkle root of the bitlist.
// - error: any error encountered while merkleizing the data subtree.
func (pc *proofCollector) merkleizeBitlist(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
bi, err := info.BitlistInfo()
if err != nil {
return [32]byte{}, err
}
bitlistBytes := v.Bytes()
// Use go-bitfield to get bytes with termination bit cleared
bl := bitfield.Bitlist(bitlistBytes)
data := bl.BytesNoTrim()
// Get the bit length from bitlistInfo
bitLength := bi.Length()
// Get the chunk limit from getChunkCount
limitChunks, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
chunks := make([][32]byte, 2)
// Compute the length hash (little-endian uint256)
binary.LittleEndian.PutUint64(chunks[1][:8], uint64(bitLength))
dataRootGindex := currentGindex * 2
chunks[0], err = pc.merkleizeBitvectorBody(data, limitChunks, dataRootGindex)
if err != nil {
return [32]byte{}, err
}
// Handle the length mixin level (and proof bookkeeping at this level).
root := pc.mixinLengthAndCollect(currentGindex, chunks)
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeVectorAndCollect merkleizes a slice of 32-byte leaf nodes into a subtree root, padding to a virtual size of 2^depth.
//
// Generalized indices (gindices): at layer i (0-based), nodes have gindices levelBase = subtreeGeneralizedIndex << (depth-i) and node gindex = levelBase + idx.
// Proof collection: for each layer it calls collectSibling(nodeGindex, nodeHash) and stores only those gindices registered via addTarget.
//
// Parameters:
// - elements: leaf-level hashes (may be shorter than 2^depth; padding is applied with trie.ZeroHashes).
// - subtreeGeneralizedIndex: gindex of the subtree root.
// - depth: number of merkleization layers from subtree root to leaves.
//
// Returns:
// - [32]byte: Merkle root of the subtree.
func (pc *proofCollector) merkleizeVectorAndCollect(elements [][32]byte, subtreeGeneralizedIndex uint64, depth uint64) [32]byte {
// Return zerohash at depth
if len(elements) == 0 {
return trie.ZeroHashes[depth]
}
for i := range depth {
layerLen := len(elements)
oddNodeLength := layerLen%2 == 1
if oddNodeLength {
zerohash := trie.ZeroHashes[i]
elements = append(elements, zerohash)
}
levelBaseGindex := subtreeGeneralizedIndex << (depth - i)
for idx := range elements {
gindex := levelBaseGindex + uint64(idx)
pc.collectSibling(gindex, elements[idx])
pc.collectLeaf(gindex, elements[idx])
}
elements = htr.VectorizedSha256(elements)
}
return elements[0]
}
// mixinLengthAndCollect computes the final mix-in root for list/bitlist values:
//
// root = hash(dataRoot, lengthHash)
//
// where chunks[0] is dataRoot and chunks[1] is the 32-byte length hash.
//
// Generalized indices (gindices): dataRoot is the left child (dataRootGindex = currentGindex*2) and lengthHash is the right child (lengthHashGindex = currentGindex*2+1).
// Proof collection: it calls collectSibling/collectLeaf for both child gindices; the collector stores them only if they were registered via addTarget.
//
// Parameters:
// - currentGindex: gindex of the parent node (list/bitlist root).
// - chunks: two 32-byte nodes: [dataRoot, lengthHash].
//
// Returns:
// - [32]byte: mixed-in Merkle root (or zero value on hashing error).
// - error: any error encountered during hashing.
func (pc *proofCollector) mixinLengthAndCollect(currentGindex uint64, chunks [][32]byte) [32]byte {
dataRoot, lengthHash := chunks[0], chunks[1]
dataRootGindex, lengthHashGindex := currentGindex*2, currentGindex*2+1
pc.collectSibling(dataRootGindex, dataRoot)
pc.collectSibling(lengthHashGindex, lengthHash)
pc.collectLeaf(dataRootGindex, dataRoot)
pc.collectLeaf(lengthHashGindex, lengthHash)
return ssz.MixInLength(dataRoot, lengthHash[:])
}

View File

@@ -0,0 +1,531 @@
package query
import (
"crypto/sha256"
"encoding/binary"
"reflect"
"slices"
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ssz "github.com/OffchainLabs/prysm/v7/encoding/ssz"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
sszquerypb "github.com/OffchainLabs/prysm/v7/proto/ssz_query/testing"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestProofCollector_New(t *testing.T) {
pc := newProofCollector()
require.NotNil(t, pc)
require.Equal(t, 0, len(pc.requiredSiblings))
require.Equal(t, 0, len(pc.requiredLeaves))
require.Equal(t, 0, len(pc.siblings))
require.Equal(t, 0, len(pc.leaves))
}
func TestProofCollector_Reset(t *testing.T) {
pc := newProofCollector()
pc.requiredSiblings[3] = struct{}{}
pc.requiredLeaves[5] = struct{}{}
pc.siblings[3] = [32]byte{1}
pc.leaves[5] = [32]byte{2}
pc.reset()
require.Equal(t, 0, len(pc.requiredSiblings))
require.Equal(t, 0, len(pc.requiredLeaves))
require.Equal(t, 0, len(pc.siblings))
require.Equal(t, 0, len(pc.leaves))
}
func TestProofCollector_AddTarget(t *testing.T) {
pc := newProofCollector()
pc.addTarget(5)
_, hasLeaf := pc.requiredLeaves[5]
_, hasSibling4 := pc.requiredSiblings[4]
_, hasSibling3 := pc.requiredSiblings[3]
_, hasSibling1 := pc.requiredSiblings[1] // GI 1 is the root
require.Equal(t, true, hasLeaf)
require.Equal(t, true, hasSibling4)
require.Equal(t, true, hasSibling3)
require.Equal(t, false, hasSibling1)
}
func TestProofCollector_ToProof(t *testing.T) {
pc := newProofCollector()
pc.addTarget(5)
leaf := [32]byte{9}
sibling4 := [32]byte{4}
sibling3 := [32]byte{3}
pc.collectLeaf(5, leaf)
pc.collectSibling(4, sibling4)
pc.collectSibling(3, sibling3)
proof, err := pc.toProof()
require.NoError(t, err)
require.Equal(t, 5, proof.Index)
require.DeepEqual(t, leaf[:], proof.Leaf)
require.Equal(t, 2, len(proof.Hashes))
require.DeepEqual(t, sibling4[:], proof.Hashes[0])
require.DeepEqual(t, sibling3[:], proof.Hashes[1])
}
func TestProofCollector_ToProof_NoLeaves(t *testing.T) {
pc := newProofCollector()
_, err := pc.toProof()
require.NotNil(t, err)
}
func TestProofCollector_CollectLeaf(t *testing.T) {
pc := newProofCollector()
leaf := [32]byte{7}
pc.collectLeaf(10, leaf)
require.Equal(t, 0, len(pc.leaves))
pc.addTarget(10)
pc.collectLeaf(10, leaf)
stored, ok := pc.leaves[10]
require.Equal(t, true, ok)
require.Equal(t, leaf, stored)
}
func TestProofCollector_CollectSibling(t *testing.T) {
pc := newProofCollector()
hash := [32]byte{5}
pc.collectSibling(4, hash)
require.Equal(t, 0, len(pc.siblings))
pc.addTarget(5)
pc.collectSibling(4, hash)
stored, ok := pc.siblings[4]
require.Equal(t, true, ok)
require.Equal(t, hash, stored)
}
func TestProofCollector_Merkleize_BasicTypes(t *testing.T) {
testCases := []struct {
name string
sszType SSZType
value any
expected [32]byte
}{
{
name: "uint8",
sszType: Uint8,
value: uint8(0x11),
expected: func() [32]byte {
var leaf [32]byte
leaf[0] = 0x11
return leaf
}(),
},
{
name: "uint16",
sszType: Uint16,
value: uint16(0x2211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint16(leaf[:2], 0x2211)
return leaf
}(),
},
{
name: "uint32",
sszType: Uint32,
value: uint32(0x44332211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint32(leaf[:4], 0x44332211)
return leaf
}(),
},
{
name: "uint64",
sszType: Uint64,
value: uint64(0x8877665544332211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint64(leaf[:8], 0x8877665544332211)
return leaf
}(),
},
{
name: "bool",
sszType: Boolean,
value: true,
expected: func() [32]byte {
var leaf [32]byte
leaf[0] = 1
return leaf
}(),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
pc := newProofCollector()
gindex := uint64(3)
pc.addTarget(gindex)
leaf, err := pc.merkleizeBasicType(tc.sszType, reflect.ValueOf(tc.value), gindex)
require.NoError(t, err)
require.Equal(t, tc.expected, leaf)
stored, ok := pc.leaves[gindex]
require.Equal(t, true, ok)
require.Equal(t, tc.expected, stored)
})
}
}
func TestProofCollector_Merkleize_Container(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
pc := newProofCollector()
pc.addTarget(1)
root, err := pc.merkleize(info, reflect.ValueOf(container), 1)
require.NoError(t, err)
expected, err := container.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, expected, root)
stored, ok := pc.leaves[1]
require.Equal(t, true, ok)
require.Equal(t, expected, stored)
}
func TestProofCollector_Merkleize_Vector(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["vector_field"]
pc := newProofCollector()
root, err := pc.merkleizeVector(field.sszInfo, reflect.ValueOf(container.VectorField), 1)
require.NoError(t, err)
serialized := make([][]byte, len(container.VectorField))
for i, v := range container.VectorField {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, v)
serialized[i] = buf
}
chunks, err := ssz.PackByChunk(serialized)
require.NoError(t, err)
limit, err := getChunkCount(field.sszInfo)
require.NoError(t, err)
expected := ssz.MerkleizeVector(chunks, limit)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_List(t *testing.T) {
list := []*sszquerypb.FixedNestedContainer{
makeFixedNestedContainer(1),
makeFixedNestedContainer(2),
}
container := makeVariableTestContainer(list, bitfield.NewBitlist(1))
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["field_list_container"]
pc := newProofCollector()
root, err := pc.merkleizeList(field.sszInfo, reflect.ValueOf(list), 1)
require.NoError(t, err)
listInfo, err := field.sszInfo.ListInfo()
require.NoError(t, err)
expected, err := ssz.MerkleizeListSSZ(list, listInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_Bitvector(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["bitvector64_field"]
pc := newProofCollector()
root, err := pc.merkleizeBitvector(field.sszInfo, reflect.ValueOf(container.Bitvector64Field), 1)
require.NoError(t, err)
expected, err := ssz.MerkleizeByteSliceSSZ([]byte(container.Bitvector64Field))
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_Bitlist(t *testing.T) {
bitlist := bitfield.NewBitlist(16)
bitlist.SetBitAt(3, true)
bitlist.SetBitAt(8, true)
container := makeVariableTestContainer(nil, bitlist)
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["bitlist_field"]
pc := newProofCollector()
root, err := pc.merkleizeBitlist(field.sszInfo, reflect.ValueOf(container.BitlistField), 1)
require.NoError(t, err)
bitlistInfo, err := field.sszInfo.BitlistInfo()
require.NoError(t, err)
expected, err := ssz.BitlistRoot(bitfield.Bitlist(bitlist), bitlistInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_MerkleizeVectorBody_Basic(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["vector_field"]
vectorInfo, err := field.sszInfo.VectorInfo()
require.NoError(t, err)
length := len(container.VectorField)
limit, err := getChunkCount(field.sszInfo)
require.NoError(t, err)
pc := newProofCollector()
root, err := pc.merkleizeVectorBody(vectorInfo.element, reflect.ValueOf(container.VectorField), length, limit, 2)
require.NoError(t, err)
serialized := make([][]byte, len(container.VectorField))
for i, v := range container.VectorField {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, v)
serialized[i] = buf
}
chunks, err := ssz.PackByChunk(serialized)
require.NoError(t, err)
expected := ssz.MerkleizeVector(chunks, limit)
require.Equal(t, expected, root)
}
func TestProofCollector_MerkleizeVectorAndCollect(t *testing.T) {
pc := newProofCollector()
pc.addTarget(6)
elements := [][32]byte{{1}, {2}}
expected := ssz.MerkleizeVector(slices.Clone(elements), 2)
root := pc.merkleizeVectorAndCollect(elements, 3, 1)
storedLeaf, hasLeaf := pc.leaves[6]
storedSibling, hasSibling := pc.siblings[7]
require.Equal(t, true, hasLeaf)
require.Equal(t, true, hasSibling)
require.Equal(t, elements[0], storedLeaf)
require.Equal(t, elements[1], storedSibling)
require.Equal(t, expected, root)
}
func TestProofCollector_MixinLengthAndCollect(t *testing.T) {
list := []*sszquerypb.FixedNestedContainer{
makeFixedNestedContainer(1),
makeFixedNestedContainer(2),
}
container := makeVariableTestContainer(list, bitfield.NewBitlist(1))
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["field_list_container"]
// Target gindex 2 (data root) - sibling at gindex 3 (length hash) should be collected
pc := newProofCollector()
pc.addTarget(2)
root, err := pc.merkleizeList(field.sszInfo, reflect.ValueOf(list), 1)
require.NoError(t, err)
listInfo, err := field.sszInfo.ListInfo()
require.NoError(t, err)
expected, err := ssz.MerkleizeListSSZ(list, listInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
// Verify data root is collected as leaf at gindex 2
storedLeaf, hasLeaf := pc.leaves[2]
require.Equal(t, true, hasLeaf)
// Verify length hash is collected as sibling at gindex 3
storedSibling, hasSibling := pc.siblings[3]
require.Equal(t, true, hasSibling)
// Verify the root is hash(dataRoot || lengthHash)
expectedBuf := append(storedLeaf[:], storedSibling[:]...)
expectedRoot := sha256.Sum256(expectedBuf)
require.Equal(t, expectedRoot, root)
}
func BenchmarkOptimizedValidatorRoots(b *testing.B) {
validators := make([]*ethpb.Validator, 1000)
for i := range validators {
validators[i] = makeTestValidator(i)
}
b.ResetTimer()
for b.Loop() {
_, err := stateutil.OptimizedValidatorRoots(validators)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkProofCollectorMerkleize(b *testing.B) {
validators := make([]*ethpb.Validator, 1000)
for i := range validators {
validators[i] = makeTestValidator(i)
}
info, err := AnalyzeObject(validators[0])
require.NoError(b, err)
b.ResetTimer()
for b.Loop() {
for _, val := range validators {
pc := newProofCollector()
v := reflect.ValueOf(val)
_, err := pc.merkleize(info, v, 1)
if err != nil {
b.Fatal(err)
}
}
}
}
func makeTestValidator(i int) *ethpb.Validator {
pubkey := make([]byte, 48)
for j := range pubkey {
pubkey[j] = byte(i + j)
}
withdrawalCredentials := make([]byte, 32)
for j := range withdrawalCredentials {
withdrawalCredentials[j] = byte(255 - ((i + j) % 256))
}
return &ethpb.Validator{
PublicKey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
EffectiveBalance: uint64(32000000000 + i),
Slashed: i%2 == 0,
ActivationEligibilityEpoch: primitives.Epoch(i),
ActivationEpoch: primitives.Epoch(i + 1),
ExitEpoch: primitives.Epoch(i + 2),
WithdrawableEpoch: primitives.Epoch(i + 3),
}
}
func makeFixedNestedContainer(value uint64) *sszquerypb.FixedNestedContainer {
value2 := make([]byte, 32)
for i := range value2 {
value2[i] = byte(i)
}
return &sszquerypb.FixedNestedContainer{
Value1: value,
Value2: value2,
}
}
func makeFixedTestContainer() *sszquerypb.FixedTestContainer {
fieldBytes32 := make([]byte, 32)
for i := range fieldBytes32 {
fieldBytes32[i] = byte(i)
}
vectorField := make([]uint64, 24)
for i := range vectorField {
vectorField[i] = uint64(i)
}
rows := make([][]byte, 5)
for i := range rows {
row := make([]byte, 32)
for j := range row {
row[j] = byte(i) + byte(j)
}
rows[i] = row
}
bitvector64 := bitfield.NewBitvector64()
bitvector64.SetBitAt(1, true)
bitvector512 := bitfield.NewBitvector512()
bitvector512.SetBitAt(10, true)
trailing := make([]byte, 56)
for i := range trailing {
trailing[i] = byte(i)
}
return &sszquerypb.FixedTestContainer{
FieldUint32: 1,
FieldUint64: 2,
FieldBool: true,
FieldBytes32: fieldBytes32,
Nested: makeFixedNestedContainer(3),
VectorField: vectorField,
TwoDimensionBytesField: rows,
Bitvector64Field: bitvector64,
Bitvector512Field: bitvector512,
TrailingField: trailing,
}
}
func makeVariableTestContainer(list []*sszquerypb.FixedNestedContainer, bitlist bitfield.Bitlist) *sszquerypb.VariableTestContainer {
leading := make([]byte, 32)
for i := range leading {
leading[i] = byte(i)
}
trailing := make([]byte, 56)
for i := range trailing {
trailing[i] = byte(255 - i)
}
if bitlist == nil {
bitlist = bitfield.NewBitlist(0)
}
return &sszquerypb.VariableTestContainer{
LeadingField: leading,
FieldListContainer: list,
BitlistField: bitlist,
TrailingField: trailing,
}
}

View File

@@ -389,6 +389,7 @@ func TestHashTreeRoot(t *testing.T) {
require.NoError(t, err, "HashTreeRoot should not return an error") require.NoError(t, err, "HashTreeRoot should not return an error")
expectedHashTreeRoot, err := tt.obj.HashTreeRoot() expectedHashTreeRoot, err := tt.obj.HashTreeRoot()
require.NoError(t, err, "HashTreeRoot on original object should not return an error") require.NoError(t, err, "HashTreeRoot on original object should not return an error")
// Verify the Merkle tree root matches with the SSZ generated HashTreeRoot
require.Equal(t, expectedHashTreeRoot, hashTreeRoot, "HashTreeRoot from sszInfo should match original object's HashTreeRoot") require.Equal(t, expectedHashTreeRoot, hashTreeRoot, "HashTreeRoot from sszInfo should match original object's HashTreeRoot")
}) })
} }

View File

@@ -26,21 +26,21 @@ func TestLifecycle(t *testing.T) {
port := 1000 + rand.Intn(1000) port := 1000 + rand.Intn(1000)
prometheusService := NewService(t.Context(), fmt.Sprintf(":%d", port), nil) prometheusService := NewService(t.Context(), fmt.Sprintf(":%d", port), nil)
prometheusService.Start() prometheusService.Start()
// Actively wait until the service responds on /metrics (faster and less flaky than a fixed sleep) // Actively wait until the service responds on /metrics (faster and less flaky than a fixed sleep)
deadline := time.Now().Add(3 * time.Second) deadline := time.Now().Add(3 * time.Second)
for { for {
if time.Now().After(deadline) { if time.Now().After(deadline) {
t.Fatalf("metrics endpoint not ready within timeout") t.Fatalf("metrics endpoint not ready within timeout")
} }
resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port)) resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
if err == nil { if err == nil {
_ = resp.Body.Close() _ = resp.Body.Close()
if resp.StatusCode == http.StatusOK { if resp.StatusCode == http.StatusOK {
break break
} }
} }
time.Sleep(50 * time.Millisecond) time.Sleep(50 * time.Millisecond)
} }
// Query the service to ensure it really started. // Query the service to ensure it really started.
resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port)) resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
@@ -49,18 +49,18 @@ func TestLifecycle(t *testing.T) {
err = prometheusService.Stop() err = prometheusService.Stop()
require.NoError(t, err) require.NoError(t, err)
// Actively wait until the service stops responding on /metrics // Actively wait until the service stops responding on /metrics
deadline = time.Now().Add(3 * time.Second) deadline = time.Now().Add(3 * time.Second)
for { for {
if time.Now().After(deadline) { if time.Now().After(deadline) {
t.Fatalf("metrics endpoint still reachable after timeout") t.Fatalf("metrics endpoint still reachable after timeout")
} }
_, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port)) _, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
if err != nil { if err != nil {
break break
} }
time.Sleep(50 * time.Millisecond) time.Sleep(50 * time.Millisecond)
} }
// Query the service to ensure it really stopped. // Query the service to ensure it really stopped.
_, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port)) _, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))

specrefs/README.md Normal file
View File

@@ -0,0 +1,30 @@
# Specification References
This directory contains specification reference tracking files managed by
[ethspecify](https://github.com/jtraglia/ethspecify).
## Installation
Install `ethspecify` with the following command:
```bash
pipx install ethspecify
```
## Maintenance
When adding support for a new specification version, follow these steps:
1. Update the version in `.ethspecify.yml` configuration.
2. Run `ethspecify` to update/populate specrefs.
3. Run `ethspecify check` to check specrefs.
4. If there are errors, use the error message as a guide to fix the issue. If
there are new specrefs with empty sources, implement/locate each item and
update each specref source list. If you choose not to implement an item,
add an exception to the appropriate section of the `.ethspecify.yml`
configuration.
5. Repeat steps 3 and 4 until `ethspecify check` passes.
6. Run `git diff` to view updated specrefs. If an object/function/etc has
changed, make the necessary updates to the implementation.
7. Lastly, in the project's root directory, run `act -j check-specrefs` to
ensure everything is correct.
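
For reference, each entry in these tracking files links a spec item to its location in the Prysm codebase. Below is a minimal sketch of a populated entry; its structure mirrors the existing `AGGREGATE_DUE_BPS` entry, while the `hash` value is only a placeholder that `ethspecify` would fill in from the spec.

```yaml
# Illustrative specref entry: the name carries a #fork tag, `sources` points at the
# implementation via a file path and search regex, and the <spec> tag body is
# populated/maintained by ethspecify (the hash below is a placeholder, not a real value).
- name: AGGREGATE_DUE_BPS#phase0
  sources:
    - file: config/params/config.go
      search: AggregateDueBPS\s+primitives.BP
  spec: |
    <spec config_var="AGGREGATE_DUE_BPS" fork="phase0" hash="xxxxxxxx">
    AGGREGATE_DUE_BPS: uint64 = 6667
    </spec>
```

Entries that have not been implemented yet use `sources: []` until a source is added or an exception is recorded in `.ethspecify.yml`.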


@@ -1,4 +1,4 @@
- name: AGGREGATE_DUE_BPS - name: AGGREGATE_DUE_BPS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AggregateDueBPS\s+primitives.BP search: AggregateDueBPS\s+primitives.BP
@@ -8,7 +8,14 @@
AGGREGATE_DUE_BPS: uint64 = 6667 AGGREGATE_DUE_BPS: uint64 = 6667
</spec> </spec>
- name: ALTAIR_FORK_EPOCH - name: AGGREGATE_DUE_BPS_GLOAS#gloas
sources: []
spec: |
<spec config_var="AGGREGATE_DUE_BPS_GLOAS" fork="gloas" hash="34aa3164">
AGGREGATE_DUE_BPS_GLOAS: uint64 = 5000
</spec>
- name: ALTAIR_FORK_EPOCH#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AltairForkEpoch\s+primitives.Epoch search: AltairForkEpoch\s+primitives.Epoch
@@ -18,7 +25,7 @@
ALTAIR_FORK_EPOCH: Epoch = 74240 ALTAIR_FORK_EPOCH: Epoch = 74240
</spec> </spec>
- name: ALTAIR_FORK_VERSION - name: ALTAIR_FORK_VERSION#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AltairForkVersion\s+\[]byte search: AltairForkVersion\s+\[]byte
@@ -28,7 +35,7 @@
ALTAIR_FORK_VERSION: Version = '0x01000000' ALTAIR_FORK_VERSION: Version = '0x01000000'
</spec> </spec>
- name: ATTESTATION_DUE_BPS - name: ATTESTATION_DUE_BPS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AttestationDueBPS\s+primitives.BP search: AttestationDueBPS\s+primitives.BP
@@ -38,7 +45,14 @@
ATTESTATION_DUE_BPS: uint64 = 3333 ATTESTATION_DUE_BPS: uint64 = 3333
</spec> </spec>
- name: ATTESTATION_PROPAGATION_SLOT_RANGE - name: ATTESTATION_DUE_BPS_GLOAS#gloas
sources: []
spec: |
<spec config_var="ATTESTATION_DUE_BPS_GLOAS" fork="gloas" hash="3863c1ef">
ATTESTATION_DUE_BPS_GLOAS: uint64 = 2500
</spec>
- name: ATTESTATION_PROPAGATION_SLOT_RANGE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AttestationPropagationSlotRange\s+primitives.Slot search: AttestationPropagationSlotRange\s+primitives.Slot
@@ -48,7 +62,7 @@
ATTESTATION_PROPAGATION_SLOT_RANGE = 32 ATTESTATION_PROPAGATION_SLOT_RANGE = 32
</spec> </spec>
- name: ATTESTATION_SUBNET_COUNT - name: ATTESTATION_SUBNET_COUNT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AttestationSubnetCount\s+uint64 search: AttestationSubnetCount\s+uint64
@@ -58,7 +72,7 @@
ATTESTATION_SUBNET_COUNT = 64 ATTESTATION_SUBNET_COUNT = 64
</spec> </spec>
- name: ATTESTATION_SUBNET_EXTRA_BITS - name: ATTESTATION_SUBNET_EXTRA_BITS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AttestationSubnetExtraBits\s+uint64 search: AttestationSubnetExtraBits\s+uint64
@@ -68,7 +82,7 @@
ATTESTATION_SUBNET_EXTRA_BITS = 0 ATTESTATION_SUBNET_EXTRA_BITS = 0
</spec> </spec>
- name: ATTESTATION_SUBNET_PREFIX_BITS - name: ATTESTATION_SUBNET_PREFIX_BITS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: AttestationSubnetPrefixBits\s+uint64 search: AttestationSubnetPrefixBits\s+uint64
@@ -78,7 +92,7 @@
ATTESTATION_SUBNET_PREFIX_BITS: int = 6 ATTESTATION_SUBNET_PREFIX_BITS: int = 6
</spec> </spec>
- name: BALANCE_PER_ADDITIONAL_CUSTODY_GROUP - name: BALANCE_PER_ADDITIONAL_CUSTODY_GROUP#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BalancePerAdditionalCustodyGroup\s+uint64 search: BalancePerAdditionalCustodyGroup\s+uint64
@@ -88,7 +102,7 @@
BALANCE_PER_ADDITIONAL_CUSTODY_GROUP: Gwei = 32000000000 BALANCE_PER_ADDITIONAL_CUSTODY_GROUP: Gwei = 32000000000
</spec> </spec>
- name: BELLATRIX_FORK_EPOCH - name: BELLATRIX_FORK_EPOCH#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BellatrixForkEpoch\s+primitives.Epoch search: BellatrixForkEpoch\s+primitives.Epoch
@@ -98,7 +112,7 @@
BELLATRIX_FORK_EPOCH: Epoch = 144896 BELLATRIX_FORK_EPOCH: Epoch = 144896
</spec> </spec>
- name: BELLATRIX_FORK_VERSION - name: BELLATRIX_FORK_VERSION#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BellatrixForkVersion\s+\[]byte search: BellatrixForkVersion\s+\[]byte
@@ -108,7 +122,7 @@
BELLATRIX_FORK_VERSION: Version = '0x02000000' BELLATRIX_FORK_VERSION: Version = '0x02000000'
</spec> </spec>
- name: BLOB_SCHEDULE - name: BLOB_SCHEDULE#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BlobSchedule\s+\[]BlobScheduleEntry search: BlobSchedule\s+\[]BlobScheduleEntry
@@ -127,7 +141,7 @@
) )
</spec> </spec>
- name: BLOB_SIDECAR_SUBNET_COUNT - name: BLOB_SIDECAR_SUBNET_COUNT#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BlobsidecarSubnetCount\s+uint64 search: BlobsidecarSubnetCount\s+uint64
@@ -137,7 +151,7 @@
BLOB_SIDECAR_SUBNET_COUNT = 6 BLOB_SIDECAR_SUBNET_COUNT = 6
</spec> </spec>
- name: BLOB_SIDECAR_SUBNET_COUNT_ELECTRA - name: BLOB_SIDECAR_SUBNET_COUNT_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BlobsidecarSubnetCountElectra\s+uint64 search: BlobsidecarSubnetCountElectra\s+uint64
@@ -147,7 +161,7 @@
BLOB_SIDECAR_SUBNET_COUNT_ELECTRA = 9 BLOB_SIDECAR_SUBNET_COUNT_ELECTRA = 9
</spec> </spec>
- name: CAPELLA_FORK_EPOCH - name: CAPELLA_FORK_EPOCH#capella
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: CapellaForkEpoch\s+primitives.Epoch search: CapellaForkEpoch\s+primitives.Epoch
@@ -157,7 +171,7 @@
CAPELLA_FORK_EPOCH: Epoch = 194048 CAPELLA_FORK_EPOCH: Epoch = 194048
</spec> </spec>
- name: CAPELLA_FORK_VERSION - name: CAPELLA_FORK_VERSION#capella
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: CapellaForkVersion\s+\[]byte search: CapellaForkVersion\s+\[]byte
@@ -167,7 +181,7 @@
CAPELLA_FORK_VERSION: Version = '0x03000000' CAPELLA_FORK_VERSION: Version = '0x03000000'
</spec> </spec>
- name: CHURN_LIMIT_QUOTIENT - name: CHURN_LIMIT_QUOTIENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ChurnLimitQuotient\s+uint64 search: ChurnLimitQuotient\s+uint64
@@ -177,7 +191,7 @@
CHURN_LIMIT_QUOTIENT: uint64 = 65536 CHURN_LIMIT_QUOTIENT: uint64 = 65536
</spec> </spec>
- name: CONTRIBUTION_DUE_BPS - name: CONTRIBUTION_DUE_BPS#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ContributionDueBPS\s+primitives.BP search: ContributionDueBPS\s+primitives.BP
@@ -187,7 +201,14 @@
CONTRIBUTION_DUE_BPS: uint64 = 6667 CONTRIBUTION_DUE_BPS: uint64 = 6667
</spec> </spec>
- name: CUSTODY_REQUIREMENT - name: CONTRIBUTION_DUE_BPS_GLOAS#gloas
sources: []
spec: |
<spec config_var="CONTRIBUTION_DUE_BPS_GLOAS" fork="gloas" hash="0ead2ac1">
CONTRIBUTION_DUE_BPS_GLOAS: uint64 = 5000
</spec>
- name: CUSTODY_REQUIREMENT#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: CustodyRequirement\s+uint64.*yaml:"CUSTODY_REQUIREMENT" search: CustodyRequirement\s+uint64.*yaml:"CUSTODY_REQUIREMENT"
@@ -197,7 +218,7 @@
CUSTODY_REQUIREMENT = 4 CUSTODY_REQUIREMENT = 4
</spec> </spec>
- name: DATA_COLUMN_SIDECAR_SUBNET_COUNT - name: DATA_COLUMN_SIDECAR_SUBNET_COUNT#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: DataColumnSidecarSubnetCount\s+uint64 search: DataColumnSidecarSubnetCount\s+uint64
@@ -207,7 +228,7 @@
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128 DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128
</spec> </spec>
- name: DENEB_FORK_EPOCH - name: DENEB_FORK_EPOCH#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: DenebForkEpoch\s+primitives.Epoch search: DenebForkEpoch\s+primitives.Epoch
@@ -217,7 +238,7 @@
DENEB_FORK_EPOCH: Epoch = 269568 DENEB_FORK_EPOCH: Epoch = 269568
</spec> </spec>
- name: DENEB_FORK_VERSION - name: DENEB_FORK_VERSION#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: DenebForkVersion\s+\[]byte search: DenebForkVersion\s+\[]byte
@@ -227,7 +248,7 @@
DENEB_FORK_VERSION: Version = '0x04000000' DENEB_FORK_VERSION: Version = '0x04000000'
</spec> </spec>
- name: EJECTION_BALANCE - name: EJECTION_BALANCE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: EjectionBalance\s+uint64 search: EjectionBalance\s+uint64
@@ -237,7 +258,7 @@
EJECTION_BALANCE: Gwei = 16000000000 EJECTION_BALANCE: Gwei = 16000000000
</spec> </spec>
- name: ELECTRA_FORK_EPOCH - name: ELECTRA_FORK_EPOCH#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ElectraForkEpoch\s+primitives.Epoch search: ElectraForkEpoch\s+primitives.Epoch
@@ -247,7 +268,7 @@
ELECTRA_FORK_EPOCH: Epoch = 364032 ELECTRA_FORK_EPOCH: Epoch = 364032
</spec> </spec>
- name: ELECTRA_FORK_VERSION - name: ELECTRA_FORK_VERSION#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ElectraForkVersion\s+\[]byte search: ElectraForkVersion\s+\[]byte
@@ -257,7 +278,7 @@
ELECTRA_FORK_VERSION: Version = '0x05000000' ELECTRA_FORK_VERSION: Version = '0x05000000'
</spec> </spec>
- name: EPOCHS_PER_SUBNET_SUBSCRIPTION - name: EPOCHS_PER_SUBNET_SUBSCRIPTION#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: EpochsPerSubnetSubscription\s+uint64 search: EpochsPerSubnetSubscription\s+uint64
@@ -267,7 +288,7 @@
EPOCHS_PER_SUBNET_SUBSCRIPTION = 256 EPOCHS_PER_SUBNET_SUBSCRIPTION = 256
</spec> </spec>
- name: ETH1_FOLLOW_DISTANCE - name: ETH1_FOLLOW_DISTANCE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: Eth1FollowDistance\s+uint64 search: Eth1FollowDistance\s+uint64
@@ -277,7 +298,7 @@
ETH1_FOLLOW_DISTANCE: uint64 = 2048 ETH1_FOLLOW_DISTANCE: uint64 = 2048
</spec> </spec>
- name: FULU_FORK_EPOCH - name: FULU_FORK_EPOCH#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: FuluForkEpoch\s+primitives.Epoch search: FuluForkEpoch\s+primitives.Epoch
@@ -287,7 +308,7 @@
FULU_FORK_EPOCH: Epoch = 411392 FULU_FORK_EPOCH: Epoch = 411392
</spec> </spec>
- name: FULU_FORK_VERSION - name: FULU_FORK_VERSION#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: FuluForkVersion\s+\[]byte search: FuluForkVersion\s+\[]byte
@@ -297,7 +318,7 @@
FULU_FORK_VERSION: Version = '0x06000000' FULU_FORK_VERSION: Version = '0x06000000'
</spec> </spec>
- name: GENESIS_DELAY - name: GENESIS_DELAY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: GenesisDelay\s+uint64 search: GenesisDelay\s+uint64
@@ -307,7 +328,7 @@
GENESIS_DELAY: uint64 = 604800 GENESIS_DELAY: uint64 = 604800
</spec> </spec>
- name: GENESIS_FORK_VERSION - name: GENESIS_FORK_VERSION#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: GenesisForkVersion\s+\[]byte search: GenesisForkVersion\s+\[]byte
@@ -317,7 +338,21 @@
GENESIS_FORK_VERSION: Version = '0x00000000' GENESIS_FORK_VERSION: Version = '0x00000000'
</spec> </spec>
- name: INACTIVITY_SCORE_BIAS - name: GLOAS_FORK_EPOCH#gloas
sources: []
spec: |
<spec config_var="GLOAS_FORK_EPOCH" fork="gloas" hash="c25152cf">
GLOAS_FORK_EPOCH: Epoch = 18446744073709551615
</spec>
- name: GLOAS_FORK_VERSION#gloas
sources: []
spec: |
<spec config_var="GLOAS_FORK_VERSION" fork="gloas" hash="c1c5c007">
GLOAS_FORK_VERSION: Version = '0x07000000'
</spec>
- name: INACTIVITY_SCORE_BIAS#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: InactivityScoreBias\s+uint64 search: InactivityScoreBias\s+uint64
@@ -327,7 +362,7 @@
INACTIVITY_SCORE_BIAS: uint64 = 4 INACTIVITY_SCORE_BIAS: uint64 = 4
</spec> </spec>
- name: INACTIVITY_SCORE_RECOVERY_RATE - name: INACTIVITY_SCORE_RECOVERY_RATE#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: InactivityScoreRecoveryRate\s+uint64 search: InactivityScoreRecoveryRate\s+uint64
@@ -337,7 +372,7 @@
INACTIVITY_SCORE_RECOVERY_RATE: uint64 = 16 INACTIVITY_SCORE_RECOVERY_RATE: uint64 = 16
</spec> </spec>
- name: MAXIMUM_GOSSIP_CLOCK_DISPARITY - name: MAXIMUM_GOSSIP_CLOCK_DISPARITY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaximumGossipClockDisparity\s+uint64 search: MaximumGossipClockDisparity\s+uint64
@@ -347,7 +382,7 @@
MAXIMUM_GOSSIP_CLOCK_DISPARITY = 500 MAXIMUM_GOSSIP_CLOCK_DISPARITY = 500
</spec> </spec>
- name: MAX_BLOBS_PER_BLOCK - name: MAX_BLOBS_PER_BLOCK#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: DeprecatedMaxBlobsPerBlock\s+int search: DeprecatedMaxBlobsPerBlock\s+int
@@ -357,7 +392,7 @@
MAX_BLOBS_PER_BLOCK: uint64 = 6 MAX_BLOBS_PER_BLOCK: uint64 = 6
</spec> </spec>
- name: MAX_BLOBS_PER_BLOCK_ELECTRA - name: MAX_BLOBS_PER_BLOCK_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: DeprecatedMaxBlobsPerBlockElectra\s+int search: DeprecatedMaxBlobsPerBlockElectra\s+int
@@ -367,7 +402,7 @@
MAX_BLOBS_PER_BLOCK_ELECTRA: uint64 = 9 MAX_BLOBS_PER_BLOCK_ELECTRA: uint64 = 9
</spec> </spec>
- name: MAX_PAYLOAD_SIZE - name: MAX_PAYLOAD_SIZE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxPayloadSize\s+uint64 search: MaxPayloadSize\s+uint64
@@ -377,7 +412,7 @@
MAX_PAYLOAD_SIZE = 10485760 MAX_PAYLOAD_SIZE = 10485760
</spec> </spec>
- name: MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT - name: MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxPerEpochActivationChurnLimit\s+uint64 search: MaxPerEpochActivationChurnLimit\s+uint64
@@ -387,7 +422,7 @@
MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT: uint64 = 8 MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT: uint64 = 8
</spec> </spec>
- name: MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT - name: MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxPerEpochActivationExitChurnLimit\s+uint64 search: MaxPerEpochActivationExitChurnLimit\s+uint64
@@ -397,7 +432,7 @@
MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT: Gwei = 256000000000 MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT: Gwei = 256000000000
</spec> </spec>
- name: MAX_REQUEST_BLOB_SIDECARS - name: MAX_REQUEST_BLOB_SIDECARS#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxRequestBlobSidecars\s+uint64 search: MaxRequestBlobSidecars\s+uint64
@@ -407,7 +442,7 @@
MAX_REQUEST_BLOB_SIDECARS = 768 MAX_REQUEST_BLOB_SIDECARS = 768
</spec> </spec>
- name: MAX_REQUEST_BLOB_SIDECARS_ELECTRA - name: MAX_REQUEST_BLOB_SIDECARS_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxRequestBlobSidecarsElectra\s+uint64 search: MaxRequestBlobSidecarsElectra\s+uint64
@@ -417,7 +452,7 @@
MAX_REQUEST_BLOB_SIDECARS_ELECTRA = 1152 MAX_REQUEST_BLOB_SIDECARS_ELECTRA = 1152
</spec> </spec>
- name: MAX_REQUEST_BLOCKS - name: MAX_REQUEST_BLOCKS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxRequestBlocks\s+uint64 search: MaxRequestBlocks\s+uint64
@@ -427,7 +462,7 @@
MAX_REQUEST_BLOCKS = 1024 MAX_REQUEST_BLOCKS = 1024
</spec> </spec>
- name: MAX_REQUEST_BLOCKS_DENEB - name: MAX_REQUEST_BLOCKS_DENEB#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxRequestBlocksDeneb\s+uint64 search: MaxRequestBlocksDeneb\s+uint64
@@ -437,7 +472,7 @@
MAX_REQUEST_BLOCKS_DENEB = 128 MAX_REQUEST_BLOCKS_DENEB = 128
</spec> </spec>
- name: MAX_REQUEST_DATA_COLUMN_SIDECARS - name: MAX_REQUEST_DATA_COLUMN_SIDECARS#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxRequestDataColumnSidecars\s+uint64 search: MaxRequestDataColumnSidecars\s+uint64
@@ -447,7 +482,14 @@
MAX_REQUEST_DATA_COLUMN_SIDECARS = 16384 MAX_REQUEST_DATA_COLUMN_SIDECARS = 16384
</spec> </spec>
- name: MESSAGE_DOMAIN_INVALID_SNAPPY - name: MAX_REQUEST_PAYLOADS#gloas
sources: []
spec: |
<spec config_var="MAX_REQUEST_PAYLOADS" fork="gloas" hash="23399ee5">
MAX_REQUEST_PAYLOADS = 128
</spec>
- name: MESSAGE_DOMAIN_INVALID_SNAPPY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MessageDomainInvalidSnappy\s+\[4\]byte search: MessageDomainInvalidSnappy\s+\[4\]byte
@@ -457,7 +499,7 @@
MESSAGE_DOMAIN_INVALID_SNAPPY: DomainType = '0x00000000' MESSAGE_DOMAIN_INVALID_SNAPPY: DomainType = '0x00000000'
</spec> </spec>
- name: MESSAGE_DOMAIN_VALID_SNAPPY - name: MESSAGE_DOMAIN_VALID_SNAPPY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MessageDomainValidSnappy\s+\[4\]byte search: MessageDomainValidSnappy\s+\[4\]byte
@@ -467,7 +509,14 @@
MESSAGE_DOMAIN_VALID_SNAPPY: DomainType = '0x01000000' MESSAGE_DOMAIN_VALID_SNAPPY: DomainType = '0x01000000'
</spec> </spec>
- name: MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS - name: MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
sources: []
spec: |
<spec config_var="MIN_BUILDER_WITHDRAWABILITY_DELAY" fork="gloas" hash="d378428f">
MIN_BUILDER_WITHDRAWABILITY_DELAY: uint64 = 4096
</spec>
- name: MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinEpochsForBlobsSidecarsRequest\s+primitives.Epoch search: MinEpochsForBlobsSidecarsRequest\s+primitives.Epoch
@@ -477,7 +526,7 @@
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096 MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096
</spec> </spec>
- name: MIN_EPOCHS_FOR_BLOCK_REQUESTS - name: MIN_EPOCHS_FOR_BLOCK_REQUESTS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinEpochsForBlockRequests\s+uint64 search: MinEpochsForBlockRequests\s+uint64
@@ -487,7 +536,7 @@
MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024
</spec> </spec>
- name: MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS - name: MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinEpochsForDataColumnSidecarsRequest\s+primitives.Epoch search: MinEpochsForDataColumnSidecarsRequest\s+primitives.Epoch
@@ -497,7 +546,7 @@
MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS = 4096 MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS = 4096
</spec> </spec>
- name: MIN_GENESIS_ACTIVE_VALIDATOR_COUNT - name: MIN_GENESIS_ACTIVE_VALIDATOR_COUNT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinGenesisActiveValidatorCount\s+uint64 search: MinGenesisActiveValidatorCount\s+uint64
@@ -507,7 +556,7 @@
MIN_GENESIS_ACTIVE_VALIDATOR_COUNT: uint64 = 16384 MIN_GENESIS_ACTIVE_VALIDATOR_COUNT: uint64 = 16384
</spec> </spec>
- name: MIN_GENESIS_TIME - name: MIN_GENESIS_TIME#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinGenesisTime\s+uint64 search: MinGenesisTime\s+uint64
@@ -517,7 +566,7 @@
MIN_GENESIS_TIME: uint64 = 1606824000 MIN_GENESIS_TIME: uint64 = 1606824000
</spec> </spec>
- name: MIN_PER_EPOCH_CHURN_LIMIT - name: MIN_PER_EPOCH_CHURN_LIMIT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinPerEpochChurnLimit\s+uint64 search: MinPerEpochChurnLimit\s+uint64
@@ -527,7 +576,7 @@
MIN_PER_EPOCH_CHURN_LIMIT: uint64 = 4 MIN_PER_EPOCH_CHURN_LIMIT: uint64 = 4
</spec> </spec>
- name: MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA - name: MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinPerEpochChurnLimitElectra\s+uint64 search: MinPerEpochChurnLimitElectra\s+uint64
@@ -537,7 +586,7 @@
MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: Gwei = 128000000000 MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: Gwei = 128000000000
</spec> </spec>
- name: MIN_VALIDATOR_WITHDRAWABILITY_DELAY - name: MIN_VALIDATOR_WITHDRAWABILITY_DELAY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinValidatorWithdrawabilityDelay\s+primitives.Epoch search: MinValidatorWithdrawabilityDelay\s+primitives.Epoch
@@ -547,7 +596,7 @@
MIN_VALIDATOR_WITHDRAWABILITY_DELAY: uint64 = 256 MIN_VALIDATOR_WITHDRAWABILITY_DELAY: uint64 = 256
</spec> </spec>
- name: NUMBER_OF_CUSTODY_GROUPS - name: NUMBER_OF_CUSTODY_GROUPS#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: NumberOfCustodyGroups\s+uint64 search: NumberOfCustodyGroups\s+uint64
@@ -557,7 +606,14 @@
NUMBER_OF_CUSTODY_GROUPS = 128 NUMBER_OF_CUSTODY_GROUPS = 128
</spec> </spec>
- name: PROPOSER_REORG_CUTOFF_BPS - name: PAYLOAD_ATTESTATION_DUE_BPS#gloas
sources: []
spec: |
<spec config_var="PAYLOAD_ATTESTATION_DUE_BPS" fork="gloas" hash="17307d0e">
PAYLOAD_ATTESTATION_DUE_BPS: uint64 = 7500
</spec>
- name: PROPOSER_REORG_CUTOFF_BPS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ProposerReorgCutoffBPS\s+primitives.BP search: ProposerReorgCutoffBPS\s+primitives.BP
@@ -567,7 +623,7 @@
PROPOSER_REORG_CUTOFF_BPS: uint64 = 1667 PROPOSER_REORG_CUTOFF_BPS: uint64 = 1667
</spec> </spec>
- name: PROPOSER_SCORE_BOOST - name: PROPOSER_SCORE_BOOST#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ProposerScoreBoost\s+uint64 search: ProposerScoreBoost\s+uint64
@@ -577,7 +633,7 @@
PROPOSER_SCORE_BOOST: uint64 = 40 PROPOSER_SCORE_BOOST: uint64 = 40
</spec> </spec>
- name: REORG_HEAD_WEIGHT_THRESHOLD - name: REORG_HEAD_WEIGHT_THRESHOLD#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ReorgHeadWeightThreshold\s+uint64 search: ReorgHeadWeightThreshold\s+uint64
@@ -587,7 +643,7 @@
REORG_HEAD_WEIGHT_THRESHOLD: uint64 = 20 REORG_HEAD_WEIGHT_THRESHOLD: uint64 = 20
</spec> </spec>
- name: REORG_MAX_EPOCHS_SINCE_FINALIZATION - name: REORG_MAX_EPOCHS_SINCE_FINALIZATION#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ReorgMaxEpochsSinceFinalization\s+primitives.Epoch search: ReorgMaxEpochsSinceFinalization\s+primitives.Epoch
@@ -597,7 +653,7 @@
REORG_MAX_EPOCHS_SINCE_FINALIZATION: Epoch = 2 REORG_MAX_EPOCHS_SINCE_FINALIZATION: Epoch = 2
</spec> </spec>
- name: REORG_PARENT_WEIGHT_THRESHOLD - name: REORG_PARENT_WEIGHT_THRESHOLD#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ReorgParentWeightThreshold\s+uint64 search: ReorgParentWeightThreshold\s+uint64
@@ -607,7 +663,7 @@
REORG_PARENT_WEIGHT_THRESHOLD: uint64 = 160 REORG_PARENT_WEIGHT_THRESHOLD: uint64 = 160
</spec> </spec>
- name: SAMPLES_PER_SLOT - name: SAMPLES_PER_SLOT#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: SamplesPerSlot\s+uint64 search: SamplesPerSlot\s+uint64
@@ -617,7 +673,7 @@
SAMPLES_PER_SLOT = 8 SAMPLES_PER_SLOT = 8
</spec> </spec>
- name: SECONDS_PER_ETH1_BLOCK - name: SECONDS_PER_ETH1_BLOCK#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: SecondsPerETH1Block\s+uint64 search: SecondsPerETH1Block\s+uint64
@@ -627,7 +683,7 @@
SECONDS_PER_ETH1_BLOCK: uint64 = 14 SECONDS_PER_ETH1_BLOCK: uint64 = 14
</spec> </spec>
- name: SECONDS_PER_SLOT - name: SECONDS_PER_SLOT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: SecondsPerSlot\s+uint64 search: SecondsPerSlot\s+uint64
@@ -637,7 +693,7 @@
SECONDS_PER_SLOT: uint64 = 12 SECONDS_PER_SLOT: uint64 = 12
</spec> </spec>
- name: SHARD_COMMITTEE_PERIOD - name: SHARD_COMMITTEE_PERIOD#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ShardCommitteePeriod\s+primitives.Epoch search: ShardCommitteePeriod\s+primitives.Epoch
@@ -647,7 +703,7 @@
SHARD_COMMITTEE_PERIOD: uint64 = 256 SHARD_COMMITTEE_PERIOD: uint64 = 256
</spec> </spec>
- name: SLOT_DURATION_MS - name: SLOT_DURATION_MS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: SlotDurationMilliseconds\s+uint64 search: SlotDurationMilliseconds\s+uint64
@@ -657,7 +713,7 @@
SLOT_DURATION_MS: uint64 = 12000 SLOT_DURATION_MS: uint64 = 12000
</spec> </spec>
- name: SUBNETS_PER_NODE - name: SUBNETS_PER_NODE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: SubnetsPerNode\s+uint64 search: SubnetsPerNode\s+uint64
@@ -667,7 +723,7 @@
SUBNETS_PER_NODE = 2 SUBNETS_PER_NODE = 2
</spec> </spec>
- name: SYNC_MESSAGE_DUE_BPS - name: SYNC_MESSAGE_DUE_BPS#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: SyncMessageDueBPS\s+primitives.BP search: SyncMessageDueBPS\s+primitives.BP
@@ -677,7 +733,14 @@
SYNC_MESSAGE_DUE_BPS: uint64 = 3333 SYNC_MESSAGE_DUE_BPS: uint64 = 3333
</spec> </spec>
- name: TERMINAL_BLOCK_HASH - name: SYNC_MESSAGE_DUE_BPS_GLOAS#gloas
sources: []
spec: |
<spec config_var="SYNC_MESSAGE_DUE_BPS_GLOAS" fork="gloas" hash="47f14d95">
SYNC_MESSAGE_DUE_BPS_GLOAS: uint64 = 2500
</spec>
- name: TERMINAL_BLOCK_HASH#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: TerminalBlockHash\s+common.Hash search: TerminalBlockHash\s+common.Hash
@@ -687,7 +750,7 @@
TERMINAL_BLOCK_HASH: Hash32 = '0x0000000000000000000000000000000000000000000000000000000000000000' TERMINAL_BLOCK_HASH: Hash32 = '0x0000000000000000000000000000000000000000000000000000000000000000'
</spec> </spec>
- name: TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH - name: TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: TerminalBlockHashActivationEpoch\s+primitives.Epoch search: TerminalBlockHashActivationEpoch\s+primitives.Epoch
@@ -697,7 +760,7 @@
TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH = 18446744073709551615 TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH = 18446744073709551615
</spec> </spec>
- name: TERMINAL_TOTAL_DIFFICULTY - name: TERMINAL_TOTAL_DIFFICULTY#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: TerminalTotalDifficulty\s+string search: TerminalTotalDifficulty\s+string
@@ -707,7 +770,7 @@
TERMINAL_TOTAL_DIFFICULTY = 58750000000000000000000 TERMINAL_TOTAL_DIFFICULTY = 58750000000000000000000
</spec> </spec>
- name: VALIDATOR_CUSTODY_REQUIREMENT - name: VALIDATOR_CUSTODY_REQUIREMENT#fulu
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ValidatorCustodyRequirement\s+uint64 search: ValidatorCustodyRequirement\s+uint64

File diff suppressed because one or more lines are too long


@@ -50,7 +50,7 @@
committee_bits: Bitvector[MAX_COMMITTEES_PER_SLOT] committee_bits: Bitvector[MAX_COMMITTEES_PER_SLOT]
</spec> </spec>
- name: AttestationData - name: AttestationData#phase0
sources: sources:
- file: proto/eth/v1/attestation.proto - file: proto/eth/v1/attestation.proto
search: message AttestationData { search: message AttestationData {
@@ -88,7 +88,7 @@
attestation_2: IndexedAttestation attestation_2: IndexedAttestation
</spec> </spec>
- name: BLSToExecutionChange - name: BLSToExecutionChange#capella
sources: sources:
- file: proto/prysm/v1alpha1/withdrawals.proto - file: proto/prysm/v1alpha1/withdrawals.proto
search: message BLSToExecutionChange { search: message BLSToExecutionChange {
@@ -100,7 +100,7 @@
to_execution_address: ExecutionAddress to_execution_address: ExecutionAddress
</spec> </spec>
- name: BeaconBlock - name: BeaconBlock#phase0
sources: sources:
- file: proto/eth/v1/beacon_block.proto - file: proto/eth/v1/beacon_block.proto
search: message BeaconBlock { search: message BeaconBlock {
@@ -239,7 +239,34 @@
execution_requests: ExecutionRequests execution_requests: ExecutionRequests
</spec> </spec>
- name: BeaconBlockHeader - name: BeaconBlockBody#gloas
sources: []
spec: |
<spec ssz_object="BeaconBlockBody" fork="gloas" hash="7e472a77">
class BeaconBlockBody(Container):
randao_reveal: BLSSignature
eth1_data: Eth1Data
graffiti: Bytes32
proposer_slashings: List[ProposerSlashing, MAX_PROPOSER_SLASHINGS]
attester_slashings: List[AttesterSlashing, MAX_ATTESTER_SLASHINGS_ELECTRA]
attestations: List[Attestation, MAX_ATTESTATIONS_ELECTRA]
deposits: List[Deposit, MAX_DEPOSITS]
voluntary_exits: List[SignedVoluntaryExit, MAX_VOLUNTARY_EXITS]
sync_aggregate: SyncAggregate
# [Modified in Gloas:EIP7732]
# Removed `execution_payload`
bls_to_execution_changes: List[SignedBLSToExecutionChange, MAX_BLS_TO_EXECUTION_CHANGES]
# [Modified in Gloas:EIP7732]
# Removed `blob_kzg_commitments`
# [Modified in Gloas:EIP7732]
# Removed `execution_requests`
# [New in Gloas:EIP7732]
signed_execution_payload_bid: SignedExecutionPayloadBid
# [New in Gloas:EIP7732]
payload_attestations: List[PayloadAttestation, MAX_PAYLOAD_ATTESTATIONS]
</spec>
- name: BeaconBlockHeader#phase0
sources: sources:
- file: proto/eth/v1/beacon_block.proto - file: proto/eth/v1/beacon_block.proto
search: message BeaconBlockHeader { search: message BeaconBlockHeader {
@@ -538,7 +565,69 @@
proposer_lookahead: Vector[ValidatorIndex, (MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH] proposer_lookahead: Vector[ValidatorIndex, (MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH]
</spec> </spec>
- name: BlobIdentifier - name: BeaconState#gloas
sources: []
spec: |
<spec ssz_object="BeaconState" fork="gloas" hash="c33b0db2">
class BeaconState(Container):
genesis_time: uint64
genesis_validators_root: Root
slot: Slot
fork: Fork
latest_block_header: BeaconBlockHeader
block_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
state_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
historical_roots: List[Root, HISTORICAL_ROOTS_LIMIT]
eth1_data: Eth1Data
eth1_data_votes: List[Eth1Data, EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH]
eth1_deposit_index: uint64
validators: List[Validator, VALIDATOR_REGISTRY_LIMIT]
balances: List[Gwei, VALIDATOR_REGISTRY_LIMIT]
randao_mixes: Vector[Bytes32, EPOCHS_PER_HISTORICAL_VECTOR]
slashings: Vector[Gwei, EPOCHS_PER_SLASHINGS_VECTOR]
previous_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
current_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH]
previous_justified_checkpoint: Checkpoint
current_justified_checkpoint: Checkpoint
finalized_checkpoint: Checkpoint
inactivity_scores: List[uint64, VALIDATOR_REGISTRY_LIMIT]
current_sync_committee: SyncCommittee
next_sync_committee: SyncCommittee
# [Modified in Gloas:EIP7732]
# Removed `latest_execution_payload_header`
# [New in Gloas:EIP7732]
latest_execution_payload_bid: ExecutionPayloadBid
next_withdrawal_index: WithdrawalIndex
next_withdrawal_validator_index: ValidatorIndex
historical_summaries: List[HistoricalSummary, HISTORICAL_ROOTS_LIMIT]
deposit_requests_start_index: uint64
deposit_balance_to_consume: Gwei
exit_balance_to_consume: Gwei
earliest_exit_epoch: Epoch
consolidation_balance_to_consume: Gwei
earliest_consolidation_epoch: Epoch
pending_deposits: List[PendingDeposit, PENDING_DEPOSITS_LIMIT]
pending_partial_withdrawals: List[PendingPartialWithdrawal, PENDING_PARTIAL_WITHDRAWALS_LIMIT]
pending_consolidations: List[PendingConsolidation, PENDING_CONSOLIDATIONS_LIMIT]
proposer_lookahead: Vector[ValidatorIndex, (MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH]
# [New in Gloas:EIP7732]
builders: List[Builder, BUILDER_REGISTRY_LIMIT]
# [New in Gloas:EIP7732]
next_withdrawal_builder_index: BuilderIndex
# [New in Gloas:EIP7732]
execution_payload_availability: Bitvector[SLOTS_PER_HISTORICAL_ROOT]
# [New in Gloas:EIP7732]
builder_pending_payments: Vector[BuilderPendingPayment, 2 * SLOTS_PER_EPOCH]
# [New in Gloas:EIP7732]
builder_pending_withdrawals: List[BuilderPendingWithdrawal, BUILDER_PENDING_WITHDRAWALS_LIMIT]
# [New in Gloas:EIP7732]
latest_block_hash: Hash32
# [New in Gloas:EIP7732]
payload_expected_withdrawals: List[Withdrawal, MAX_WITHDRAWALS_PER_PAYLOAD]
</spec>
- name: BlobIdentifier#deneb
sources: sources:
- file: proto/prysm/v1alpha1/blobs.proto - file: proto/prysm/v1alpha1/blobs.proto
search: message BlobIdentifier { search: message BlobIdentifier {
@@ -549,7 +638,7 @@
index: BlobIndex index: BlobIndex
</spec> </spec>
- name: BlobSidecar - name: BlobSidecar#deneb
sources: sources:
- file: proto/prysm/v1alpha1/beacon_block.proto - file: proto/prysm/v1alpha1/beacon_block.proto
search: message BlobSidecar { search: message BlobSidecar {
@@ -564,7 +653,39 @@
kzg_commitment_inclusion_proof: Vector[Bytes32, KZG_COMMITMENT_INCLUSION_PROOF_DEPTH] kzg_commitment_inclusion_proof: Vector[Bytes32, KZG_COMMITMENT_INCLUSION_PROOF_DEPTH]
</spec> </spec>
- name: Checkpoint - name: Builder#gloas
sources: []
spec: |
<spec ssz_object="Builder" fork="gloas" hash="ae177179">
class Builder(Container):
pubkey: BLSPubkey
version: uint8
execution_address: ExecutionAddress
balance: Gwei
deposit_epoch: Epoch
withdrawable_epoch: Epoch
</spec>
- name: BuilderPendingPayment#gloas
sources: []
spec: |
<spec ssz_object="BuilderPendingPayment" fork="gloas" hash="73cf1649">
class BuilderPendingPayment(Container):
weight: Gwei
withdrawal: BuilderPendingWithdrawal
</spec>
- name: BuilderPendingWithdrawal#gloas
sources: []
spec: |
<spec ssz_object="BuilderPendingWithdrawal" fork="gloas" hash="0579f0ac">
class BuilderPendingWithdrawal(Container):
fee_recipient: ExecutionAddress
amount: Gwei
builder_index: BuilderIndex
</spec>
- name: Checkpoint#phase0
sources: sources:
- file: proto/eth/v1/attestation.proto - file: proto/eth/v1/attestation.proto
search: message Checkpoint { search: message Checkpoint {
@@ -575,7 +696,7 @@
root: Root root: Root
</spec> </spec>
- name: ConsolidationRequest - name: ConsolidationRequest#electra
sources: sources:
- file: proto/engine/v1/electra.proto - file: proto/engine/v1/electra.proto
search: message ConsolidationRequest { search: message ConsolidationRequest {
@@ -587,7 +708,7 @@
target_pubkey: BLSPubkey target_pubkey: BLSPubkey
</spec> </spec>
- name: ContributionAndProof - name: ContributionAndProof#altair
sources: sources:
- file: proto/prysm/v1alpha1/sync_committee.proto - file: proto/prysm/v1alpha1/sync_committee.proto
search: message ContributionAndProof { search: message ContributionAndProof {
@@ -599,7 +720,7 @@
selection_proof: BLSSignature selection_proof: BLSSignature
</spec> </spec>
- name: DataColumnSidecar - name: DataColumnSidecar#fulu
sources: sources:
- file: proto/prysm/v1alpha1/data_columns.proto - file: proto/prysm/v1alpha1/data_columns.proto
search: message DataColumnSidecar { search: message DataColumnSidecar {
@@ -614,7 +735,26 @@
kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH] kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
</spec> </spec>
- name: DataColumnsByRootIdentifier - name: DataColumnSidecar#gloas
sources: []
spec: |
<spec ssz_object="DataColumnSidecar" fork="gloas" hash="8028928b">
class DataColumnSidecar(Container):
index: ColumnIndex
column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
# [Modified in Gloas:EIP7732]
# Removed `signed_block_header`
# [Modified in Gloas:EIP7732]
# Removed `kzg_commitments_inclusion_proof`
# [New in Gloas:EIP7732]
slot: Slot
# [New in Gloas:EIP7732]
beacon_block_root: Root
</spec>
- name: DataColumnsByRootIdentifier#fulu
sources: sources:
- file: proto/prysm/v1alpha1/data_columns.proto - file: proto/prysm/v1alpha1/data_columns.proto
search: message DataColumnsByRootIdentifier { search: message DataColumnsByRootIdentifier {
@@ -625,7 +765,7 @@
columns: List[ColumnIndex, NUMBER_OF_COLUMNS] columns: List[ColumnIndex, NUMBER_OF_COLUMNS]
</spec> </spec>
- name: Deposit - name: Deposit#phase0
sources: sources:
- file: proto/eth/v1/beacon_block.proto - file: proto/eth/v1/beacon_block.proto
search: message Deposit { search: message Deposit {
@@ -636,7 +776,7 @@
data: DepositData data: DepositData
</spec> </spec>
- name: DepositData - name: DepositData#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message Data { search: message Data {
@@ -649,7 +789,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: DepositMessage - name: DepositMessage#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message DepositMessage { search: message DepositMessage {
@@ -661,7 +801,7 @@
amount: Gwei amount: Gwei
</spec> </spec>
- name: DepositRequest - name: DepositRequest#electra
sources: sources:
- file: proto/engine/v1/electra.proto - file: proto/engine/v1/electra.proto
search: message DepositRequest { search: message DepositRequest {
@@ -675,7 +815,17 @@
index: uint64 index: uint64
</spec> </spec>
- name: Eth1Data - name: Eth1Block#phase0
sources: []
spec: |
<spec ssz_object="Eth1Block" fork="phase0" hash="0a5c6b45">
class Eth1Block(Container):
timestamp: uint64
deposit_root: Root
deposit_count: uint64
</spec>
- name: Eth1Data#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message Eth1Data { search: message Eth1Data {
@@ -763,6 +913,38 @@
excess_blob_gas: uint64 excess_blob_gas: uint64
</spec> </spec>
- name: ExecutionPayloadBid#gloas
sources: []
spec: |
<spec ssz_object="ExecutionPayloadBid" fork="gloas" hash="aa71ba16">
class ExecutionPayloadBid(Container):
parent_block_hash: Hash32
parent_block_root: Root
block_hash: Hash32
prev_randao: Bytes32
fee_recipient: ExecutionAddress
gas_limit: uint64
builder_index: BuilderIndex
slot: Slot
value: Gwei
execution_payment: Gwei
blob_kzg_commitments_root: Root
</spec>
- name: ExecutionPayloadEnvelope#gloas
sources: []
spec: |
<spec ssz_object="ExecutionPayloadEnvelope" fork="gloas" hash="cd522f7f">
class ExecutionPayloadEnvelope(Container):
payload: ExecutionPayload
execution_requests: ExecutionRequests
builder_index: BuilderIndex
beacon_block_root: Root
slot: Slot
blob_kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
state_root: Root
</spec>
- name: ExecutionPayloadHeader#bellatrix - name: ExecutionPayloadHeader#bellatrix
sources: sources:
- file: proto/engine/v1/execution_engine.proto - file: proto/engine/v1/execution_engine.proto
@@ -839,7 +1021,7 @@
excess_blob_gas: uint64 excess_blob_gas: uint64
</spec> </spec>
- name: ExecutionRequests - name: ExecutionRequests#electra
sources: sources:
- file: proto/engine/v1/electra.proto - file: proto/engine/v1/electra.proto
search: message ExecutionRequests { search: message ExecutionRequests {
@@ -854,7 +1036,7 @@
consolidations: List[ConsolidationRequest, MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD] consolidations: List[ConsolidationRequest, MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD]
</spec> </spec>
- name: Fork - name: Fork#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message Fork { search: message Fork {
@@ -866,7 +1048,16 @@
epoch: Epoch epoch: Epoch
</spec> </spec>
- name: ForkData - name: ForkChoiceNode#gloas
sources: []
spec: |
<spec ssz_object="ForkChoiceNode" fork="gloas" hash="755a4b34">
class ForkChoiceNode(Container):
root: Root
payload_status: PayloadStatus # One of PAYLOAD_STATUS_* values
</spec>
- name: ForkData#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message ForkData { search: message ForkData {
@@ -877,7 +1068,7 @@
genesis_validators_root: Root genesis_validators_root: Root
</spec> </spec>
- name: HistoricalBatch - name: HistoricalBatch#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message HistoricalBatch { search: message HistoricalBatch {
@@ -888,7 +1079,7 @@
state_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT] state_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
</spec> </spec>
- name: HistoricalSummary - name: HistoricalSummary#capella
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message HistoricalSummary { search: message HistoricalSummary {
@@ -924,7 +1115,17 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: LightClientBootstrap - name: IndexedPayloadAttestation#gloas
sources: []
spec: |
<spec ssz_object="IndexedPayloadAttestation" fork="gloas" hash="fa4832c8">
class IndexedPayloadAttestation(Container):
attesting_indices: List[ValidatorIndex, PTC_SIZE]
data: PayloadAttestationData
signature: BLSSignature
</spec>
- name: LightClientBootstrap#altair
sources: sources:
- file: proto/prysm/v1alpha1/light_client.proto - file: proto/prysm/v1alpha1/light_client.proto
search: message LightClientBootstrapAltair { search: message LightClientBootstrapAltair {
@@ -938,7 +1139,18 @@
current_sync_committee_branch: CurrentSyncCommitteeBranch current_sync_committee_branch: CurrentSyncCommitteeBranch
</spec> </spec>
- name: LightClientFinalityUpdate - name: LightClientBootstrap#capella
sources: []
spec: |
<spec ssz_object="LightClientBootstrap" fork="capella" hash="85f4f5fe">
class LightClientBootstrap(Container):
# [Modified in Capella]
header: LightClientHeader
current_sync_committee: SyncCommittee
current_sync_committee_branch: CurrentSyncCommitteeBranch
</spec>
- name: LightClientFinalityUpdate#altair
sources: sources:
- file: proto/prysm/v1alpha1/light_client.proto - file: proto/prysm/v1alpha1/light_client.proto
search: message LightClientFinalityUpdateAltair { search: message LightClientFinalityUpdateAltair {
@@ -956,6 +1168,20 @@
signature_slot: Slot signature_slot: Slot
</spec> </spec>
- name: LightClientFinalityUpdate#capella
sources: []
spec: |
<spec ssz_object="LightClientFinalityUpdate" fork="capella" hash="9d2b55dd">
class LightClientFinalityUpdate(Container):
# [Modified in Capella]
attested_header: LightClientHeader
# [Modified in Capella]
finalized_header: LightClientHeader
finality_branch: FinalityBranch
sync_aggregate: SyncAggregate
signature_slot: Slot
</spec>
- name: LightClientHeader#altair - name: LightClientHeader#altair
sources: sources:
- file: proto/prysm/v1alpha1/light_client.proto - file: proto/prysm/v1alpha1/light_client.proto
@@ -980,7 +1206,7 @@
execution_branch: ExecutionBranch execution_branch: ExecutionBranch
</spec> </spec>
- name: LightClientOptimisticUpdate - name: LightClientOptimisticUpdate#altair
sources: sources:
- file: proto/prysm/v1alpha1/light_client.proto - file: proto/prysm/v1alpha1/light_client.proto
search: message LightClientOptimisticUpdateAltair { search: message LightClientOptimisticUpdateAltair {
@@ -995,7 +1221,18 @@
signature_slot: Slot signature_slot: Slot
</spec> </spec>
- name: LightClientUpdate - name: LightClientOptimisticUpdate#capella
sources: []
spec: |
<spec ssz_object="LightClientOptimisticUpdate" fork="capella" hash="bdce7b1d">
class LightClientOptimisticUpdate(Container):
# [Modified in Capella]
attested_header: LightClientHeader
sync_aggregate: SyncAggregate
signature_slot: Slot
</spec>
- name: LightClientUpdate#altair
sources: sources:
- file: proto/prysm/v1alpha1/light_client.proto - file: proto/prysm/v1alpha1/light_client.proto
search: message LightClientUpdateAltair { search: message LightClientUpdateAltair {
@@ -1016,7 +1253,65 @@
signature_slot: Slot signature_slot: Slot
</spec> </spec>
- name: PendingAttestation - name: LightClientUpdate#capella
sources: []
spec: |
<spec ssz_object="LightClientUpdate" fork="capella" hash="8d215165">
class LightClientUpdate(Container):
# [Modified in Capella]
attested_header: LightClientHeader
next_sync_committee: SyncCommittee
next_sync_committee_branch: NextSyncCommitteeBranch
# [Modified in Capella]
finalized_header: LightClientHeader
finality_branch: FinalityBranch
sync_aggregate: SyncAggregate
signature_slot: Slot
</spec>
- name: MatrixEntry#fulu
sources: []
spec: |
<spec ssz_object="MatrixEntry" fork="fulu" hash="0da9cc8e">
class MatrixEntry(Container):
cell: Cell
kzg_proof: KZGProof
column_index: ColumnIndex
row_index: RowIndex
</spec>
- name: PayloadAttestation#gloas
sources: []
spec: |
<spec ssz_object="PayloadAttestation" fork="gloas" hash="c769473d">
class PayloadAttestation(Container):
aggregation_bits: Bitvector[PTC_SIZE]
data: PayloadAttestationData
signature: BLSSignature
</spec>
- name: PayloadAttestationData#gloas
sources: []
spec: |
<spec ssz_object="PayloadAttestationData" fork="gloas" hash="9f1b7f92">
class PayloadAttestationData(Container):
beacon_block_root: Root
slot: Slot
payload_present: boolean
blob_data_available: boolean
</spec>
- name: PayloadAttestationMessage#gloas
sources: []
spec: |
<spec ssz_object="PayloadAttestationMessage" fork="gloas" hash="3707678d">
class PayloadAttestationMessage(Container):
validator_index: ValidatorIndex
data: PayloadAttestationData
signature: BLSSignature
</spec>
- name: PendingAttestation#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message PendingAttestation { search: message PendingAttestation {
@@ -1029,7 +1324,7 @@
proposer_index: ValidatorIndex proposer_index: ValidatorIndex
</spec> </spec>
- name: PendingConsolidation - name: PendingConsolidation#electra
sources: sources:
- file: proto/prysm/v1alpha1/eip_7251.proto - file: proto/prysm/v1alpha1/eip_7251.proto
search: message PendingConsolidation { search: message PendingConsolidation {
@@ -1040,7 +1335,7 @@
target_index: ValidatorIndex target_index: ValidatorIndex
</spec> </spec>
- name: PendingDeposit - name: PendingDeposit#electra
sources: sources:
- file: proto/prysm/v1alpha1/eip_7251.proto - file: proto/prysm/v1alpha1/eip_7251.proto
search: message PendingDeposit { search: message PendingDeposit {
@@ -1054,7 +1349,7 @@
slot: Slot slot: Slot
</spec> </spec>
- name: PendingPartialWithdrawal - name: PendingPartialWithdrawal#electra
sources: sources:
- file: proto/prysm/v1alpha1/eip_7251.proto - file: proto/prysm/v1alpha1/eip_7251.proto
search: message PendingPartialWithdrawal { search: message PendingPartialWithdrawal {
@@ -1066,7 +1361,7 @@
withdrawable_epoch: Epoch withdrawable_epoch: Epoch
</spec> </spec>
- name: PowBlock - name: PowBlock#bellatrix
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message PowBlock { search: message PowBlock {
@@ -1078,7 +1373,18 @@
total_difficulty: uint256 total_difficulty: uint256
</spec> </spec>
- name: ProposerSlashing - name: ProposerPreferences#gloas
sources: []
spec: |
<spec ssz_object="ProposerPreferences" fork="gloas" hash="2a38b149">
class ProposerPreferences(Container):
proposal_slot: Slot
validator_index: ValidatorIndex
fee_recipient: ExecutionAddress
gas_limit: uint64
</spec>
- name: ProposerSlashing#phase0
sources: sources:
- file: proto/eth/v1/beacon_block.proto - file: proto/eth/v1/beacon_block.proto
search: message ProposerSlashing { search: message ProposerSlashing {
@@ -1112,7 +1418,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SignedBLSToExecutionChange - name: SignedBLSToExecutionChange#capella
sources: sources:
- file: proto/prysm/v1alpha1/withdrawals.proto - file: proto/prysm/v1alpha1/withdrawals.proto
search: message SignedBLSToExecutionChange { search: message SignedBLSToExecutionChange {
@@ -1123,7 +1429,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SignedBeaconBlock - name: SignedBeaconBlock#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_block.proto - file: proto/prysm/v1alpha1/beacon_block.proto
search: message SignedBeaconBlock { search: message SignedBeaconBlock {
@@ -1134,7 +1440,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SignedBeaconBlockHeader - name: SignedBeaconBlockHeader#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message SignedBeaconBlockHeader { search: message SignedBeaconBlockHeader {
@@ -1145,7 +1451,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SignedContributionAndProof - name: SignedContributionAndProof#altair
sources: sources:
- file: proto/prysm/v1alpha1/sync_committee.proto - file: proto/prysm/v1alpha1/sync_committee.proto
search: message SignedContributionAndProof { search: message SignedContributionAndProof {
@@ -1156,7 +1462,34 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SignedVoluntaryExit - name: SignedExecutionPayloadBid#gloas
sources: []
spec: |
<spec ssz_object="SignedExecutionPayloadBid" fork="gloas" hash="6b344341">
class SignedExecutionPayloadBid(Container):
message: ExecutionPayloadBid
signature: BLSSignature
</spec>
- name: SignedExecutionPayloadEnvelope#gloas
sources: []
spec: |
<spec ssz_object="SignedExecutionPayloadEnvelope" fork="gloas" hash="ab8f3404">
class SignedExecutionPayloadEnvelope(Container):
message: ExecutionPayloadEnvelope
signature: BLSSignature
</spec>
- name: SignedProposerPreferences#gloas
sources: []
spec: |
<spec ssz_object="SignedProposerPreferences" fork="gloas" hash="2142774c">
class SignedProposerPreferences(Container):
message: ProposerPreferences
signature: BLSSignature
</spec>
- name: SignedVoluntaryExit#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message SignedVoluntaryExit { search: message SignedVoluntaryExit {
@@ -1167,7 +1500,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SigningData - name: SigningData#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message SigningData { search: message SigningData {
@@ -1178,7 +1511,7 @@
domain: Domain domain: Domain
</spec> </spec>
- name: SingleAttestation - name: SingleAttestation#electra
sources: sources:
- file: proto/prysm/v1alpha1/attestation.proto - file: proto/prysm/v1alpha1/attestation.proto
search: message SingleAttestation { search: message SingleAttestation {
@@ -1191,7 +1524,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SyncAggregate - name: SyncAggregate#altair
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message SyncAggregate { search: message SyncAggregate {
@@ -1202,7 +1535,7 @@
sync_committee_signature: BLSSignature sync_committee_signature: BLSSignature
</spec> </spec>
- name: SyncAggregatorSelectionData - name: SyncAggregatorSelectionData#altair
sources: sources:
- file: proto/prysm/v1alpha1/beacon_state.proto - file: proto/prysm/v1alpha1/beacon_state.proto
search: message SyncAggregatorSelectionData { search: message SyncAggregatorSelectionData {
@@ -1213,7 +1546,7 @@
subcommittee_index: uint64 subcommittee_index: uint64
</spec> </spec>
- name: SyncCommittee - name: SyncCommittee#altair
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message SyncCommittee { search: message SyncCommittee {
@@ -1224,7 +1557,7 @@
aggregate_pubkey: BLSPubkey aggregate_pubkey: BLSPubkey
</spec> </spec>
- name: SyncCommitteeContribution - name: SyncCommitteeContribution#altair
sources: sources:
- file: proto/prysm/v1alpha1/sync_committee.proto - file: proto/prysm/v1alpha1/sync_committee.proto
search: message SyncCommitteeContribution { search: message SyncCommitteeContribution {
@@ -1238,7 +1571,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: SyncCommitteeMessage - name: SyncCommitteeMessage#altair
sources: sources:
- file: proto/prysm/v1alpha1/sync_committee.proto - file: proto/prysm/v1alpha1/sync_committee.proto
search: message SyncCommitteeMessage { search: message SyncCommitteeMessage {
@@ -1251,7 +1584,7 @@
signature: BLSSignature signature: BLSSignature
</spec> </spec>
- name: Validator - name: Validator#phase0
sources: sources:
- file: proto/prysm/v1alpha1/beacon_core_types.proto - file: proto/prysm/v1alpha1/beacon_core_types.proto
search: message Validator { search: message Validator {
@@ -1268,7 +1601,7 @@
withdrawable_epoch: Epoch withdrawable_epoch: Epoch
</spec> </spec>
- name: VoluntaryExit - name: VoluntaryExit#phase0
sources: sources:
- file: proto/eth/v1/beacon_block.proto - file: proto/eth/v1/beacon_block.proto
search: message VoluntaryExit { search: message VoluntaryExit {
@@ -1279,7 +1612,7 @@
validator_index: ValidatorIndex validator_index: ValidatorIndex
</spec> </spec>
- name: Withdrawal - name: Withdrawal#capella
sources: sources:
- file: proto/engine/v1/execution_engine.proto - file: proto/engine/v1/execution_engine.proto
search: message Withdrawal { search: message Withdrawal {
@@ -1292,7 +1625,7 @@
amount: Gwei amount: Gwei
</spec> </spec>
- name: WithdrawalRequest - name: WithdrawalRequest#electra
sources: sources:
- file: proto/engine/v1/electra.proto - file: proto/engine/v1/electra.proto
search: message WithdrawalRequest { search: message WithdrawalRequest {


@@ -1,4 +1,4 @@
- name: BlobParameters - name: BlobParameters#fulu
sources: [] sources: []
spec: | spec: |
<spec dataclass="BlobParameters" fork="fulu" hash="a4575aa8"> <spec dataclass="BlobParameters" fork="fulu" hash="a4575aa8">
@@ -54,6 +54,20 @@
processed_sweep_withdrawals_count: uint64 processed_sweep_withdrawals_count: uint64
</spec> </spec>
- name: ExpectedWithdrawals#gloas
sources: []
spec: |
<spec dataclass="ExpectedWithdrawals" fork="gloas" hash="b32cc9c9">
class ExpectedWithdrawals(object):
withdrawals: Sequence[Withdrawal]
# [New in Gloas:EIP7732]
processed_builder_withdrawals_count: uint64
processed_partial_withdrawals_count: uint64
# [New in Gloas:EIP7732]
processed_builders_sweep_count: uint64
processed_sweep_withdrawals_count: uint64
</spec>
- name: GetPayloadResponse#bellatrix - name: GetPayloadResponse#bellatrix
sources: sources:
- file: consensus-types/blocks/get_payload.go - file: consensus-types/blocks/get_payload.go
@@ -126,7 +140,7 @@
execution_requests: Sequence[bytes] execution_requests: Sequence[bytes]
</spec> </spec>
- name: LatestMessage - name: LatestMessage#phase0
sources: [] sources: []
spec: | spec: |
<spec dataclass="LatestMessage" fork="phase0" hash="44e832d0"> <spec dataclass="LatestMessage" fork="phase0" hash="44e832d0">
@@ -136,7 +150,18 @@
root: Root root: Root
</spec> </spec>
- name: LightClientStore - name: LatestMessage#gloas
sources: []
spec: |
<spec dataclass="LatestMessage" fork="gloas" hash="a0030894">
@dataclass(eq=True, frozen=True)
class LatestMessage(object):
slot: Slot
root: Root
payload_present: boolean
</spec>
- name: LightClientStore#altair
sources: [] sources: []
spec: | spec: |
<spec dataclass="LightClientStore" fork="altair" hash="24725cec"> <spec dataclass="LightClientStore" fork="altair" hash="24725cec">
@@ -155,6 +180,23 @@
current_max_active_participants: uint64 current_max_active_participants: uint64
</spec> </spec>
- name: LightClientStore#capella
sources: []
spec: |
<spec dataclass="LightClientStore" fork="capella" hash="04b41062">
class LightClientStore(object):
# [Modified in Capella]
finalized_header: LightClientHeader
current_sync_committee: SyncCommittee
next_sync_committee: SyncCommittee
# [Modified in Capella]
best_valid_update: Optional[LightClientUpdate]
# [Modified in Capella]
optimistic_header: LightClientHeader
previous_max_active_participants: uint64
current_max_active_participants: uint64
</spec>
- name: NewPayloadRequest#bellatrix - name: NewPayloadRequest#bellatrix
sources: sources:
- file: beacon-chain/execution/engine_client.go - file: beacon-chain/execution/engine_client.go
@@ -191,7 +233,7 @@
execution_requests: ExecutionRequests execution_requests: ExecutionRequests
</spec> </spec>
- name: OptimisticStore - name: OptimisticStore#bellatrix
sources: [] sources: []
spec: | spec: |
<spec dataclass="OptimisticStore" fork="bellatrix" hash="a2b2182c"> <spec dataclass="OptimisticStore" fork="bellatrix" hash="a2b2182c">
@@ -246,7 +288,7 @@
parent_beacon_block_root: Root parent_beacon_block_root: Root
</spec> </spec>
- name: Store - name: Store#phase0
sources: [] sources: []
spec: | spec: |
<spec dataclass="Store" fork="phase0" hash="abe525d6"> <spec dataclass="Store" fork="phase0" hash="abe525d6">
@@ -266,3 +308,30 @@
latest_messages: Dict[ValidatorIndex, LatestMessage] = field(default_factory=dict) latest_messages: Dict[ValidatorIndex, LatestMessage] = field(default_factory=dict)
unrealized_justifications: Dict[Root, Checkpoint] = field(default_factory=dict) unrealized_justifications: Dict[Root, Checkpoint] = field(default_factory=dict)
</spec> </spec>
- name: Store#gloas
sources: []
spec: |
<spec dataclass="Store" fork="gloas" hash="4dbfec46">
class Store(object):
time: uint64
genesis_time: uint64
justified_checkpoint: Checkpoint
finalized_checkpoint: Checkpoint
unrealized_justified_checkpoint: Checkpoint
unrealized_finalized_checkpoint: Checkpoint
proposer_boost_root: Root
equivocating_indices: Set[ValidatorIndex]
blocks: Dict[Root, BeaconBlock] = field(default_factory=dict)
block_states: Dict[Root, BeaconState] = field(default_factory=dict)
block_timeliness: Dict[Root, Vector[boolean, NUM_BLOCK_TIMELINESS_DEADLINES]] = field(
default_factory=dict
)
checkpoint_states: Dict[Checkpoint, BeaconState] = field(default_factory=dict)
latest_messages: Dict[ValidatorIndex, LatestMessage] = field(default_factory=dict)
unrealized_justifications: Dict[Root, Checkpoint] = field(default_factory=dict)
# [New in Gloas:EIP7732]
execution_payload_states: Dict[Root, BeaconState] = field(default_factory=dict)
# [New in Gloas:EIP7732]
ptc_vote: Dict[Root, Vector[boolean, PTC_SIZE]] = field(default_factory=dict)
</spec>

File diff suppressed because it is too large


@@ -1,4 +1,4 @@
- name: BASE_REWARD_FACTOR - name: BASE_REWARD_FACTOR#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BaseRewardFactor\s+.*yaml:"BASE_REWARD_FACTOR" search: BaseRewardFactor\s+.*yaml:"BASE_REWARD_FACTOR"
@@ -8,7 +8,21 @@
BASE_REWARD_FACTOR: uint64 = 64 BASE_REWARD_FACTOR: uint64 = 64
</spec> </spec>
- name: BYTES_PER_LOGS_BLOOM - name: BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
sources: []
spec: |
<spec preset_var="BUILDER_PENDING_WITHDRAWALS_LIMIT" fork="gloas" hash="40b31377">
BUILDER_PENDING_WITHDRAWALS_LIMIT: uint64 = 1048576
</spec>
- name: BUILDER_REGISTRY_LIMIT#gloas
sources: []
spec: |
<spec preset_var="BUILDER_REGISTRY_LIMIT" fork="gloas" hash="e951ff73">
BUILDER_REGISTRY_LIMIT: uint64 = 1099511627776
</spec>
- name: BYTES_PER_LOGS_BLOOM#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: BytesPerLogsBloom\s+.*yaml:"BYTES_PER_LOGS_BLOOM" search: BytesPerLogsBloom\s+.*yaml:"BYTES_PER_LOGS_BLOOM"
@@ -18,7 +32,7 @@
BYTES_PER_LOGS_BLOOM: uint64 = 256 BYTES_PER_LOGS_BLOOM: uint64 = 256
</spec> </spec>
- name: CELLS_PER_EXT_BLOB - name: CELLS_PER_EXT_BLOB#fulu
sources: sources:
- file: beacon-chain/rpc/eth/config/handlers.go - file: beacon-chain/rpc/eth/config/handlers.go
search: data\["CELLS_PER_EXT_BLOB"\] search: data\["CELLS_PER_EXT_BLOB"\]
@@ -28,7 +42,7 @@
CELLS_PER_EXT_BLOB = 128 CELLS_PER_EXT_BLOB = 128
</spec> </spec>
- name: EFFECTIVE_BALANCE_INCREMENT - name: EFFECTIVE_BALANCE_INCREMENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: EffectiveBalanceIncrement\s+.*yaml:"EFFECTIVE_BALANCE_INCREMENT" search: EffectiveBalanceIncrement\s+.*yaml:"EFFECTIVE_BALANCE_INCREMENT"
@@ -38,7 +52,7 @@
EFFECTIVE_BALANCE_INCREMENT: Gwei = 1000000000 EFFECTIVE_BALANCE_INCREMENT: Gwei = 1000000000
</spec> </spec>
- name: EPOCHS_PER_ETH1_VOTING_PERIOD - name: EPOCHS_PER_ETH1_VOTING_PERIOD#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: EpochsPerEth1VotingPeriod\s+.*yaml:"EPOCHS_PER_ETH1_VOTING_PERIOD" search: EpochsPerEth1VotingPeriod\s+.*yaml:"EPOCHS_PER_ETH1_VOTING_PERIOD"
@@ -48,7 +62,7 @@
EPOCHS_PER_ETH1_VOTING_PERIOD: uint64 = 64 EPOCHS_PER_ETH1_VOTING_PERIOD: uint64 = 64
</spec> </spec>
- name: EPOCHS_PER_HISTORICAL_VECTOR - name: EPOCHS_PER_HISTORICAL_VECTOR#phase0
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: RandaoMixesLength\s*= search: RandaoMixesLength\s*=
@@ -58,7 +72,7 @@
EPOCHS_PER_HISTORICAL_VECTOR: uint64 = 65536 EPOCHS_PER_HISTORICAL_VECTOR: uint64 = 65536
</spec> </spec>
- name: EPOCHS_PER_SLASHINGS_VECTOR - name: EPOCHS_PER_SLASHINGS_VECTOR#phase0
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: SlashingsLength\s*= search: SlashingsLength\s*=
@@ -68,7 +82,7 @@
EPOCHS_PER_SLASHINGS_VECTOR: uint64 = 8192 EPOCHS_PER_SLASHINGS_VECTOR: uint64 = 8192
</spec> </spec>
- name: EPOCHS_PER_SYNC_COMMITTEE_PERIOD - name: EPOCHS_PER_SYNC_COMMITTEE_PERIOD#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: EpochsPerSyncCommitteePeriod\s+.*yaml:"EPOCHS_PER_SYNC_COMMITTEE_PERIOD" search: EpochsPerSyncCommitteePeriod\s+.*yaml:"EPOCHS_PER_SYNC_COMMITTEE_PERIOD"
@@ -78,7 +92,7 @@
EPOCHS_PER_SYNC_COMMITTEE_PERIOD: uint64 = 256 EPOCHS_PER_SYNC_COMMITTEE_PERIOD: uint64 = 256
</spec> </spec>
- name: FIELD_ELEMENTS_PER_BLOB - name: FIELD_ELEMENTS_PER_BLOB#deneb
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: FieldElementsPerBlob\s+.*yaml:"FIELD_ELEMENTS_PER_BLOB" search: FieldElementsPerBlob\s+.*yaml:"FIELD_ELEMENTS_PER_BLOB"
@@ -88,7 +102,7 @@
FIELD_ELEMENTS_PER_BLOB: uint64 = 4096 FIELD_ELEMENTS_PER_BLOB: uint64 = 4096
</spec> </spec>
- name: FIELD_ELEMENTS_PER_CELL - name: FIELD_ELEMENTS_PER_CELL#fulu
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: CellsPerBlob\s*= search: CellsPerBlob\s*=
@@ -98,7 +112,7 @@
FIELD_ELEMENTS_PER_CELL: uint64 = 64 FIELD_ELEMENTS_PER_CELL: uint64 = 64
</spec> </spec>
- name: FIELD_ELEMENTS_PER_EXT_BLOB - name: FIELD_ELEMENTS_PER_EXT_BLOB#fulu
sources: sources:
- file: proto/ssz_proto_library.bzl - file: proto/ssz_proto_library.bzl
search: mainnet\s*=\s*\{[^}]*"field_elements_per_ext_blob\.size".*[^}]*\} search: mainnet\s*=\s*\{[^}]*"field_elements_per_ext_blob\.size".*[^}]*\}
@@ -108,7 +122,7 @@
FIELD_ELEMENTS_PER_EXT_BLOB = 8192 FIELD_ELEMENTS_PER_EXT_BLOB = 8192
</spec> </spec>
- name: HISTORICAL_ROOTS_LIMIT - name: HISTORICAL_ROOTS_LIMIT#phase0
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: HistoricalRootsLength\s*= search: HistoricalRootsLength\s*=
@@ -118,7 +132,7 @@
HISTORICAL_ROOTS_LIMIT: uint64 = 16777216 HISTORICAL_ROOTS_LIMIT: uint64 = 16777216
</spec> </spec>
- name: HYSTERESIS_DOWNWARD_MULTIPLIER - name: HYSTERESIS_DOWNWARD_MULTIPLIER#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: HysteresisDownwardMultiplier\s+.*yaml:"HYSTERESIS_DOWNWARD_MULTIPLIER" search: HysteresisDownwardMultiplier\s+.*yaml:"HYSTERESIS_DOWNWARD_MULTIPLIER"
@@ -128,7 +142,7 @@
HYSTERESIS_DOWNWARD_MULTIPLIER: uint64 = 1 HYSTERESIS_DOWNWARD_MULTIPLIER: uint64 = 1
</spec> </spec>
- name: HYSTERESIS_QUOTIENT - name: HYSTERESIS_QUOTIENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: HysteresisQuotient\s+.*yaml:"HYSTERESIS_QUOTIENT" search: HysteresisQuotient\s+.*yaml:"HYSTERESIS_QUOTIENT"
@@ -138,7 +152,7 @@
HYSTERESIS_QUOTIENT: uint64 = 4 HYSTERESIS_QUOTIENT: uint64 = 4
</spec> </spec>
- name: HYSTERESIS_UPWARD_MULTIPLIER - name: HYSTERESIS_UPWARD_MULTIPLIER#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: HysteresisUpwardMultiplier\s+.*yaml:"HYSTERESIS_UPWARD_MULTIPLIER" search: HysteresisUpwardMultiplier\s+.*yaml:"HYSTERESIS_UPWARD_MULTIPLIER"
@@ -148,7 +162,7 @@
HYSTERESIS_UPWARD_MULTIPLIER: uint64 = 5 HYSTERESIS_UPWARD_MULTIPLIER: uint64 = 5
</spec> </spec>
- name: INACTIVITY_PENALTY_QUOTIENT - name: INACTIVITY_PENALTY_QUOTIENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: InactivityPenaltyQuotient\s+.*yaml:"INACTIVITY_PENALTY_QUOTIENT" search: InactivityPenaltyQuotient\s+.*yaml:"INACTIVITY_PENALTY_QUOTIENT"
@@ -158,7 +172,7 @@
INACTIVITY_PENALTY_QUOTIENT: uint64 = 67108864 INACTIVITY_PENALTY_QUOTIENT: uint64 = 67108864
</spec> </spec>
- name: INACTIVITY_PENALTY_QUOTIENT_ALTAIR - name: INACTIVITY_PENALTY_QUOTIENT_ALTAIR#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: InactivityPenaltyQuotientAltair\s+.*yaml:"INACTIVITY_PENALTY_QUOTIENT_ALTAIR" search: InactivityPenaltyQuotientAltair\s+.*yaml:"INACTIVITY_PENALTY_QUOTIENT_ALTAIR"
@@ -168,7 +182,7 @@
INACTIVITY_PENALTY_QUOTIENT_ALTAIR: uint64 = 50331648 INACTIVITY_PENALTY_QUOTIENT_ALTAIR: uint64 = 50331648
</spec> </spec>
- name: INACTIVITY_PENALTY_QUOTIENT_BELLATRIX - name: INACTIVITY_PENALTY_QUOTIENT_BELLATRIX#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: InactivityPenaltyQuotientBellatrix\s+.*yaml:"INACTIVITY_PENALTY_QUOTIENT_BELLATRIX" search: InactivityPenaltyQuotientBellatrix\s+.*yaml:"INACTIVITY_PENALTY_QUOTIENT_BELLATRIX"
@@ -178,7 +192,7 @@
INACTIVITY_PENALTY_QUOTIENT_BELLATRIX: uint64 = 16777216 INACTIVITY_PENALTY_QUOTIENT_BELLATRIX: uint64 = 16777216
</spec> </spec>
- name: KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH - name: KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH#fulu
sources: sources:
- file: proto/ssz_proto_library.bzl - file: proto/ssz_proto_library.bzl
search: mainnet\s*=\s*\{[^}]*"kzg_commitments_inclusion_proof_depth\.size":.*[^}]*\} search: mainnet\s*=\s*\{[^}]*"kzg_commitments_inclusion_proof_depth\.size":.*[^}]*\}
@@ -188,7 +202,7 @@
KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH: uint64 = 4 KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH: uint64 = 4
</spec> </spec>
- name: KZG_COMMITMENT_INCLUSION_PROOF_DEPTH - name: KZG_COMMITMENT_INCLUSION_PROOF_DEPTH#deneb
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: KzgCommitmentInclusionProofDepth\s*= search: KzgCommitmentInclusionProofDepth\s*=
@@ -198,7 +212,7 @@
KZG_COMMITMENT_INCLUSION_PROOF_DEPTH: uint64 = 17 KZG_COMMITMENT_INCLUSION_PROOF_DEPTH: uint64 = 17
</spec> </spec>
- name: MAX_ATTESTATIONS - name: MAX_ATTESTATIONS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxAttestations\s+.*yaml:"MAX_ATTESTATIONS" search: MaxAttestations\s+.*yaml:"MAX_ATTESTATIONS"
@@ -208,7 +222,7 @@
MAX_ATTESTATIONS = 128 MAX_ATTESTATIONS = 128
</spec> </spec>
- name: MAX_ATTESTATIONS_ELECTRA - name: MAX_ATTESTATIONS_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxAttestationsElectra\s+.*yaml:"MAX_ATTESTATIONS_ELECTRA" search: MaxAttestationsElectra\s+.*yaml:"MAX_ATTESTATIONS_ELECTRA"
@@ -218,7 +232,7 @@
MAX_ATTESTATIONS_ELECTRA = 8 MAX_ATTESTATIONS_ELECTRA = 8
</spec> </spec>
- name: MAX_ATTESTER_SLASHINGS - name: MAX_ATTESTER_SLASHINGS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxAttesterSlashings\s+.*yaml:"MAX_ATTESTER_SLASHINGS" search: MaxAttesterSlashings\s+.*yaml:"MAX_ATTESTER_SLASHINGS"
@@ -228,7 +242,7 @@
MAX_ATTESTER_SLASHINGS = 2 MAX_ATTESTER_SLASHINGS = 2
</spec> </spec>
- name: MAX_ATTESTER_SLASHINGS_ELECTRA - name: MAX_ATTESTER_SLASHINGS_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxAttesterSlashingsElectra\s+.*yaml:"MAX_ATTESTER_SLASHINGS_ELECTRA" search: MaxAttesterSlashingsElectra\s+.*yaml:"MAX_ATTESTER_SLASHINGS_ELECTRA"
@@ -238,7 +252,7 @@
MAX_ATTESTER_SLASHINGS_ELECTRA = 1 MAX_ATTESTER_SLASHINGS_ELECTRA = 1
</spec> </spec>
- name: MAX_BLOB_COMMITMENTS_PER_BLOCK - name: MAX_BLOB_COMMITMENTS_PER_BLOCK#deneb
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: MaxBlobCommitmentsPerBlock\s*= search: MaxBlobCommitmentsPerBlock\s*=
@@ -248,7 +262,7 @@
MAX_BLOB_COMMITMENTS_PER_BLOCK: uint64 = 4096 MAX_BLOB_COMMITMENTS_PER_BLOCK: uint64 = 4096
</spec> </spec>
- name: MAX_BLS_TO_EXECUTION_CHANGES - name: MAX_BLS_TO_EXECUTION_CHANGES#capella
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxBlsToExecutionChanges\s+.*yaml:"MAX_BLS_TO_EXECUTION_CHANGES" search: MaxBlsToExecutionChanges\s+.*yaml:"MAX_BLS_TO_EXECUTION_CHANGES"
@@ -258,7 +272,14 @@
MAX_BLS_TO_EXECUTION_CHANGES = 16 MAX_BLS_TO_EXECUTION_CHANGES = 16
</spec> </spec>
- name: MAX_BYTES_PER_TRANSACTION - name: MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas
sources: []
spec: |
<spec preset_var="MAX_BUILDERS_PER_WITHDRAWALS_SWEEP" fork="gloas" hash="1556b314">
MAX_BUILDERS_PER_WITHDRAWALS_SWEEP = 16384
</spec>
- name: MAX_BYTES_PER_TRANSACTION#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxBytesPerTransaction\s+.*yaml:"MAX_BYTES_PER_TRANSACTION" search: MaxBytesPerTransaction\s+.*yaml:"MAX_BYTES_PER_TRANSACTION"
@@ -288,7 +309,7 @@
MAX_COMMITTEES_PER_SLOT: uint64 = 64 MAX_COMMITTEES_PER_SLOT: uint64 = 64
</spec> </spec>
- name: MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD - name: MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxConsolidationsRequestsPerPayload\s+.*yaml:"MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD" search: MaxConsolidationsRequestsPerPayload\s+.*yaml:"MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD"
@@ -298,7 +319,7 @@
MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD: uint64 = 2 MAX_CONSOLIDATION_REQUESTS_PER_PAYLOAD: uint64 = 2
</spec> </spec>
- name: MAX_DEPOSITS - name: MAX_DEPOSITS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxDeposits\s+.*yaml:"MAX_DEPOSITS" search: MaxDeposits\s+.*yaml:"MAX_DEPOSITS"
@@ -308,7 +329,7 @@
MAX_DEPOSITS = 16 MAX_DEPOSITS = 16
</spec> </spec>
- name: MAX_DEPOSIT_REQUESTS_PER_PAYLOAD - name: MAX_DEPOSIT_REQUESTS_PER_PAYLOAD#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxDepositRequestsPerPayload\s+.*yaml:"MAX_DEPOSIT_REQUESTS_PER_PAYLOAD" search: MaxDepositRequestsPerPayload\s+.*yaml:"MAX_DEPOSIT_REQUESTS_PER_PAYLOAD"
@@ -318,7 +339,7 @@
MAX_DEPOSIT_REQUESTS_PER_PAYLOAD: uint64 = 8192 MAX_DEPOSIT_REQUESTS_PER_PAYLOAD: uint64 = 8192
</spec> </spec>
- name: MAX_EFFECTIVE_BALANCE - name: MAX_EFFECTIVE_BALANCE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxEffectiveBalance\s+.*yaml:"MAX_EFFECTIVE_BALANCE" search: MaxEffectiveBalance\s+.*yaml:"MAX_EFFECTIVE_BALANCE"
@@ -328,7 +349,7 @@
MAX_EFFECTIVE_BALANCE: Gwei = 32000000000 MAX_EFFECTIVE_BALANCE: Gwei = 32000000000
</spec> </spec>
- name: MAX_EFFECTIVE_BALANCE_ELECTRA - name: MAX_EFFECTIVE_BALANCE_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxEffectiveBalanceElectra\s+.*yaml:"MAX_EFFECTIVE_BALANCE_ELECTRA" search: MaxEffectiveBalanceElectra\s+.*yaml:"MAX_EFFECTIVE_BALANCE_ELECTRA"
@@ -338,7 +359,7 @@
MAX_EFFECTIVE_BALANCE_ELECTRA: Gwei = 2048000000000 MAX_EFFECTIVE_BALANCE_ELECTRA: Gwei = 2048000000000
</spec> </spec>
- name: MAX_EXTRA_DATA_BYTES - name: MAX_EXTRA_DATA_BYTES#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxExtraDataBytes\s+.*yaml:"MAX_EXTRA_DATA_BYTES" search: MaxExtraDataBytes\s+.*yaml:"MAX_EXTRA_DATA_BYTES"
@@ -348,7 +369,14 @@
MAX_EXTRA_DATA_BYTES = 32 MAX_EXTRA_DATA_BYTES = 32
</spec> </spec>
- name: MAX_PENDING_DEPOSITS_PER_EPOCH - name: MAX_PAYLOAD_ATTESTATIONS#gloas
sources: []
spec: |
<spec preset_var="MAX_PAYLOAD_ATTESTATIONS" fork="gloas" hash="fc24e7ea">
MAX_PAYLOAD_ATTESTATIONS = 4
</spec>
- name: MAX_PENDING_DEPOSITS_PER_EPOCH#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxPendingDepositsPerEpoch\s+.*yaml:"MAX_PENDING_DEPOSITS_PER_EPOCH" search: MaxPendingDepositsPerEpoch\s+.*yaml:"MAX_PENDING_DEPOSITS_PER_EPOCH"
@@ -358,7 +386,7 @@
MAX_PENDING_DEPOSITS_PER_EPOCH: uint64 = 16 MAX_PENDING_DEPOSITS_PER_EPOCH: uint64 = 16
</spec> </spec>
- name: MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP - name: MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxPendingPartialsPerWithdrawalsSweep\s+.*yaml:"MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP" search: MaxPendingPartialsPerWithdrawalsSweep\s+.*yaml:"MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP"
@@ -368,7 +396,7 @@
MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP: uint64 = 8 MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP: uint64 = 8
</spec> </spec>
- name: MAX_PROPOSER_SLASHINGS - name: MAX_PROPOSER_SLASHINGS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxProposerSlashings\s+.*yaml:"MAX_PROPOSER_SLASHINGS" search: MaxProposerSlashings\s+.*yaml:"MAX_PROPOSER_SLASHINGS"
@@ -378,7 +406,7 @@
MAX_PROPOSER_SLASHINGS = 16 MAX_PROPOSER_SLASHINGS = 16
</spec> </spec>
- name: MAX_SEED_LOOKAHEAD - name: MAX_SEED_LOOKAHEAD#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxSeedLookahead\s+.*yaml:"MAX_SEED_LOOKAHEAD" search: MaxSeedLookahead\s+.*yaml:"MAX_SEED_LOOKAHEAD"
@@ -388,7 +416,7 @@
MAX_SEED_LOOKAHEAD: uint64 = 4 MAX_SEED_LOOKAHEAD: uint64 = 4
</spec> </spec>
- name: MAX_TRANSACTIONS_PER_PAYLOAD - name: MAX_TRANSACTIONS_PER_PAYLOAD#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxTransactionsPerPayload\s+.*yaml:"MAX_TRANSACTIONS_PER_PAYLOAD" search: MaxTransactionsPerPayload\s+.*yaml:"MAX_TRANSACTIONS_PER_PAYLOAD"
@@ -418,7 +446,7 @@
MAX_VALIDATORS_PER_COMMITTEE: uint64 = 2048 MAX_VALIDATORS_PER_COMMITTEE: uint64 = 2048
</spec> </spec>
- name: MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP - name: MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP#capella
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxValidatorsPerWithdrawalsSweep\s+.*yaml:"MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP" search: MaxValidatorsPerWithdrawalsSweep\s+.*yaml:"MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP"
@@ -428,7 +456,7 @@
MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP = 16384 MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP = 16384
</spec> </spec>
- name: MAX_VOLUNTARY_EXITS - name: MAX_VOLUNTARY_EXITS#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxVoluntaryExits\s+.*yaml:"MAX_VOLUNTARY_EXITS" search: MaxVoluntaryExits\s+.*yaml:"MAX_VOLUNTARY_EXITS"
@@ -438,7 +466,7 @@
MAX_VOLUNTARY_EXITS = 16 MAX_VOLUNTARY_EXITS = 16
</spec> </spec>
- name: MAX_WITHDRAWALS_PER_PAYLOAD - name: MAX_WITHDRAWALS_PER_PAYLOAD#capella
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: MaxWithdrawalsPerPayload\s*= search: MaxWithdrawalsPerPayload\s*=
@@ -448,7 +476,7 @@
MAX_WITHDRAWALS_PER_PAYLOAD: uint64 = 16 MAX_WITHDRAWALS_PER_PAYLOAD: uint64 = 16
</spec> </spec>
- name: MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD - name: MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MaxWithdrawalRequestsPerPayload\s+.*yaml:"MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD" search: MaxWithdrawalRequestsPerPayload\s+.*yaml:"MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD"
@@ -458,7 +486,7 @@
MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD: uint64 = 16 MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD: uint64 = 16
</spec> </spec>
- name: MIN_ACTIVATION_BALANCE - name: MIN_ACTIVATION_BALANCE#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinActivationBalance\s+.*yaml:"MIN_ACTIVATION_BALANCE" search: MinActivationBalance\s+.*yaml:"MIN_ACTIVATION_BALANCE"
@@ -468,7 +496,7 @@
MIN_ACTIVATION_BALANCE: Gwei = 32000000000 MIN_ACTIVATION_BALANCE: Gwei = 32000000000
</spec> </spec>
- name: MIN_ATTESTATION_INCLUSION_DELAY - name: MIN_ATTESTATION_INCLUSION_DELAY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinAttestationInclusionDelay\s+.*yaml:"MIN_ATTESTATION_INCLUSION_DELAY" search: MinAttestationInclusionDelay\s+.*yaml:"MIN_ATTESTATION_INCLUSION_DELAY"
@@ -478,7 +506,7 @@
MIN_ATTESTATION_INCLUSION_DELAY: uint64 = 1 MIN_ATTESTATION_INCLUSION_DELAY: uint64 = 1
</spec> </spec>
- name: MIN_DEPOSIT_AMOUNT - name: MIN_DEPOSIT_AMOUNT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinDepositAmount\s+.*yaml:"MIN_DEPOSIT_AMOUNT" search: MinDepositAmount\s+.*yaml:"MIN_DEPOSIT_AMOUNT"
@@ -488,7 +516,7 @@
MIN_DEPOSIT_AMOUNT: Gwei = 1000000000 MIN_DEPOSIT_AMOUNT: Gwei = 1000000000
</spec> </spec>
- name: MIN_EPOCHS_TO_INACTIVITY_PENALTY - name: MIN_EPOCHS_TO_INACTIVITY_PENALTY#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinEpochsToInactivityPenalty\s+.*yaml:"MIN_EPOCHS_TO_INACTIVITY_PENALTY" search: MinEpochsToInactivityPenalty\s+.*yaml:"MIN_EPOCHS_TO_INACTIVITY_PENALTY"
@@ -498,7 +526,7 @@
MIN_EPOCHS_TO_INACTIVITY_PENALTY: uint64 = 4 MIN_EPOCHS_TO_INACTIVITY_PENALTY: uint64 = 4
</spec> </spec>
- name: MIN_SEED_LOOKAHEAD - name: MIN_SEED_LOOKAHEAD#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinSeedLookahead\s+.*yaml:"MIN_SEED_LOOKAHEAD" search: MinSeedLookahead\s+.*yaml:"MIN_SEED_LOOKAHEAD"
@@ -508,7 +536,7 @@
MIN_SEED_LOOKAHEAD: uint64 = 1 MIN_SEED_LOOKAHEAD: uint64 = 1
</spec> </spec>
- name: MIN_SLASHING_PENALTY_QUOTIENT - name: MIN_SLASHING_PENALTY_QUOTIENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinSlashingPenaltyQuotient\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT" search: MinSlashingPenaltyQuotient\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT"
@@ -518,7 +546,7 @@
MIN_SLASHING_PENALTY_QUOTIENT: uint64 = 128 MIN_SLASHING_PENALTY_QUOTIENT: uint64 = 128
</spec> </spec>
- name: MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR - name: MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinSlashingPenaltyQuotientAltair\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR" search: MinSlashingPenaltyQuotientAltair\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR"
@@ -528,7 +556,7 @@
MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR: uint64 = 64 MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR: uint64 = 64
</spec> </spec>
- name: MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX - name: MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinSlashingPenaltyQuotientBellatrix\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX" search: MinSlashingPenaltyQuotientBellatrix\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX"
@@ -538,7 +566,7 @@
MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX: uint64 = 32 MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX: uint64 = 32
</spec> </spec>
- name: MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA - name: MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinSlashingPenaltyQuotientElectra\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA" search: MinSlashingPenaltyQuotientElectra\s+.*yaml:"MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA"
@@ -548,7 +576,7 @@
MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA: uint64 = 4096 MIN_SLASHING_PENALTY_QUOTIENT_ELECTRA: uint64 = 4096
</spec> </spec>
- name: MIN_SYNC_COMMITTEE_PARTICIPANTS - name: MIN_SYNC_COMMITTEE_PARTICIPANTS#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: MinSyncCommitteeParticipants\s+.*yaml:"MIN_SYNC_COMMITTEE_PARTICIPANTS" search: MinSyncCommitteeParticipants\s+.*yaml:"MIN_SYNC_COMMITTEE_PARTICIPANTS"
@@ -558,7 +586,7 @@
MIN_SYNC_COMMITTEE_PARTICIPANTS = 1 MIN_SYNC_COMMITTEE_PARTICIPANTS = 1
</spec> </spec>
- name: NUMBER_OF_COLUMNS - name: NUMBER_OF_COLUMNS#fulu
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: NumberOfColumns\s*= search: NumberOfColumns\s*=
@@ -568,7 +596,7 @@
NUMBER_OF_COLUMNS: uint64 = 128 NUMBER_OF_COLUMNS: uint64 = 128
</spec> </spec>
- name: PENDING_CONSOLIDATIONS_LIMIT - name: PENDING_CONSOLIDATIONS_LIMIT#electra
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: PendingConsolidationsLimit\s*= search: PendingConsolidationsLimit\s*=
@@ -578,7 +606,7 @@
PENDING_CONSOLIDATIONS_LIMIT: uint64 = 262144 PENDING_CONSOLIDATIONS_LIMIT: uint64 = 262144
</spec> </spec>
- name: PENDING_DEPOSITS_LIMIT - name: PENDING_DEPOSITS_LIMIT#electra
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: PendingDepositsLimit\s*= search: PendingDepositsLimit\s*=
@@ -588,7 +616,7 @@
PENDING_DEPOSITS_LIMIT: uint64 = 134217728 PENDING_DEPOSITS_LIMIT: uint64 = 134217728
</spec> </spec>
- name: PENDING_PARTIAL_WITHDRAWALS_LIMIT - name: PENDING_PARTIAL_WITHDRAWALS_LIMIT#electra
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: PendingPartialWithdrawalsLimit\s*= search: PendingPartialWithdrawalsLimit\s*=
@@ -598,7 +626,7 @@
PENDING_PARTIAL_WITHDRAWALS_LIMIT: uint64 = 134217728 PENDING_PARTIAL_WITHDRAWALS_LIMIT: uint64 = 134217728
</spec> </spec>
- name: PROPORTIONAL_SLASHING_MULTIPLIER - name: PROPORTIONAL_SLASHING_MULTIPLIER#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ProportionalSlashingMultiplier\s+.*yaml:"PROPORTIONAL_SLASHING_MULTIPLIER" search: ProportionalSlashingMultiplier\s+.*yaml:"PROPORTIONAL_SLASHING_MULTIPLIER"
@@ -608,7 +636,7 @@
PROPORTIONAL_SLASHING_MULTIPLIER: uint64 = 1 PROPORTIONAL_SLASHING_MULTIPLIER: uint64 = 1
</spec> </spec>
- name: PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR - name: PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR#altair
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ProportionalSlashingMultiplierAltair\s+.*yaml:"PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR" search: ProportionalSlashingMultiplierAltair\s+.*yaml:"PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR"
@@ -618,7 +646,7 @@
PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR: uint64 = 2 PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR: uint64 = 2
</spec> </spec>
- name: PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX - name: PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX#bellatrix
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ProportionalSlashingMultiplierBellatrix\s+.*yaml:"PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX" search: ProportionalSlashingMultiplierBellatrix\s+.*yaml:"PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX"
@@ -628,7 +656,7 @@
PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX: uint64 = 3 PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX: uint64 = 3
</spec> </spec>
- name: PROPOSER_REWARD_QUOTIENT - name: PROPOSER_REWARD_QUOTIENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ProposerRewardQuotient\s+.*yaml:"PROPOSER_REWARD_QUOTIENT" search: ProposerRewardQuotient\s+.*yaml:"PROPOSER_REWARD_QUOTIENT"
@@ -638,7 +666,14 @@
PROPOSER_REWARD_QUOTIENT: uint64 = 8 PROPOSER_REWARD_QUOTIENT: uint64 = 8
</spec> </spec>
- name: SHUFFLE_ROUND_COUNT - name: PTC_SIZE#gloas
sources: []
spec: |
<spec preset_var="PTC_SIZE" fork="gloas" hash="d61c5930">
PTC_SIZE: uint64 = 512
</spec>
- name: SHUFFLE_ROUND_COUNT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: ShuffleRoundCount\s+.*yaml:"SHUFFLE_ROUND_COUNT" search: ShuffleRoundCount\s+.*yaml:"SHUFFLE_ROUND_COUNT"
@@ -648,7 +683,7 @@
SHUFFLE_ROUND_COUNT: uint64 = 90 SHUFFLE_ROUND_COUNT: uint64 = 90
</spec> </spec>
- name: SLOTS_PER_EPOCH - name: SLOTS_PER_EPOCH#phase0
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: SlotsPerEpoch\s*= search: SlotsPerEpoch\s*=
@@ -658,7 +693,7 @@
SLOTS_PER_EPOCH: uint64 = 32 SLOTS_PER_EPOCH: uint64 = 32
</spec> </spec>
- name: SLOTS_PER_HISTORICAL_ROOT - name: SLOTS_PER_HISTORICAL_ROOT#phase0
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: BlockRootsLength\s*= search: BlockRootsLength\s*=
@@ -668,7 +703,7 @@
SLOTS_PER_HISTORICAL_ROOT: uint64 = 8192 SLOTS_PER_HISTORICAL_ROOT: uint64 = 8192
</spec> </spec>
- name: SYNC_COMMITTEE_SIZE - name: SYNC_COMMITTEE_SIZE#altair
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: SyncCommitteeLength\s*= search: SyncCommitteeLength\s*=
@@ -678,7 +713,7 @@
SYNC_COMMITTEE_SIZE: uint64 = 512 SYNC_COMMITTEE_SIZE: uint64 = 512
</spec> </spec>
- name: TARGET_COMMITTEE_SIZE - name: TARGET_COMMITTEE_SIZE#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: TargetCommitteeSize\s+.*yaml:"TARGET_COMMITTEE_SIZE" search: TargetCommitteeSize\s+.*yaml:"TARGET_COMMITTEE_SIZE"
@@ -688,7 +723,7 @@
TARGET_COMMITTEE_SIZE: uint64 = 128 TARGET_COMMITTEE_SIZE: uint64 = 128
</spec> </spec>
- name: UPDATE_TIMEOUT - name: UPDATE_TIMEOUT#altair
sources: sources:
- file: beacon-chain/rpc/eth/config/handlers.go - file: beacon-chain/rpc/eth/config/handlers.go
search: data\["UPDATE_TIMEOUT"\] search: data\["UPDATE_TIMEOUT"\]
@@ -698,7 +733,7 @@
UPDATE_TIMEOUT = 8192 UPDATE_TIMEOUT = 8192
</spec> </spec>
- name: VALIDATOR_REGISTRY_LIMIT - name: VALIDATOR_REGISTRY_LIMIT#phase0
sources: sources:
- file: config/fieldparams/mainnet.go - file: config/fieldparams/mainnet.go
search: ValidatorRegistryLimit\s*= search: ValidatorRegistryLimit\s*=
@@ -708,7 +743,7 @@
VALIDATOR_REGISTRY_LIMIT: uint64 = 1099511627776 VALIDATOR_REGISTRY_LIMIT: uint64 = 1099511627776
</spec> </spec>
- name: WHISTLEBLOWER_REWARD_QUOTIENT - name: WHISTLEBLOWER_REWARD_QUOTIENT#phase0
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: WhistleBlowerRewardQuotient\s+.*yaml:"WHISTLEBLOWER_REWARD_QUOTIENT" search: WhistleBlowerRewardQuotient\s+.*yaml:"WHISTLEBLOWER_REWARD_QUOTIENT"
@@ -718,7 +753,7 @@
WHISTLEBLOWER_REWARD_QUOTIENT: uint64 = 512 WHISTLEBLOWER_REWARD_QUOTIENT: uint64 = 512
</spec> </spec>
- name: WHISTLEBLOWER_REWARD_QUOTIENT_ELECTRA - name: WHISTLEBLOWER_REWARD_QUOTIENT_ELECTRA#electra
sources: sources:
- file: config/params/config.go - file: config/params/config.go
search: WhistleBlowerRewardQuotientElectra\s+.*yaml:"WHISTLEBLOWER_REWARD_QUOTIENT_ELECTRA" search: WhistleBlowerRewardQuotientElectra\s+.*yaml:"WHISTLEBLOWER_REWARD_QUOTIENT_ELECTRA"


@@ -225,9 +225,9 @@ func (r *testRunner) testDepositsAndTx(ctx context.Context, g *errgroup.Group,
if err := helpers.ComponentsStarted(ctx, []e2etypes.ComponentRunner{r.depositor}); err != nil { if err := helpers.ComponentsStarted(ctx, []e2etypes.ComponentRunner{r.depositor}); err != nil {
return errors.Wrap(err, "testDepositsAndTx unable to run, depositor did not Start") return errors.Wrap(err, "testDepositsAndTx unable to run, depositor did not Start")
} }
go func() { go func() {
if r.config.TestDeposits { if r.config.TestDeposits {
log.Info("Running deposit tests") log.Info("Running deposit tests")
// The validators with an index < minGenesisActiveCount all have deposits already from the chain start. // The validators with an index < minGenesisActiveCount all have deposits already from the chain start.
// Skip all of those chain start validators by seeking to minGenesisActiveCount in the validator list // Skip all of those chain start validators by seeking to minGenesisActiveCount in the validator list
// for further deposit testing. // for further deposit testing.
@@ -238,12 +238,12 @@ func (r *testRunner) testDepositsAndTx(ctx context.Context, g *errgroup.Group,
r.t.Error(errors.Wrap(err, "depositor.SendAndMine failed")) r.t.Error(errors.Wrap(err, "depositor.SendAndMine failed"))
} }
} }
} }
// Only generate background transactions when relevant for the test. // Only generate background transactions when relevant for the test.
if r.config.TestDeposits || r.config.TestFeature || r.config.UseBuilder { if r.config.TestDeposits || r.config.TestFeature || r.config.UseBuilder {
r.testTxGeneration(ctx, g, keystorePath, []e2etypes.ComponentRunner{}) r.testTxGeneration(ctx, g, keystorePath, []e2etypes.ComponentRunner{})
} }
}() }()
if r.config.TestDeposits { if r.config.TestDeposits {
return depositCheckValidator.Start(ctx) return depositCheckValidator.Start(ctx)
} }


@@ -38,8 +38,8 @@ func TestEndToEnd_MinimalConfig(t *testing.T) {
r := e2eMinimal(t, cfg, r := e2eMinimal(t, cfg,
types.WithCheckpointSync(), types.WithCheckpointSync(),
types.WithEpochs(10), types.WithEpochs(10),
types.WithExitEpoch(4), // Minimum due to ShardCommitteePeriod=4 types.WithExitEpoch(4), // Minimum due to ShardCommitteePeriod=4
types.WithLargeBlobs(), // Use large blob transactions for BPO testing types.WithLargeBlobs(), // Use large blob transactions for BPO testing
) )
r.run() r.run()
} }


@@ -21,10 +21,14 @@ There are tests for mainnet and minimal config, so for each config we will add a
## Running nightly spectests ## Running nightly spectests
Since [PR 15312](https://github.com/OffchainLabs/prysm/pull/15312), Prysm has support to download "nightly" spectests from github via a starlark rule configuration by environment variable. Since [PR 15312](https://github.com/OffchainLabs/prysm/pull/15312), Prysm has support to download "nightly" spectests from github via a starlark rule configuration by environment variable.
Set `--repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly` when running spectest to download the "nightly" spectests. Set `--repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly` or `--repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly-<run_id>` when running spectest to download the "nightly" spectests.
Note: A GITHUB_TOKEN environment variable is required to be set. The github token must be a [fine grained token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-fine-grained-personal-access-token). Note: A GITHUB_TOKEN environment variable is required to be set. The github token does not need to be associated with your main account; it can be from a "burner account". And the token does not need to be a fine-grained token; it can be a classic token.
``` ```
bazel test //... --test_tag_filters=spectest --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly bazel test //... --test_tag_filters=spectest --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly
``` ```
```
bazel test //... --test_tag_filters=spectest --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly-21422848633
```
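For reference, a complete invocation could look like the sketch below. The run id is the one from the example above; the token export line is illustrative, and either a classic or a fine-grained token works.
```
# Illustrative only: export a GitHub token so the repository rule can query the GitHub API,
# then run the spectests produced by a specific nightly workflow run.
export GITHUB_TOKEN=<your-token>
bazel test //... --test_tag_filters=spectest --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly-21422848633
```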


@@ -283,16 +283,18 @@ func (mr *MockValidatorClientMockRecorder) ProposeExit(ctx, in any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ProposeExit", reflect.TypeOf((*MockValidatorClient)(nil).ProposeExit), ctx, in) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ProposeExit", reflect.TypeOf((*MockValidatorClient)(nil).ProposeExit), ctx, in)
} }
// SetHost mocks base method. // EnsureReady mocks base method.
func (m *MockValidatorClient) SetHost(host string) { func (m *MockValidatorClient) EnsureReady(ctx context.Context) bool {
m.ctrl.T.Helper() m.ctrl.T.Helper()
m.ctrl.Call(m, "SetHost", host) ret := m.ctrl.Call(m, "EnsureReady", ctx)
ret0, _ := ret[0].(bool)
return ret0
} }
// SetHost indicates an expected call of SetHost. // EnsureReady indicates an expected call of EnsureReady.
func (mr *MockValidatorClientMockRecorder) SetHost(host any) *gomock.Call { func (mr *MockValidatorClientMockRecorder) EnsureReady(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetHost", reflect.TypeOf((*MockValidatorClient)(nil).SetHost), host) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "EnsureReady", reflect.TypeOf((*MockValidatorClient)(nil).EnsureReady), ctx)
} }
// StartEventStream mocks base method. // StartEventStream mocks base method.


@@ -128,18 +128,18 @@ func (mr *MockValidatorMockRecorder) EventsChan() *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "EventsChan", reflect.TypeOf((*MockValidator)(nil).EventsChan)) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "EventsChan", reflect.TypeOf((*MockValidator)(nil).EventsChan))
} }
// FindHealthyHost mocks base method. // EnsureReady mocks base method.
func (m *MockValidator) FindHealthyHost(arg0 context.Context) bool { func (m *MockValidator) EnsureReady(arg0 context.Context) bool {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "FindHealthyHost", arg0) ret := m.ctrl.Call(m, "EnsureReady", arg0)
ret0, _ := ret[0].(bool) ret0, _ := ret[0].(bool)
return ret0 return ret0
} }
// FindHealthyHost indicates an expected call of FindHealthyHost. // EnsureReady indicates an expected call of EnsureReady.
func (mr *MockValidatorMockRecorder) FindHealthyHost(arg0 any) *gomock.Call { func (mr *MockValidatorMockRecorder) EnsureReady(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FindHealthyHost", reflect.TypeOf((*MockValidator)(nil).FindHealthyHost), arg0) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "EnsureReady", reflect.TypeOf((*MockValidator)(nil).EnsureReady), arg0)
} }
// GenesisTime mocks base method. // GenesisTime mocks base method.


@@ -1,5 +1,6 @@
# bazel build @consensus_spec_tests//:test_data # bazel build @consensus_spec_tests//:test_data
# bazel build @consensus_spec_tests//:test_data --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly # bazel build @consensus_spec_tests//:test_data --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly
# bazel build @consensus_spec_tests//:test_data --repo_env=CONSENSUS_SPEC_TESTS_VERSION=nightly-<run_id>
def _get_redirected_url(repository_ctx, url, headers): def _get_redirected_url(repository_ctx, url, headers):
if not repository_ctx.which("curl"): if not repository_ctx.which("curl"):
@@ -24,7 +25,7 @@ def _impl(repository_ctx):
version = repository_ctx.getenv("CONSENSUS_SPEC_TESTS_VERSION") or repository_ctx.attr.version version = repository_ctx.getenv("CONSENSUS_SPEC_TESTS_VERSION") or repository_ctx.attr.version
token = repository_ctx.getenv("GITHUB_TOKEN") or "" token = repository_ctx.getenv("GITHUB_TOKEN") or ""
if version == "nightly": if version == "nightly" or version.startswith("nightly-"):
print("Downloading nightly tests") print("Downloading nightly tests")
if not token: if not token:
fail("Error GITHUB_TOKEN is not set") fail("Error GITHUB_TOKEN is not set")
@@ -34,16 +35,22 @@ def _impl(repository_ctx):
"Accept": "application/vnd.github+json", "Accept": "application/vnd.github+json",
} }
repository_ctx.download( if version.startswith("nightly-"):
"https://api.github.com/repos/%s/actions/workflows/%s/runs?branch=%s&status=success&per_page=1" run_id = version.split("nightly-", 1)[1]
% (repository_ctx.attr.repo, repository_ctx.attr.workflow, repository_ctx.attr.branch), if not run_id:
headers = headers, fail("Error invalid run id")
output = "runs.json" else:
) repository_ctx.download(
"https://api.github.com/repos/%s/actions/workflows/%s/runs?branch=%s&status=success&per_page=1"
% (repository_ctx.attr.repo, repository_ctx.attr.workflow, repository_ctx.attr.branch),
headers = headers,
output = "runs.json"
)
run_id = json.decode(repository_ctx.read("runs.json"))["workflow_runs"][0]["id"] run_id = json.decode(repository_ctx.read("runs.json"))["workflow_runs"][0]["id"]
repository_ctx.delete("runs.json") repository_ctx.delete("runs.json")
print("Run id:", run_id)
repository_ctx.download( repository_ctx.download(
"https://api.github.com/repos/%s/actions/runs/%s/artifacts" "https://api.github.com/repos/%s/actions/runs/%s/artifacts"
% (repository_ctx.attr.repo, run_id), % (repository_ctx.attr.repo, run_id),
@@ -108,8 +115,8 @@ consensus_spec_tests = repository_rule(
"version": attr.string(mandatory = True), "version": attr.string(mandatory = True),
"flavors": attr.string_dict(mandatory = True), "flavors": attr.string_dict(mandatory = True),
"repo": attr.string(default = "ethereum/consensus-specs"), "repo": attr.string(default = "ethereum/consensus-specs"),
"workflow": attr.string(default = "generate_vectors.yml"), "workflow": attr.string(default = "nightly-reftests.yml"),
"branch": attr.string(default = "dev"), "branch": attr.string(default = "master"),
"release_url_template": attr.string(default = "https://github.com/ethereum/consensus-specs/releases/download/%s"), "release_url_template": attr.string(default = "https://github.com/ethereum/consensus-specs/releases/download/%s"),
}, },
) )
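As an aside, a run id for the `nightly-<run_id>` form can be looked up with the same GitHub API endpoint the repository rule queries. The following is a sketch, assuming `curl` and `jq` are installed and `GITHUB_TOKEN` is exported; it is not part of the change set itself.
```
# Sketch: list recent successful runs of the nightly-reftests.yml workflow on master
# and print their ids, any of which can be passed as nightly-<run_id>.
curl -s \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/ethereum/consensus-specs/actions/workflows/nightly-reftests.yml/runs?branch=master&status=success&per_page=5" \
  | jq '.workflow_runs[].id'
```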


@@ -25,6 +25,7 @@ go_library(
], ],
deps = [ deps = [
"//api/grpc:go_default_library", "//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//beacon-chain/core/blocks:go_default_library", "//beacon-chain/core/blocks:go_default_library",
"//cmd/validator/flags:go_default_library", "//cmd/validator/flags:go_default_library",
"//config/fieldparams:go_default_library", "//config/fieldparams:go_default_library",


@@ -3,14 +3,13 @@ package accounts
import ( import (
"context" "context"
"io" "io"
"net/http"
"os" "os"
"time" "time"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc" grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/crypto/bls" "github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/validator/accounts/wallet" "github.com/OffchainLabs/prysm/v7/validator/accounts/wallet"
beaconApi "github.com/OffchainLabs/prysm/v7/validator/client/beacon-api"
iface "github.com/OffchainLabs/prysm/v7/validator/client/iface" iface "github.com/OffchainLabs/prysm/v7/validator/client/iface"
nodeClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/node-client-factory" nodeClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/node-client-factory"
validatorClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/validator-client-factory" validatorClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/validator-client-factory"
@@ -77,22 +76,17 @@ func (acm *CLIManager) prepareBeaconClients(ctx context.Context) (*iface.Validat
} }
ctx = grpcutil.AppendHeaders(ctx, acm.grpcHeaders) ctx = grpcutil.AppendHeaders(ctx, acm.grpcHeaders)
grpcConn, err := grpc.DialContext(ctx, acm.beaconRPCProvider, acm.dialOpts...)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not dial endpoint %s", acm.beaconRPCProvider)
}
conn := validatorHelpers.NewNodeConnection(
grpcConn,
acm.beaconApiEndpoint,
validatorHelpers.WithBeaconApiTimeout(acm.beaconApiTimeout),
)
restHandler := beaconApi.NewBeaconApiRestHandler( conn, err := validatorHelpers.NewNodeConnection(
http.Client{Timeout: acm.beaconApiTimeout}, validatorHelpers.WithGRPC(ctx, acm.beaconRPCProvider, acm.dialOpts),
acm.beaconApiEndpoint, validatorHelpers.WithREST(acm.beaconApiEndpoint, rest.WithHttpTimeout(acm.beaconApiTimeout)),
) )
validatorClient := validatorClientFactory.NewValidatorClient(conn, restHandler) if err != nil {
nodeClient := nodeClientFactory.NewNodeClient(conn, restHandler) return nil, nil, err
}
validatorClient := validatorClientFactory.NewValidatorClient(conn)
nodeClient := nodeClientFactory.NewNodeClient(conn)
return &validatorClient, &nodeClient, nil return &validatorClient, &nodeClient, nil
} }


@@ -10,7 +10,6 @@ go_library(
"log.go", "log.go",
"log_helpers.go", "log_helpers.go",
"metrics.go", "metrics.go",
"multiple_endpoints_grpc_resolver.go",
"propose.go", "propose.go",
"registration.go", "registration.go",
"runner.go", "runner.go",
@@ -29,6 +28,7 @@ go_library(
"//api/client:go_default_library", "//api/client:go_default_library",
"//api/client/event:go_default_library", "//api/client/event:go_default_library",
"//api/grpc:go_default_library", "//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library", "//api/server/structs:go_default_library",
"//async:go_default_library", "//async:go_default_library",
"//async/event:go_default_library", "//async/event:go_default_library",
@@ -58,7 +58,6 @@ go_library(
"//time/slots:go_default_library", "//time/slots:go_default_library",
"//validator/accounts/iface:go_default_library", "//validator/accounts/iface:go_default_library",
"//validator/accounts/wallet:go_default_library", "//validator/accounts/wallet:go_default_library",
"//validator/client/beacon-api:go_default_library",
"//validator/client/beacon-chain-client-factory:go_default_library", "//validator/client/beacon-chain-client-factory:go_default_library",
"//validator/client/iface:go_default_library", "//validator/client/iface:go_default_library",
"//validator/client/node-client-factory:go_default_library", "//validator/client/node-client-factory:go_default_library",
@@ -86,13 +85,11 @@ go_library(
"@com_github_prysmaticlabs_go_bitfield//:go_default_library", "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library", "@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_google_golang_org_grpc_otelgrpc//:go_default_library", "@io_opentelemetry_go_contrib_instrumentation_google_golang_org_grpc_otelgrpc//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
"@io_opentelemetry_go_otel_trace//:go_default_library", "@io_opentelemetry_go_otel_trace//:go_default_library",
"@org_golang_google_grpc//:go_default_library", "@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library", "@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library", "@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library", "@org_golang_google_grpc//metadata:go_default_library",
"@org_golang_google_grpc//resolver:go_default_library",
"@org_golang_google_grpc//status:go_default_library", "@org_golang_google_grpc//status:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library", "@org_golang_google_protobuf//types/known/emptypb:go_default_library",
@@ -124,6 +121,7 @@ go_test(
], ],
embed = [":go_default_library"], embed = [":go_default_library"],
deps = [ deps = [
"//api/grpc:go_default_library",
"//api/server/structs:go_default_library", "//api/server/structs:go_default_library",
"//async/event:go_default_library", "//async/event:go_default_library",
"//beacon-chain/core/signing:go_default_library", "//beacon-chain/core/signing:go_default_library",


@@ -26,7 +26,6 @@ go_library(
"propose_exit.go", "propose_exit.go",
"prysm_beacon_chain_client.go", "prysm_beacon_chain_client.go",
"registration.go", "registration.go",
"rest_handler_client.go",
"state_validators.go", "state_validators.go",
"status.go", "status.go",
"stream_blocks.go", "stream_blocks.go",
@@ -43,6 +42,8 @@ go_library(
"//api:go_default_library", "//api:go_default_library",
"//api/apiutil:go_default_library", "//api/apiutil:go_default_library",
"//api/client/event:go_default_library", "//api/client/event:go_default_library",
"//api/fallback:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library", "//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library", "//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library", "//beacon-chain/core/signing:go_default_library",
@@ -51,6 +52,7 @@ go_library(
"//consensus-types/primitives:go_default_library", "//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library", "//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library", "//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//monitoring/tracing/trace:go_default_library", "//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library", "//network/httputil:go_default_library",
"//proto/engine/v1:go_default_library", "//proto/engine/v1:go_default_library",
@@ -111,6 +113,7 @@ go_test(
deps = [ deps = [
"//api:go_default_library", "//api:go_default_library",
"//api/apiutil:go_default_library", "//api/apiutil:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library", "//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library", "//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared/testing:go_default_library", "//beacon-chain/rpc/eth/shared/testing:go_default_library",


@@ -26,7 +26,7 @@ func (c *beaconApiValidatorClient) attestationData(
query := apiutil.BuildURL("/eth/v1/validator/attestation_data", params) query := apiutil.BuildURL("/eth/v1/validator/attestation_data", params)
produceAttestationDataResponseJson := structs.GetAttestationDataResponse{} produceAttestationDataResponseJson := structs.GetAttestationDataResponse{}
if err := c.jsonRestHandler.Get(ctx, query, &produceAttestationDataResponseJson); err != nil { if err := c.handler.Get(ctx, query, &produceAttestationDataResponseJson); err != nil {
return nil, err return nil, err
} }


@@ -28,10 +28,10 @@ func TestGetAttestationData_ValidAttestation(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
produceAttestationDataResponseJson := structs.GetAttestationDataResponse{} produceAttestationDataResponseJson := structs.GetAttestationDataResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", expectedCommitteeIndex, expectedSlot), fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", expectedCommitteeIndex, expectedSlot),
&produceAttestationDataResponseJson, &produceAttestationDataResponseJson,
@@ -56,7 +56,7 @@ func TestGetAttestationData_ValidAttestation(t *testing.T) {
}, },
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
resp, err := validatorClient.attestationData(ctx, primitives.Slot(expectedSlot), primitives.CommitteeIndex(expectedCommitteeIndex)) resp, err := validatorClient.attestationData(ctx, primitives.Slot(expectedSlot), primitives.CommitteeIndex(expectedCommitteeIndex))
assert.NoError(t, err) assert.NoError(t, err)
@@ -180,8 +180,8 @@ func TestGetAttestationData_InvalidData(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
produceAttestationDataResponseJson := structs.GetAttestationDataResponse{} produceAttestationDataResponseJson := structs.GetAttestationDataResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/validator/attestation_data?committee_index=2&slot=1", "/eth/v1/validator/attestation_data?committee_index=2&slot=1",
&produceAttestationDataResponseJson, &produceAttestationDataResponseJson,
@@ -192,7 +192,7 @@ func TestGetAttestationData_InvalidData(t *testing.T) {
testCase.generateData(), testCase.generateData(),
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.attestationData(ctx, 1, 2) _, err := validatorClient.attestationData(ctx, 1, 2)
assert.ErrorContains(t, testCase.expectedErrorMessage, err) assert.ErrorContains(t, testCase.expectedErrorMessage, err)
}) })
@@ -208,9 +208,9 @@ func TestGetAttestationData_JsonResponseError(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
produceAttestationDataResponseJson := structs.GetAttestationDataResponse{} produceAttestationDataResponseJson := structs.GetAttestationDataResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", committeeIndex, slot), fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", committeeIndex, slot),
&produceAttestationDataResponseJson, &produceAttestationDataResponseJson,
@@ -218,7 +218,7 @@ func TestGetAttestationData_JsonResponseError(t *testing.T) {
errors.New("some specific json response error"), errors.New("some specific json response error"),
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.attestationData(ctx, slot, committeeIndex) _, err := validatorClient.attestationData(ctx, slot, committeeIndex)
assert.ErrorContains(t, "some specific json response error", err) assert.ErrorContains(t, "some specific json response error", err)
} }


@@ -5,6 +5,7 @@ import (
"reflect" "reflect"
"strconv" "strconv"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs" "github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives" "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1" ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -17,13 +18,13 @@ import (
type beaconApiChainClient struct { type beaconApiChainClient struct {
fallbackClient iface.ChainClient fallbackClient iface.ChainClient
jsonRestHandler RestHandler handler rest.Handler
stateValidatorsProvider StateValidatorsProvider stateValidatorsProvider StateValidatorsProvider
} }
func (c beaconApiChainClient) headBlockHeaders(ctx context.Context) (*structs.GetBlockHeaderResponse, error) { func (c beaconApiChainClient) headBlockHeaders(ctx context.Context) (*structs.GetBlockHeaderResponse, error) {
blockHeader := structs.GetBlockHeaderResponse{} blockHeader := structs.GetBlockHeaderResponse{}
err := c.jsonRestHandler.Get(ctx, "/eth/v1/beacon/headers/head", &blockHeader) err := c.handler.Get(ctx, "/eth/v1/beacon/headers/head", &blockHeader)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -43,7 +44,7 @@ func (c beaconApiChainClient) ChainHead(ctx context.Context, _ *empty.Empty) (*e
const endpoint = "/eth/v1/beacon/states/head/finality_checkpoints" const endpoint = "/eth/v1/beacon/states/head/finality_checkpoints"
finalityCheckpoints := structs.GetFinalityCheckpointsResponse{} finalityCheckpoints := structs.GetFinalityCheckpointsResponse{}
if err := c.jsonRestHandler.Get(ctx, endpoint, &finalityCheckpoints); err != nil { if err := c.handler.Get(ctx, endpoint, &finalityCheckpoints); err != nil {
return nil, err return nil, err
} }
@@ -327,10 +328,10 @@ func (c beaconApiChainClient) ValidatorParticipation(ctx context.Context, in *et
return nil, errors.New("beaconApiChainClient.ValidatorParticipation is not implemented. To use a fallback client, pass a fallback client as the last argument of NewBeaconApiChainClientWithFallback.") return nil, errors.New("beaconApiChainClient.ValidatorParticipation is not implemented. To use a fallback client, pass a fallback client as the last argument of NewBeaconApiChainClientWithFallback.")
} }
func NewBeaconApiChainClientWithFallback(jsonRestHandler RestHandler, fallbackClient iface.ChainClient) iface.ChainClient { func NewBeaconApiChainClientWithFallback(handler rest.Handler, fallbackClient iface.ChainClient) iface.ChainClient {
return &beaconApiChainClient{ return &beaconApiChainClient{
jsonRestHandler: jsonRestHandler, handler: handler,
fallbackClient: fallbackClient, fallbackClient: fallbackClient,
stateValidatorsProvider: beaconApiStateValidatorsProvider{jsonRestHandler: jsonRestHandler}, stateValidatorsProvider: beaconApiStateValidatorsProvider{handler: handler},
} }
} }


@@ -115,12 +115,12 @@ func TestListValidators(t *testing.T) {
nil, nil,
) )
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get(gomock.Any(), blockHeaderEndpoint, gomock.Any()).Return(errors.New("bar error")) handler.EXPECT().Get(gomock.Any(), blockHeaderEndpoint, gomock.Any()).Return(errors.New("bar error"))
beaconChainClient := beaconApiChainClient{ beaconChainClient := beaconApiChainClient{
stateValidatorsProvider: stateValidatorsProvider, stateValidatorsProvider: stateValidatorsProvider,
jsonRestHandler: jsonRestHandler, handler: handler,
} }
_, err := beaconChainClient.Validators(ctx, &ethpb.ListValidatorsRequest{ _, err := beaconChainClient.Validators(ctx, &ethpb.ListValidatorsRequest{
QueryFilter: nil, QueryFilter: nil,
@@ -188,8 +188,8 @@ func TestListValidators(t *testing.T) {
nil, nil,
) )
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get(gomock.Any(), blockHeaderEndpoint, gomock.Any()).Return( handler.EXPECT().Get(gomock.Any(), blockHeaderEndpoint, gomock.Any()).Return(
nil, nil,
).SetArg( ).SetArg(
2, 2,
@@ -198,7 +198,7 @@ func TestListValidators(t *testing.T) {
beaconChainClient := beaconApiChainClient{ beaconChainClient := beaconApiChainClient{
stateValidatorsProvider: stateValidatorsProvider, stateValidatorsProvider: stateValidatorsProvider,
jsonRestHandler: jsonRestHandler, handler: handler,
} }
_, err := beaconChainClient.Validators(ctx, &ethpb.ListValidatorsRequest{ _, err := beaconChainClient.Validators(ctx, &ethpb.ListValidatorsRequest{
QueryFilter: nil, QueryFilter: nil,
@@ -740,15 +740,15 @@ func TestGetChainHead(t *testing.T) {
ctx := t.Context() ctx := t.Context()
finalityCheckpointsResponse := structs.GetFinalityCheckpointsResponse{} finalityCheckpointsResponse := structs.GetFinalityCheckpointsResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get(gomock.Any(), finalityCheckpointsEndpoint, &finalityCheckpointsResponse).Return( handler.EXPECT().Get(gomock.Any(), finalityCheckpointsEndpoint, &finalityCheckpointsResponse).Return(
testCase.finalityCheckpointsError, testCase.finalityCheckpointsError,
).SetArg( ).SetArg(
2, 2,
testCase.generateFinalityCheckpointsResponse(), testCase.generateFinalityCheckpointsResponse(),
) )
beaconChainClient := beaconApiChainClient{jsonRestHandler: jsonRestHandler} beaconChainClient := beaconApiChainClient{handler: handler}
_, err := beaconChainClient.ChainHead(ctx, &emptypb.Empty{}) _, err := beaconChainClient.ChainHead(ctx, &emptypb.Empty{})
assert.ErrorContains(t, testCase.expectedError, err) assert.ErrorContains(t, testCase.expectedError, err)
}) })
@@ -837,10 +837,10 @@ func TestGetChainHead(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
finalityCheckpointsResponse := structs.GetFinalityCheckpointsResponse{} finalityCheckpointsResponse := structs.GetFinalityCheckpointsResponse{}
jsonRestHandler.EXPECT().Get(gomock.Any(), finalityCheckpointsEndpoint, &finalityCheckpointsResponse).Return( handler.EXPECT().Get(gomock.Any(), finalityCheckpointsEndpoint, &finalityCheckpointsResponse).Return(
nil, nil,
).SetArg( ).SetArg(
2, 2,
@@ -848,14 +848,14 @@ func TestGetChainHead(t *testing.T) {
) )
headBlockHeadersResponse := structs.GetBlockHeaderResponse{} headBlockHeadersResponse := structs.GetBlockHeaderResponse{}
jsonRestHandler.EXPECT().Get(gomock.Any(), headBlockHeadersEndpoint, &headBlockHeadersResponse).Return( handler.EXPECT().Get(gomock.Any(), headBlockHeadersEndpoint, &headBlockHeadersResponse).Return(
testCase.headBlockHeadersError, testCase.headBlockHeadersError,
).SetArg( ).SetArg(
2, 2,
testCase.generateHeadBlockHeadersResponse(), testCase.generateHeadBlockHeadersResponse(),
) )
beaconChainClient := beaconApiChainClient{jsonRestHandler: jsonRestHandler} beaconChainClient := beaconApiChainClient{handler: handler}
_, err := beaconChainClient.ChainHead(ctx, &emptypb.Empty{}) _, err := beaconChainClient.ChainHead(ctx, &emptypb.Empty{})
assert.ErrorContains(t, testCase.expectedError, err) assert.ErrorContains(t, testCase.expectedError, err)
}) })
@@ -867,10 +867,10 @@ func TestGetChainHead(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
finalityCheckpointsResponse := structs.GetFinalityCheckpointsResponse{} finalityCheckpointsResponse := structs.GetFinalityCheckpointsResponse{}
jsonRestHandler.EXPECT().Get(gomock.Any(), finalityCheckpointsEndpoint, &finalityCheckpointsResponse).Return( handler.EXPECT().Get(gomock.Any(), finalityCheckpointsEndpoint, &finalityCheckpointsResponse).Return(
nil, nil,
).SetArg( ).SetArg(
2, 2,
@@ -878,7 +878,7 @@ func TestGetChainHead(t *testing.T) {
) )
headBlockHeadersResponse := structs.GetBlockHeaderResponse{} headBlockHeadersResponse := structs.GetBlockHeaderResponse{}
jsonRestHandler.EXPECT().Get(gomock.Any(), headBlockHeadersEndpoint, &headBlockHeadersResponse).Return( handler.EXPECT().Get(gomock.Any(), headBlockHeadersEndpoint, &headBlockHeadersResponse).Return(
nil, nil,
).SetArg( ).SetArg(
2, 2,
@@ -909,7 +909,7 @@ func TestGetChainHead(t *testing.T) {
HeadEpoch: slots.ToEpoch(8), HeadEpoch: slots.ToEpoch(8),
} }
beaconChainClient := beaconApiChainClient{jsonRestHandler: jsonRestHandler} beaconChainClient := beaconApiChainClient{handler: handler}
chainHead, err := beaconChainClient.ChainHead(ctx, &emptypb.Empty{}) chainHead, err := beaconChainClient.ChainHead(ctx, &emptypb.Empty{})
require.NoError(t, err) require.NoError(t, err)
assert.DeepEqual(t, expectedChainHead, chainHead) assert.DeepEqual(t, expectedChainHead, chainHead)

View File

@@ -29,7 +29,7 @@ func (c *beaconApiValidatorClient) fork(ctx context.Context) (*structs.GetStateF
stateForkResponseJson := &structs.GetStateForkResponse{} stateForkResponseJson := &structs.GetStateForkResponse{}
if err := c.jsonRestHandler.Get(ctx, endpoint, stateForkResponseJson); err != nil { if err := c.handler.Get(ctx, endpoint, stateForkResponseJson); err != nil {
return nil, err return nil, err
} }
@@ -41,7 +41,7 @@ func (c *beaconApiValidatorClient) headers(ctx context.Context) (*structs.GetBlo
blockHeadersResponseJson := &structs.GetBlockHeadersResponse{} blockHeadersResponseJson := &structs.GetBlockHeadersResponse{}
if err := c.jsonRestHandler.Get(ctx, endpoint, blockHeadersResponseJson); err != nil { if err := c.handler.Get(ctx, endpoint, blockHeadersResponseJson); err != nil {
return nil, err return nil, err
} }
@@ -59,7 +59,7 @@ func (c *beaconApiValidatorClient) liveness(ctx context.Context, epoch primitive
return nil, errors.Wrapf(err, "failed to marshal validator indexes") return nil, errors.Wrapf(err, "failed to marshal validator indexes")
} }
if err = c.jsonRestHandler.Post(ctx, url, nil, bytes.NewBuffer(marshalledJsonValidatorIndexes), livenessResponseJson); err != nil { if err = c.handler.Post(ctx, url, nil, bytes.NewBuffer(marshalledJsonValidatorIndexes), livenessResponseJson); err != nil {
return nil, err return nil, err
} }
@@ -71,7 +71,7 @@ func (c *beaconApiValidatorClient) syncing(ctx context.Context) (*structs.SyncSt
syncingResponseJson := &structs.SyncStatusResponse{} syncingResponseJson := &structs.SyncStatusResponse{}
if err := c.jsonRestHandler.Get(ctx, endpoint, syncingResponseJson); err != nil { if err := c.handler.Get(ctx, endpoint, syncingResponseJson); err != nil {
return nil, err return nil, err
} }
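
All four helpers above follow the same shape: allocate a typed response struct, call `c.handler.Get` with the endpoint, and return the decoded result. The sketch below approximates that round trip without the real rest.Handler, using a plain net/http client and a trimmed local `SyncStatusResponse` that only mirrors the fields this example needs.

```go
// Hedged sketch of the "Get endpoint, decode into typed struct" pattern.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// SyncStatusResponse is a local stand-in for structs.SyncStatusResponse.
type SyncStatusResponse struct {
	Data struct {
		HeadSlot  string `json:"head_slot"`
		IsSyncing bool   `json:"is_syncing"`
	} `json:"data"`
}

// getJSON fetches a URL and decodes the JSON body into out.
func getJSON(ctx context.Context, url string, out any) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	// A fake beacon node serving the syncing endpoint used above.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, `{"data":{"head_slot":"128","is_syncing":true}}`)
	}))
	defer srv.Close()

	var status SyncStatusResponse
	if err := getJSON(context.Background(), srv.URL+"/eth/v1/node/syncing", &status); err != nil {
		panic(err)
	}
	fmt.Println(status.Data.IsSyncing, status.Data.HeadSlot)
}
```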

View File

@@ -20,7 +20,7 @@ func TestGetFork_Nominal(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
stateForkResponseJson := structs.GetStateForkResponse{} stateForkResponseJson := structs.GetStateForkResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
expected := structs.GetStateForkResponse{ expected := structs.GetStateForkResponse{
Data: &structs.Fork{ Data: &structs.Fork{
@@ -32,7 +32,7 @@ func TestGetFork_Nominal(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
forkEndpoint, forkEndpoint,
&stateForkResponseJson, &stateForkResponseJson,
@@ -44,7 +44,7 @@ func TestGetFork_Nominal(t *testing.T) {
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
} }
fork, err := validatorClient.fork(ctx) fork, err := validatorClient.fork(ctx)
@@ -56,11 +56,11 @@ func TestGetFork_Invalid(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
forkEndpoint, forkEndpoint,
gomock.Any(), gomock.Any(),
@@ -69,7 +69,7 @@ func TestGetFork_Invalid(t *testing.T) {
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
} }
_, err := validatorClient.fork(ctx) _, err := validatorClient.fork(ctx)
@@ -83,7 +83,7 @@ func TestGetHeaders_Nominal(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
blockHeadersResponseJson := structs.GetBlockHeadersResponse{} blockHeadersResponseJson := structs.GetBlockHeadersResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
expected := structs.GetBlockHeadersResponse{ expected := structs.GetBlockHeadersResponse{
Data: []*structs.SignedBeaconBlockHeaderContainer{ Data: []*structs.SignedBeaconBlockHeaderContainer{
@@ -99,7 +99,7 @@ func TestGetHeaders_Nominal(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
headersEndpoint, headersEndpoint,
&blockHeadersResponseJson, &blockHeadersResponseJson,
@@ -111,7 +111,7 @@ func TestGetHeaders_Nominal(t *testing.T) {
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
} }
headers, err := validatorClient.headers(ctx) headers, err := validatorClient.headers(ctx)
@@ -123,11 +123,11 @@ func TestGetHeaders_Invalid(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
headersEndpoint, headersEndpoint,
gomock.Any(), gomock.Any(),
@@ -136,7 +136,7 @@ func TestGetHeaders_Invalid(t *testing.T) {
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
} }
_, err := validatorClient.headers(ctx) _, err := validatorClient.headers(ctx)
@@ -170,8 +170,8 @@ func TestGetLiveness_Nominal(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
livenessEndpoint, livenessEndpoint,
nil, nil,
@@ -184,7 +184,7 @@ func TestGetLiveness_Nominal(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
liveness, err := validatorClient.liveness(ctx, 42, indexes) liveness, err := validatorClient.liveness(ctx, 42, indexes)
require.NoError(t, err) require.NoError(t, err)
@@ -197,8 +197,8 @@ func TestGetLiveness_Invalid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
livenessEndpoint, livenessEndpoint,
nil, nil,
@@ -208,7 +208,7 @@ func TestGetLiveness_Invalid(t *testing.T) {
errors.New("custom error"), errors.New("custom error"),
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.liveness(ctx, 42, nil) _, err := validatorClient.liveness(ctx, 42, nil)
require.ErrorContains(t, "custom error", err) require.ErrorContains(t, "custom error", err)
@@ -237,7 +237,7 @@ func TestGetIsSyncing_Nominal(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
syncingResponseJson := structs.SyncStatusResponse{} syncingResponseJson := structs.SyncStatusResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
expected := structs.SyncStatusResponse{ expected := structs.SyncStatusResponse{
Data: &structs.SyncStatusResponseData{ Data: &structs.SyncStatusResponseData{
@@ -247,7 +247,7 @@ func TestGetIsSyncing_Nominal(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
syncingEndpoint, syncingEndpoint,
&syncingResponseJson, &syncingResponseJson,
@@ -259,7 +259,7 @@ func TestGetIsSyncing_Nominal(t *testing.T) {
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
} }
isSyncing, err := validatorClient.isSyncing(ctx) isSyncing, err := validatorClient.isSyncing(ctx)
@@ -274,11 +274,11 @@ func TestGetIsSyncing_Invalid(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
syncingResponseJson := structs.SyncStatusResponse{} syncingResponseJson := structs.SyncStatusResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
syncingEndpoint, syncingEndpoint,
&syncingResponseJson, &syncingResponseJson,
@@ -287,7 +287,7 @@ func TestGetIsSyncing_Invalid(t *testing.T) {
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
} }
isSyncing, err := validatorClient.isSyncing(ctx) isSyncing, err := validatorClient.isSyncing(ctx)

View File

@@ -5,6 +5,7 @@ import (
"net/http" "net/http"
"strconv" "strconv"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs" "github.com/OffchainLabs/prysm/v7/api/server/structs"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1" ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client/iface" "github.com/OffchainLabs/prysm/v7/validator/client/iface"
@@ -20,13 +21,13 @@ var (
type beaconApiNodeClient struct { type beaconApiNodeClient struct {
fallbackClient iface.NodeClient fallbackClient iface.NodeClient
jsonRestHandler RestHandler handler rest.Handler
genesisProvider GenesisProvider genesisProvider GenesisProvider
} }
func (c *beaconApiNodeClient) SyncStatus(ctx context.Context, _ *empty.Empty) (*ethpb.SyncStatus, error) { func (c *beaconApiNodeClient) SyncStatus(ctx context.Context, _ *empty.Empty) (*ethpb.SyncStatus, error) {
syncingResponse := structs.SyncStatusResponse{} syncingResponse := structs.SyncStatusResponse{}
if err := c.jsonRestHandler.Get(ctx, "/eth/v1/node/syncing", &syncingResponse); err != nil { if err := c.handler.Get(ctx, "/eth/v1/node/syncing", &syncingResponse); err != nil {
return nil, err return nil, err
} }
@@ -56,7 +57,7 @@ func (c *beaconApiNodeClient) Genesis(ctx context.Context, _ *empty.Empty) (*eth
} }
depositContractJson := structs.GetDepositContractResponse{} depositContractJson := structs.GetDepositContractResponse{}
if err = c.jsonRestHandler.Get(ctx, "/eth/v1/config/deposit_contract", &depositContractJson); err != nil { if err = c.handler.Get(ctx, "/eth/v1/config/deposit_contract", &depositContractJson); err != nil {
return nil, err return nil, err
} }
@@ -80,7 +81,7 @@ func (c *beaconApiNodeClient) Genesis(ctx context.Context, _ *empty.Empty) (*eth
func (c *beaconApiNodeClient) Version(ctx context.Context, _ *empty.Empty) (*ethpb.Version, error) { func (c *beaconApiNodeClient) Version(ctx context.Context, _ *empty.Empty) (*ethpb.Version, error) {
var versionResponse structs.GetVersionResponse var versionResponse structs.GetVersionResponse
if err := c.jsonRestHandler.Get(ctx, "/eth/v1/node/version", &versionResponse); err != nil { if err := c.handler.Get(ctx, "/eth/v1/node/version", &versionResponse); err != nil {
return nil, err return nil, err
} }
@@ -105,9 +106,9 @@ func (c *beaconApiNodeClient) Peers(ctx context.Context, in *empty.Empty) (*ethp
// IsReady returns true only if the node is fully synced (200 OK). // IsReady returns true only if the node is fully synced (200 OK).
// A 206 Partial Content response indicates the node is syncing and not ready. // A 206 Partial Content response indicates the node is syncing and not ready.
func (c *beaconApiNodeClient) IsReady(ctx context.Context) bool { func (c *beaconApiNodeClient) IsReady(ctx context.Context) bool {
statusCode, err := c.jsonRestHandler.GetStatusCode(ctx, "/eth/v1/node/health") statusCode, err := c.handler.GetStatusCode(ctx, "/eth/v1/node/health")
if err != nil { if err != nil {
log.WithError(err).Error("failed to get health of node") log.WithError(err).WithField("url", c.handler.Host()).Error("failed to get health of node")
return false return false
} }
// Only 200 OK means the node is fully synced and ready. // Only 200 OK means the node is fully synced and ready.
@@ -115,11 +116,11 @@ func (c *beaconApiNodeClient) IsReady(ctx context.Context) bool {
return statusCode == http.StatusOK return statusCode == http.StatusOK
} }
func NewNodeClientWithFallback(jsonRestHandler RestHandler, fallbackClient iface.NodeClient) iface.NodeClient { func NewNodeClientWithFallback(handler rest.Handler, fallbackClient iface.NodeClient) iface.NodeClient {
b := &beaconApiNodeClient{ b := &beaconApiNodeClient{
jsonRestHandler: jsonRestHandler, handler: handler,
fallbackClient: fallbackClient, fallbackClient: fallbackClient,
genesisProvider: &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler}, genesisProvider: &beaconApiGenesisProvider{handler: handler},
} }
return b return b
} }
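
IsReady now logs the failing host and keeps the same readiness rule: only 200 OK counts as ready, while 206 Partial Content means the node is still syncing. A stand-alone sketch of that status-code rule follows; `healthCheck` and the httptest server are illustrative, not Prysm code.

```go
// Sketch of the health-endpoint readiness rule: 200 => ready, anything else => not.
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// healthCheck hits /eth/v1/node/health and reports ready only on 200 OK.
func healthCheck(ctx context.Context, baseURL string) bool {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+"/eth/v1/node/health", nil)
	if err != nil {
		return false
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// A fake beacon node that reports "syncing" via 206 Partial Content.
	syncing := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusPartialContent)
	}))
	defer syncing.Close()

	fmt.Println(healthCheck(context.Background(), syncing.URL)) // false: 206 is not ready
}
```

Running it prints false, matching the comment above: a 206 response indicates the node is syncing and not ready.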

View File

@@ -120,10 +120,10 @@ func TestGetGenesis(t *testing.T) {
) )
depositContractJson := structs.GetDepositContractResponse{} depositContractJson := structs.GetDepositContractResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
if testCase.queriesDepositContract { if testCase.queriesDepositContract {
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/config/deposit_contract", "/eth/v1/config/deposit_contract",
&depositContractJson, &depositContractJson,
@@ -137,7 +137,7 @@ func TestGetGenesis(t *testing.T) {
nodeClient := &beaconApiNodeClient{ nodeClient := &beaconApiNodeClient{
genesisProvider: genesisProvider, genesisProvider: genesisProvider,
jsonRestHandler: jsonRestHandler, handler: handler,
} }
response, err := nodeClient.Genesis(ctx, &emptypb.Empty{}) response, err := nodeClient.Genesis(ctx, &emptypb.Empty{})
@@ -201,8 +201,8 @@ func TestGetSyncStatus(t *testing.T) {
ctx := t.Context() ctx := t.Context()
syncingResponse := structs.SyncStatusResponse{} syncingResponse := structs.SyncStatusResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
syncingEndpoint, syncingEndpoint,
&syncingResponse, &syncingResponse,
@@ -213,7 +213,7 @@ func TestGetSyncStatus(t *testing.T) {
testCase.restEndpointResponse, testCase.restEndpointResponse,
) )
nodeClient := &beaconApiNodeClient{jsonRestHandler: jsonRestHandler} nodeClient := &beaconApiNodeClient{handler: handler}
syncStatus, err := nodeClient.SyncStatus(ctx, &emptypb.Empty{}) syncStatus, err := nodeClient.SyncStatus(ctx, &emptypb.Empty{})
if testCase.expectedResponse == nil { if testCase.expectedResponse == nil {
@@ -265,8 +265,8 @@ func TestGetVersion(t *testing.T) {
ctx := t.Context() ctx := t.Context()
var versionResponse structs.GetVersionResponse var versionResponse structs.GetVersionResponse
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
versionEndpoint, versionEndpoint,
&versionResponse, &versionResponse,
@@ -277,7 +277,7 @@ func TestGetVersion(t *testing.T) {
testCase.restEndpointResponse, testCase.restEndpointResponse,
) )
nodeClient := &beaconApiNodeClient{jsonRestHandler: jsonRestHandler} nodeClient := &beaconApiNodeClient{handler: handler}
version, err := nodeClient.Version(ctx, &emptypb.Empty{}) version, err := nodeClient.Version(ctx, &emptypb.Empty{})
if testCase.expectedResponse == nil { if testCase.expectedResponse == nil {
@@ -331,13 +331,14 @@ func TestIsReady(t *testing.T) {
 			defer ctrl.Finish()
 			ctx := t.Context()
-			jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-			jsonRestHandler.EXPECT().GetStatusCode(
+			handler := mock.NewMockJsonRestHandler(ctrl)
+			handler.EXPECT().GetStatusCode(
 				gomock.Any(),
 				healthEndpoint,
 			).Return(tc.statusCode, tc.err)
-			nodeClient := &beaconApiNodeClient{jsonRestHandler: jsonRestHandler}
+			handler.EXPECT().Host().Return("http://localhost:3500").AnyTimes()
+			nodeClient := &beaconApiNodeClient{handler: handler}
 			result := nodeClient.IsReady(ctx)
 			assert.Equal(t, tc.expectedResult, result)


View File

@@ -6,6 +6,8 @@ import (
 	"time"
 	"github.com/OffchainLabs/prysm/v7/api/client/event"
+	"github.com/OffchainLabs/prysm/v7/api/fallback"
+	"github.com/OffchainLabs/prysm/v7/api/rest"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
 	"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
@@ -22,22 +24,28 @@ type beaconApiValidatorClient struct {
 	genesisProvider         GenesisProvider
 	dutiesProvider          dutiesProvider
 	stateValidatorsProvider StateValidatorsProvider
-	jsonRestHandler         RestHandler
+	restProvider            rest.RestConnectionProvider
+	handler                 rest.Handler
+	nodeClient              *beaconApiNodeClient
 	beaconBlockConverter    BeaconBlockConverter
 	prysmChainClient        iface.PrysmChainClient
 	isEventStreamRunning    bool
 }
-func NewBeaconApiValidatorClient(jsonRestHandler RestHandler, opts ...ValidatorClientOpt) iface.ValidatorClient {
+func NewBeaconApiValidatorClient(provider rest.RestConnectionProvider, opts ...ValidatorClientOpt) iface.ValidatorClient {
+	handler := provider.Handler()
+	nc := &beaconApiNodeClient{handler: handler}
 	c := &beaconApiValidatorClient{
-		genesisProvider:         &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler},
-		dutiesProvider:          beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler},
-		stateValidatorsProvider: beaconApiStateValidatorsProvider{jsonRestHandler: jsonRestHandler},
-		jsonRestHandler:         jsonRestHandler,
+		genesisProvider:         &beaconApiGenesisProvider{handler: handler},
+		dutiesProvider:          beaconApiDutiesProvider{handler: handler},
+		stateValidatorsProvider: beaconApiStateValidatorsProvider{handler: handler},
+		restProvider:            provider,
+		handler:                 handler,
+		nodeClient:              nc,
 		beaconBlockConverter:    beaconApiBeaconBlockConverter{},
 		prysmChainClient: prysmChainClient{
-			nodeClient:      &beaconApiNodeClient{jsonRestHandler: jsonRestHandler},
-			jsonRestHandler: jsonRestHandler,
+			nodeClient: nc,
+			handler:    handler,
 		},
 		isEventStreamRunning: false,
 	}
@@ -279,8 +287,8 @@ func (c *beaconApiValidatorClient) WaitForChainStart(ctx context.Context, _ *emp
 }
 func (c *beaconApiValidatorClient) StartEventStream(ctx context.Context, topics []string, eventsChannel chan<- *event.Event) {
-	client := &http.Client{} // event stream should not be subject to the same settings as other api calls, so we won't use c.jsonRestHandler.HttpClient()
-	eventStream, err := event.NewEventStream(ctx, client, c.jsonRestHandler.Host(), topics)
+	client := &http.Client{} // event stream should not be subject to the same settings as other api calls
+	eventStream, err := event.NewEventStream(ctx, client, c.handler.Host(), topics)
 	if err != nil {
 		eventsChannel <- &event.Event{
 			EventType: event.EventError,
@@ -328,9 +336,9 @@ func wrapInMetrics[Resp any](action string, f func() (Resp, error)) (Resp, error
 }
 func (c *beaconApiValidatorClient) Host() string {
-	return c.jsonRestHandler.Host()
+	return c.handler.Host()
 }
-func (c *beaconApiValidatorClient) SetHost(host string) {
-	c.jsonRestHandler.SetHost(host)
+func (c *beaconApiValidatorClient) EnsureReady(ctx context.Context) bool {
+	return fallback.EnsureReady(ctx, c.restProvider, c.nodeClient)
 }
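
SetHost is gone from the validator client; host selection now sits behind rest.RestConnectionProvider and fallback.EnsureReady. The exact semantics of that helper are not visible in this diff, so the sketch below only illustrates the general shape such a readiness/failover loop could take; `provider`, `readinessChecker`, and `ensureReady` are hypothetical stand-ins, not the fallback package's API.

```go
// Hypothetical sketch of a "check the active host, rotate on failure" loop.
package main

import (
	"context"
	"fmt"
)

// provider hands out the currently selected host.
type provider struct {
	hosts   []string
	current int
}

func (p *provider) Host() string { return p.hosts[p.current] }
func (p *provider) rotate()      { p.current = (p.current + 1) % len(p.hosts) }

// readinessChecker reports whether a given host is synced and usable.
type readinessChecker func(ctx context.Context, host string) bool

// ensureReady returns true once the provider points at a ready host,
// trying each configured host at most once.
func ensureReady(ctx context.Context, p *provider, isReady readinessChecker) bool {
	for range p.hosts {
		if isReady(ctx, p.Host()) {
			return true
		}
		p.rotate()
	}
	return false
}

func main() {
	p := &provider{hosts: []string{"http://localhost:3500", "http://localhost:3501"}}
	ready := ensureReady(context.Background(), p, func(_ context.Context, host string) bool {
		return host == "http://localhost:3501" // pretend only the second node is synced
	})
	fmt.Println(ready, p.Host())
}
```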

View File

@@ -31,9 +31,9 @@ func TestBeaconApiValidatorClient_GetAttestationDataValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
produceAttestationDataResponseJson := structs.GetAttestationDataResponse{} produceAttestationDataResponseJson := structs.GetAttestationDataResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", committeeIndex, slot), fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", committeeIndex, slot),
&produceAttestationDataResponseJson, &produceAttestationDataResponseJson,
@@ -44,7 +44,7 @@ func TestBeaconApiValidatorClient_GetAttestationDataValid(t *testing.T) {
generateValidAttestation(uint64(slot), uint64(committeeIndex)), generateValidAttestation(uint64(slot), uint64(committeeIndex)),
).Times(2) ).Times(2)
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
expectedResp, expectedErr := validatorClient.attestationData(ctx, slot, committeeIndex) expectedResp, expectedErr := validatorClient.attestationData(ctx, slot, committeeIndex)
resp, err := validatorClient.AttestationData( resp, err := validatorClient.AttestationData(
@@ -65,9 +65,9 @@ func TestBeaconApiValidatorClient_GetAttestationDataError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
produceAttestationDataResponseJson := structs.GetAttestationDataResponse{} produceAttestationDataResponseJson := structs.GetAttestationDataResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", committeeIndex, slot), fmt.Sprintf("/eth/v1/validator/attestation_data?committee_index=%d&slot=%d", committeeIndex, slot),
&produceAttestationDataResponseJson, &produceAttestationDataResponseJson,
@@ -78,7 +78,7 @@ func TestBeaconApiValidatorClient_GetAttestationDataError(t *testing.T) {
generateValidAttestation(uint64(slot), uint64(committeeIndex)), generateValidAttestation(uint64(slot), uint64(committeeIndex)),
).Times(2) ).Times(2)
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
expectedResp, expectedErr := validatorClient.attestationData(ctx, slot, committeeIndex) expectedResp, expectedErr := validatorClient.attestationData(ctx, slot, committeeIndex)
resp, err := validatorClient.AttestationData( resp, err := validatorClient.AttestationData(
@@ -139,8 +139,8 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
@@ -149,7 +149,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockValid(t *testing.T) {
nil, nil, nil, nil, nil, nil,
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
expectedResp, expectedErr := validatorClient.proposeBeaconBlock( expectedResp, expectedErr := validatorClient.proposeBeaconBlock(
ctx, ctx,
&ethpb.GenericSignedBeaconBlock{ &ethpb.GenericSignedBeaconBlock{
@@ -166,8 +166,8 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockError_ThenPass(t *testing.T)
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
@@ -179,7 +179,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockError_ThenPass(t *testing.T)
}, },
).Times(1) ).Times(1)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
@@ -189,7 +189,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockError_ThenPass(t *testing.T)
nil, nil,
).Times(1) ).Times(1)
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
expectedResp, expectedErr := validatorClient.proposeBeaconBlock( expectedResp, expectedErr := validatorClient.proposeBeaconBlock(
ctx, ctx,
&ethpb.GenericSignedBeaconBlock{ &ethpb.GenericSignedBeaconBlock{
@@ -308,10 +308,10 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockAllTypes(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
if !tt.wantErr { if !tt.wantErr {
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
tt.expectedPath, tt.expectedPath,
gomock.Any(), gomock.Any(),
@@ -319,7 +319,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockAllTypes(t *testing.T) {
).Return(nil, nil, nil).Times(1) ).Return(nil, nil, nil).Times(1)
} }
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
resp, err := validatorClient.proposeBeaconBlock(ctx, tt.block) resp, err := validatorClient.proposeBeaconBlock(ctx, tt.block)
if tt.wantErr { if tt.wantErr {
@@ -366,9 +366,9 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockHTTPErrors(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
@@ -377,7 +377,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockHTTPErrors(t *testing.T) {
if tt.expectJSON { if tt.expectJSON {
// When SSZ fails, it falls back to JSON // When SSZ fails, it falls back to JSON
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
@@ -386,7 +386,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockHTTPErrors(t *testing.T) {
).Return(tt.sszError).Times(1) ).Return(tt.sszError).Times(1)
} }
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
_, err := validatorClient.proposeBeaconBlock( _, err := validatorClient.proposeBeaconBlock(
ctx, ctx,
&ethpb.GenericSignedBeaconBlock{ &ethpb.GenericSignedBeaconBlock{
@@ -507,10 +507,10 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockJSONFallback(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
// SSZ call fails with 406 to trigger JSON fallback // SSZ call fails with 406 to trigger JSON fallback
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
tt.expectedPath, tt.expectedPath,
gomock.Any(), gomock.Any(),
@@ -521,7 +521,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockJSONFallback(t *testing.T) {
}).Times(1) }).Times(1)
// JSON fallback // JSON fallback
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
tt.expectedPath, tt.expectedPath,
gomock.Any(), gomock.Any(),
@@ -529,7 +529,7 @@ func TestBeaconApiValidatorClient_ProposeBeaconBlockJSONFallback(t *testing.T) {
gomock.Any(), gomock.Any(),
).Return(tt.jsonError).Times(1) ).Return(tt.jsonError).Times(1)
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := beaconApiValidatorClient{handler: handler}
resp, err := validatorClient.proposeBeaconBlock(ctx, tt.block) resp, err := validatorClient.proposeBeaconBlock(ctx, tt.block)
if tt.wantErr { if tt.wantErr {
@@ -547,29 +547,12 @@ func TestBeaconApiValidatorClient_Host(t *testing.T) {
 	ctrl := gomock.NewController(t)
 	defer ctrl.Finish()
-	hosts := []string{"http://localhost:8080", "http://localhost:8081"}
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().SetHost(
-		hosts[0],
-	).Times(1)
-	jsonRestHandler.EXPECT().Host().Return(
-		hosts[0],
-	).Times(1)
-	validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler}
-	validatorClient.SetHost(hosts[0])
+	handler := mock.NewMockJsonRestHandler(ctrl)
+	handler.EXPECT().Host().Return("http://localhost:8080").Times(1)
+	validatorClient := beaconApiValidatorClient{handler: handler}
 	host := validatorClient.Host()
-	require.Equal(t, hosts[0], host)
-	jsonRestHandler.EXPECT().SetHost(
-		hosts[1],
-	).Times(1)
-	jsonRestHandler.EXPECT().Host().Return(
-		hosts[1],
-	).Times(1)
-	validatorClient.SetHost(hosts[1])
-	host = validatorClient.Host()
-	require.Equal(t, hosts[1], host)
+	require.Equal(t, "http://localhost:8080", host)
 }
 // Helper functions for generating test blocks for newer consensus versions

View File

@@ -20,7 +20,7 @@ func (c *beaconApiValidatorClient) aggregatedSelection(ctx context.Context, sele
} }
var resp aggregatedSelectionResponse var resp aggregatedSelectionResponse
err = c.jsonRestHandler.Post(ctx, "/eth/v1/validator/beacon_committee_selections", nil, bytes.NewBuffer(body), &resp) err = c.handler.Post(ctx, "/eth/v1/validator/beacon_committee_selections", nil, bytes.NewBuffer(body), &resp)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "error calling post endpoint") return nil, errors.Wrap(err, "error calling post endpoint")
} }
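
aggregatedSelection marshals the selections, posts them to the beacon committee selections endpoint, and decodes the JSON reply. The sketch below mirrors that flow against a fake post function; `postFunc`'s parameter list is an assumption modeled on the (ctx, endpoint, headers, body, response) call order seen above, and `selection` is a trimmed stand-in for the real request type.

```go
// Sketch of "marshal request, Post it, decode typed response".
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
)

// postFunc stands in for the Post call shape used above.
type postFunc func(ctx context.Context, endpoint string, headers map[string]string, body *bytes.Buffer, resp any) error

// selection is a trimmed, illustrative request/response item.
type selection struct {
	ValidatorIndex string `json:"validator_index"`
	Slot           string `json:"slot"`
}

func aggregatedSelection(ctx context.Context, post postFunc, in []selection) ([]selection, error) {
	body, err := json.Marshal(in)
	if err != nil {
		return nil, err
	}
	var resp struct {
		Data []selection `json:"data"`
	}
	if err := post(ctx, "/eth/v1/validator/beacon_committee_selections", nil, bytes.NewBuffer(body), &resp); err != nil {
		return nil, fmt.Errorf("error calling post endpoint: %w", err)
	}
	return resp.Data, nil
}

func main() {
	// A fake post that echoes the request back as the response data.
	echo := func(_ context.Context, _ string, _ map[string]string, body *bytes.Buffer, resp any) error {
		return json.Unmarshal([]byte(`{"data":`+body.String()+`}`), resp)
	}
	out, err := aggregatedSelection(context.Background(), echo, []selection{{ValidatorIndex: "7", Slot: "42"}})
	fmt.Println(out, err)
}
```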

View File

@@ -89,13 +89,13 @@ func TestGetAggregatedSelections(t *testing.T) {
for _, test := range testcases { for _, test := range testcases {
t.Run(test.name, func(t *testing.T) { t.Run(test.name, func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
reqBody, err := json.Marshal(test.req) reqBody, err := json.Marshal(test.req)
require.NoError(t, err) require.NoError(t, err)
ctx := t.Context() ctx := t.Context()
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
"/eth/v1/validator/beacon_committee_selections", "/eth/v1/validator/beacon_committee_selections",
nil, nil,
@@ -108,7 +108,7 @@ func TestGetAggregatedSelections(t *testing.T) {
test.endpointError, test.endpointError,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
res, err := validatorClient.AggregatedSelections(ctx, test.req) res, err := validatorClient.AggregatedSelections(ctx, test.req)
if test.expectedErrorMessage != "" { if test.expectedErrorMessage != "" {
require.ErrorContains(t, test.expectedErrorMessage, err) require.ErrorContains(t, test.expectedErrorMessage, err)

View File

@@ -288,12 +288,12 @@ func TestCheckDoppelGanger_Nominal(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
if testCase.getSyncingOutput != nil { if testCase.getSyncingOutput != nil {
syncingResponseJson := structs.SyncStatusResponse{} syncingResponseJson := structs.SyncStatusResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
syncingEndpoint, syncingEndpoint,
&syncingResponseJson, &syncingResponseJson,
@@ -308,7 +308,7 @@ func TestCheckDoppelGanger_Nominal(t *testing.T) {
if testCase.getForkOutput != nil { if testCase.getForkOutput != nil {
stateForkResponseJson := structs.GetStateForkResponse{} stateForkResponseJson := structs.GetStateForkResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
forkEndpoint, forkEndpoint,
&stateForkResponseJson, &stateForkResponseJson,
@@ -323,7 +323,7 @@ func TestCheckDoppelGanger_Nominal(t *testing.T) {
if testCase.getHeadersOutput != nil { if testCase.getHeadersOutput != nil {
blockHeadersResponseJson := structs.GetBlockHeadersResponse{} blockHeadersResponseJson := structs.GetBlockHeadersResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
headersEndpoint, headersEndpoint,
&blockHeadersResponseJson, &blockHeadersResponseJson,
@@ -342,7 +342,7 @@ func TestCheckDoppelGanger_Nominal(t *testing.T) {
marshalledIndexes, err := json.Marshal(iface.inputStringIndexes) marshalledIndexes, err := json.Marshal(iface.inputStringIndexes)
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
iface.inputUrl, iface.inputUrl,
nil, nil,
@@ -372,7 +372,7 @@ func TestCheckDoppelGanger_Nominal(t *testing.T) {
} }
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
stateValidatorsProvider: stateValidatorsProvider, stateValidatorsProvider: stateValidatorsProvider,
} }
@@ -722,12 +722,12 @@ func TestCheckDoppelGanger_Errors(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
if testCase.getSyncingOutput != nil { if testCase.getSyncingOutput != nil {
syncingResponseJson := structs.SyncStatusResponse{} syncingResponseJson := structs.SyncStatusResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
syncingEndpoint, syncingEndpoint,
&syncingResponseJson, &syncingResponseJson,
@@ -742,7 +742,7 @@ func TestCheckDoppelGanger_Errors(t *testing.T) {
if testCase.getForkOutput != nil { if testCase.getForkOutput != nil {
stateForkResponseJson := structs.GetStateForkResponse{} stateForkResponseJson := structs.GetStateForkResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
forkEndpoint, forkEndpoint,
&stateForkResponseJson, &stateForkResponseJson,
@@ -757,7 +757,7 @@ func TestCheckDoppelGanger_Errors(t *testing.T) {
if testCase.getHeadersOutput != nil { if testCase.getHeadersOutput != nil {
blockHeadersResponseJson := structs.GetBlockHeadersResponse{} blockHeadersResponseJson := structs.GetBlockHeadersResponse{}
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
headersEndpoint, headersEndpoint,
&blockHeadersResponseJson, &blockHeadersResponseJson,
@@ -790,7 +790,7 @@ func TestCheckDoppelGanger_Errors(t *testing.T) {
marshalledIndexes, err := json.Marshal(iface.inputStringIndexes) marshalledIndexes, err := json.Marshal(iface.inputStringIndexes)
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
iface.inputUrl, iface.inputUrl,
nil, nil,
@@ -806,7 +806,7 @@ func TestCheckDoppelGanger_Errors(t *testing.T) {
} }
validatorClient := beaconApiValidatorClient{ validatorClient := beaconApiValidatorClient{
jsonRestHandler: jsonRestHandler, handler: handler,
stateValidatorsProvider: stateValidatorsProvider, stateValidatorsProvider: stateValidatorsProvider,
} }

View File

@@ -9,6 +9,7 @@ import (
"strconv" "strconv"
"github.com/OffchainLabs/prysm/v7/api/apiutil" "github.com/OffchainLabs/prysm/v7/api/apiutil"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs" "github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/config/params" "github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives" "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -27,7 +28,7 @@ type dutiesProvider interface {
} }
type beaconApiDutiesProvider struct { type beaconApiDutiesProvider struct {
jsonRestHandler RestHandler handler rest.Handler
} }
type attesterDuty struct { type attesterDuty struct {
@@ -65,12 +66,13 @@ func (c *beaconApiValidatorClient) duties(ctx context.Context, in *ethpb.DutiesR
 	}()
 	nextEpochDuties := &ethpb.ValidatorDutiesContainer{}
-	if err := c.dutiesForEpoch(ctx, nextEpochDuties, in.Epoch+1, vals, fetchSyncDuties); err != nil {
-		return nil, errors.Wrapf(err, "failed to get duties for next epoch `%d`", in.Epoch+1)
-	}
-	if err = <-errCh; err != nil {
-		return nil, err
+	nextEpochErr := c.dutiesForEpoch(ctx, nextEpochDuties, in.Epoch+1, vals, fetchSyncDuties)
+	if currEpochErr := <-errCh; currEpochErr != nil {
+		return nil, currEpochErr
+	}
+	if nextEpochErr != nil {
+		return nil, errors.Wrapf(nextEpochErr, "failed to get duties for next epoch `%d`", in.Epoch+1)
 	}
 	return &ethpb.ValidatorDutiesContainer{
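
The rewritten flow fetches the current epoch concurrently and the next epoch inline, and a current-epoch failure is now reported even when the next-epoch call also fails. A minimal sketch of that ordering follows, with `fetchDuties` as an illustrative stand-in for dutiesForEpoch.

```go
// Sketch of the error ordering: current-epoch errors win over next-epoch errors.
package main

import (
	"context"
	"errors"
	"fmt"
)

// fetchDuties pretends the next-epoch endpoint is unavailable.
func fetchDuties(ctx context.Context, epoch uint64) error {
	if epoch == 11 {
		return errors.New("next-epoch endpoint unavailable")
	}
	return nil
}

func duties(ctx context.Context, epoch uint64) error {
	errCh := make(chan error, 1)
	go func() { errCh <- fetchDuties(ctx, epoch) }() // current epoch, concurrent

	nextEpochErr := fetchDuties(ctx, epoch+1) // next epoch, inline

	if currEpochErr := <-errCh; currEpochErr != nil {
		return currEpochErr // current-epoch failures are reported first
	}
	if nextEpochErr != nil {
		return fmt.Errorf("failed to get duties for next epoch `%d`: %w", epoch+1, nextEpochErr)
	}
	return nil
}

func main() {
	fmt.Println(duties(context.Background(), 10)) // wrapped next-epoch error
}
```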
@@ -278,7 +280,7 @@ func (c beaconApiDutiesProvider) Committees(ctx context.Context, epoch primitive
committeesRequest := apiutil.BuildURL("/eth/v1/beacon/states/head/committees", committeeParams) committeesRequest := apiutil.BuildURL("/eth/v1/beacon/states/head/committees", committeeParams)
var stateCommittees structs.GetCommitteesResponse var stateCommittees structs.GetCommitteesResponse
if err := c.jsonRestHandler.Get(ctx, committeesRequest, &stateCommittees); err != nil { if err := c.handler.Get(ctx, committeesRequest, &stateCommittees); err != nil {
return nil, err return nil, err
} }
@@ -308,7 +310,7 @@ func (c beaconApiDutiesProvider) AttesterDuties(ctx context.Context, epoch primi
} }
attesterDuties := &structs.GetAttesterDutiesResponse{} attesterDuties := &structs.GetAttesterDutiesResponse{}
if err = c.jsonRestHandler.Post( if err = c.handler.Post(
ctx, ctx,
fmt.Sprintf("/eth/v1/validator/duties/attester/%d", epoch), fmt.Sprintf("/eth/v1/validator/duties/attester/%d", epoch),
nil, nil,
@@ -330,7 +332,7 @@ func (c beaconApiDutiesProvider) AttesterDuties(ctx context.Context, epoch primi
// ProposerDuties retrieves the proposer duties for the given epoch // ProposerDuties retrieves the proposer duties for the given epoch
func (c beaconApiDutiesProvider) ProposerDuties(ctx context.Context, epoch primitives.Epoch) (*structs.GetProposerDutiesResponse, error) { func (c beaconApiDutiesProvider) ProposerDuties(ctx context.Context, epoch primitives.Epoch) (*structs.GetProposerDutiesResponse, error) {
proposerDuties := &structs.GetProposerDutiesResponse{} proposerDuties := &structs.GetProposerDutiesResponse{}
if err := c.jsonRestHandler.Get(ctx, fmt.Sprintf("/eth/v1/validator/duties/proposer/%d", epoch), proposerDuties); err != nil { if err := c.handler.Get(ctx, fmt.Sprintf("/eth/v1/validator/duties/proposer/%d", epoch), proposerDuties); err != nil {
return nil, err return nil, err
} }
@@ -360,7 +362,7 @@ func (c beaconApiDutiesProvider) SyncDuties(ctx context.Context, epoch primitive
} }
syncDuties := structs.GetSyncCommitteeDutiesResponse{} syncDuties := structs.GetSyncCommitteeDutiesResponse{}
if err = c.jsonRestHandler.Post( if err = c.handler.Post(
ctx, ctx,
fmt.Sprintf("/eth/v1/validator/duties/sync/%d", epoch), fmt.Sprintf("/eth/v1/validator/duties/sync/%d", epoch),
nil, nil,

View File

@@ -60,8 +60,8 @@ func TestGetAttesterDuties_Valid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
validatorIndices := []primitives.ValidatorIndex{2, 9} validatorIndices := []primitives.ValidatorIndex{2, 9}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getAttesterDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getAttesterDutiesTestEndpoint, epoch),
nil, nil,
@@ -74,7 +74,7 @@ func TestGetAttesterDuties_Valid(t *testing.T) {
expectedAttesterDuties, expectedAttesterDuties,
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
attesterDuties, err := dutiesProvider.AttesterDuties(ctx, epoch, validatorIndices) attesterDuties, err := dutiesProvider.AttesterDuties(ctx, epoch, validatorIndices)
require.NoError(t, err) require.NoError(t, err)
assert.DeepEqual(t, expectedAttesterDuties.Data, attesterDuties.Data) assert.DeepEqual(t, expectedAttesterDuties.Data, attesterDuties.Data)
@@ -88,8 +88,8 @@ func TestGetAttesterDuties_HttpError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getAttesterDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getAttesterDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -99,7 +99,7 @@ func TestGetAttesterDuties_HttpError(t *testing.T) {
errors.New("foo error"), errors.New("foo error"),
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.AttesterDuties(ctx, epoch, nil) _, err := dutiesProvider.AttesterDuties(ctx, epoch, nil)
assert.ErrorContains(t, "foo error", err) assert.ErrorContains(t, "foo error", err)
} }
@@ -112,8 +112,8 @@ func TestGetAttesterDuties_NilAttesterDuty(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getAttesterDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getAttesterDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -128,7 +128,7 @@ func TestGetAttesterDuties_NilAttesterDuty(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.AttesterDuties(ctx, epoch, nil) _, err := dutiesProvider.AttesterDuties(ctx, epoch, nil)
assert.ErrorContains(t, "attester duty at index `0` is nil", err) assert.ErrorContains(t, "attester duty at index `0` is nil", err)
} }
@@ -156,8 +156,8 @@ func TestGetProposerDuties_Valid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch),
&structs.GetProposerDutiesResponse{}, &structs.GetProposerDutiesResponse{},
@@ -168,7 +168,7 @@ func TestGetProposerDuties_Valid(t *testing.T) {
expectedProposerDuties, expectedProposerDuties,
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
proposerDuties, err := dutiesProvider.ProposerDuties(ctx, epoch) proposerDuties, err := dutiesProvider.ProposerDuties(ctx, epoch)
require.NoError(t, err) require.NoError(t, err)
assert.DeepEqual(t, expectedProposerDuties.Data, proposerDuties.Data) assert.DeepEqual(t, expectedProposerDuties.Data, proposerDuties.Data)
@@ -182,8 +182,8 @@ func TestGetProposerDuties_HttpError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -191,7 +191,7 @@ func TestGetProposerDuties_HttpError(t *testing.T) {
errors.New("foo error"), errors.New("foo error"),
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.ProposerDuties(ctx, epoch) _, err := dutiesProvider.ProposerDuties(ctx, epoch)
assert.ErrorContains(t, "foo error", err) assert.ErrorContains(t, "foo error", err)
} }
@@ -204,8 +204,8 @@ func TestGetProposerDuties_NilData(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -218,7 +218,7 @@ func TestGetProposerDuties_NilData(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.ProposerDuties(ctx, epoch) _, err := dutiesProvider.ProposerDuties(ctx, epoch)
assert.ErrorContains(t, "proposer duties data is nil", err) assert.ErrorContains(t, "proposer duties data is nil", err)
} }
@@ -231,8 +231,8 @@ func TestGetProposerDuties_NilProposerDuty(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getProposerDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -245,7 +245,7 @@ func TestGetProposerDuties_NilProposerDuty(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.ProposerDuties(ctx, epoch) _, err := dutiesProvider.ProposerDuties(ctx, epoch)
assert.ErrorContains(t, "proposer duty at index `0` is nil", err) assert.ErrorContains(t, "proposer duty at index `0` is nil", err)
} }
@@ -284,8 +284,8 @@ func TestGetSyncDuties_Valid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
validatorIndices := []primitives.ValidatorIndex{2, 6} validatorIndices := []primitives.ValidatorIndex{2, 6}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch),
nil, nil,
@@ -298,7 +298,7 @@ func TestGetSyncDuties_Valid(t *testing.T) {
expectedSyncDuties, expectedSyncDuties,
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
syncDuties, err := dutiesProvider.SyncDuties(ctx, epoch, validatorIndices) syncDuties, err := dutiesProvider.SyncDuties(ctx, epoch, validatorIndices)
require.NoError(t, err) require.NoError(t, err)
assert.DeepEqual(t, expectedSyncDuties.Data, syncDuties) assert.DeepEqual(t, expectedSyncDuties.Data, syncDuties)
@@ -312,8 +312,8 @@ func TestGetSyncDuties_HttpError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -323,7 +323,7 @@ func TestGetSyncDuties_HttpError(t *testing.T) {
errors.New("foo error"), errors.New("foo error"),
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.SyncDuties(ctx, epoch, nil) _, err := dutiesProvider.SyncDuties(ctx, epoch, nil)
assert.ErrorContains(t, "foo error", err) assert.ErrorContains(t, "foo error", err)
} }
@@ -336,8 +336,8 @@ func TestGetSyncDuties_NilData(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -352,7 +352,7 @@ func TestGetSyncDuties_NilData(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.SyncDuties(ctx, epoch, nil) _, err := dutiesProvider.SyncDuties(ctx, epoch, nil)
assert.ErrorContains(t, "sync duties data is nil", err) assert.ErrorContains(t, "sync duties data is nil", err)
} }
@@ -365,8 +365,8 @@ func TestGetSyncDuties_NilSyncDuty(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch), fmt.Sprintf("%s/%d", getSyncDutiesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -381,7 +381,7 @@ func TestGetSyncDuties_NilSyncDuty(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.SyncDuties(ctx, epoch, nil) _, err := dutiesProvider.SyncDuties(ctx, epoch, nil)
assert.ErrorContains(t, "sync duty at index `0` is nil", err) assert.ErrorContains(t, "sync duty at index `0` is nil", err)
} }
@@ -415,8 +415,8 @@ func TestGetCommittees_Valid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch), fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch),
&structs.GetCommitteesResponse{}, &structs.GetCommitteesResponse{},
@@ -427,7 +427,7 @@ func TestGetCommittees_Valid(t *testing.T) {
expectedCommittees, expectedCommittees,
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
committees, err := dutiesProvider.Committees(ctx, epoch) committees, err := dutiesProvider.Committees(ctx, epoch)
require.NoError(t, err) require.NoError(t, err)
assert.DeepEqual(t, expectedCommittees.Data, committees) assert.DeepEqual(t, expectedCommittees.Data, committees)
@@ -441,8 +441,8 @@ func TestGetCommittees_HttpError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch), fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -450,7 +450,7 @@ func TestGetCommittees_HttpError(t *testing.T) {
errors.New("foo error"), errors.New("foo error"),
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.Committees(ctx, epoch) _, err := dutiesProvider.Committees(ctx, epoch)
assert.ErrorContains(t, "foo error", err) assert.ErrorContains(t, "foo error", err)
} }
@@ -463,8 +463,8 @@ func TestGetCommittees_NilData(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch), fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -477,7 +477,7 @@ func TestGetCommittees_NilData(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.Committees(ctx, epoch) _, err := dutiesProvider.Committees(ctx, epoch)
assert.ErrorContains(t, "state committees data is nil", err) assert.ErrorContains(t, "state committees data is nil", err)
} }
@@ -490,8 +490,8 @@ func TestGetCommittees_NilCommittee(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch), fmt.Sprintf("%s?epoch=%d", getCommitteesTestEndpoint, epoch),
gomock.Any(), gomock.Any(),
@@ -504,7 +504,7 @@ func TestGetCommittees_NilCommittee(t *testing.T) {
}, },
).Times(1) ).Times(1)
dutiesProvider := &beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler} dutiesProvider := &beaconApiDutiesProvider{handler: handler}
_, err := dutiesProvider.Committees(ctx, epoch) _, err := dutiesProvider.Committees(ctx, epoch)
assert.ErrorContains(t, "committee at index `0` is nil", err) assert.ErrorContains(t, "committee at index `0` is nil", err)
} }


@@ -7,6 +7,7 @@ import (
     "sync"
     "time"
+    "github.com/OffchainLabs/prysm/v7/api/rest"
     "github.com/OffchainLabs/prysm/v7/api/server/structs"
     fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
     "github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -20,9 +21,9 @@ type GenesisProvider interface {
 }
 type beaconApiGenesisProvider struct {
-    jsonRestHandler RestHandler
+    handler rest.Handler
     genesis *structs.Genesis
     once sync.Once
 }
 func (c *beaconApiValidatorClient) waitForChainStart(ctx context.Context) (*ethpb.ChainStartResponse, error) {
@@ -68,7 +69,7 @@ func (c *beaconApiGenesisProvider) Genesis(ctx context.Context) (*structs.Genesi
     genesisJson := &structs.GetGenesisResponse{}
     var doErr error
     c.once.Do(func() {
-        if err := c.jsonRestHandler.Get(ctx, "/eth/v1/beacon/genesis", genesisJson); err != nil {
+        if err := c.handler.Get(ctx, "/eth/v1/beacon/genesis", genesisJson); err != nil {
             doErr = err
             return
         }
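The hunk above funnels the genesis fetch through `c.once.Do` and surfaces the request error via `doErr`, so a successful response is cached while a failure stays visible to the caller; the tests in the next file expect a failed call to be retryable. A minimal sketch of that call-once-but-retry-on-error shape follows. The reset of the `sync.Once` is an assumption made for illustration only, since the code outside this hunk is not shown here.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// cachedFetcher memoizes a successful fetch but lets a caller retry after a failure.
// Illustrative only; the real provider caches *structs.Genesis behind the same shape.
type cachedFetcher struct {
	once  sync.Once
	value string
}

func (c *cachedFetcher) get(fetch func() (string, error)) (string, error) {
	var doErr error
	c.once.Do(func() {
		v, err := fetch()
		if err != nil {
			doErr = err
			return
		}
		c.value = v
	})
	if doErr != nil {
		// Assumption for this sketch: reset the Once so a later call retries the endpoint.
		c.once = sync.Once{}
		return "", doErr
	}
	return c.value, nil
}

func main() {
	f := &cachedFetcher{}
	calls := 0
	fetch := func() (string, error) {
		calls++
		if calls == 1 {
			return "", errors.New("foo")
		}
		return "genesis", nil
	}
	_, err := f.get(fetch)         // first call fails and nothing is cached
	v, _ := f.get(fetch)           // second call hits the endpoint again and caches
	v2, _ := f.get(fetch)          // third call is served from the cache
	fmt.Println(err, v, v2, calls) // foo genesis genesis 2
}
```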


@@ -18,8 +18,8 @@ func TestGetGenesis_ValidGenesis(t *testing.T) {
ctx := t.Context() ctx := t.Context()
genesisResponseJson := structs.GetGenesisResponse{} genesisResponseJson := structs.GetGenesisResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/beacon/genesis", "/eth/v1/beacon/genesis",
&genesisResponseJson, &genesisResponseJson,
@@ -35,7 +35,7 @@ func TestGetGenesis_ValidGenesis(t *testing.T) {
}, },
).Times(1) ).Times(1)
genesisProvider := &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler} genesisProvider := &beaconApiGenesisProvider{handler: handler}
resp, err := genesisProvider.Genesis(ctx) resp, err := genesisProvider.Genesis(ctx)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, resp) require.NotNil(t, resp)
@@ -50,8 +50,8 @@ func TestGetGenesis_NilData(t *testing.T) {
ctx := t.Context() ctx := t.Context()
genesisResponseJson := structs.GetGenesisResponse{} genesisResponseJson := structs.GetGenesisResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/beacon/genesis", "/eth/v1/beacon/genesis",
&genesisResponseJson, &genesisResponseJson,
@@ -62,7 +62,7 @@ func TestGetGenesis_NilData(t *testing.T) {
structs.GetGenesisResponse{Data: nil}, structs.GetGenesisResponse{Data: nil},
).Times(1) ).Times(1)
genesisProvider := &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler} genesisProvider := &beaconApiGenesisProvider{handler: handler}
_, err := genesisProvider.Genesis(ctx) _, err := genesisProvider.Genesis(ctx)
assert.ErrorContains(t, "genesis data is nil", err) assert.ErrorContains(t, "genesis data is nil", err)
} }
@@ -74,8 +74,8 @@ func TestGetGenesis_EndpointCalledOnlyOnce(t *testing.T) {
ctx := t.Context() ctx := t.Context()
genesisResponseJson := structs.GetGenesisResponse{} genesisResponseJson := structs.GetGenesisResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/beacon/genesis", "/eth/v1/beacon/genesis",
&genesisResponseJson, &genesisResponseJson,
@@ -91,7 +91,7 @@ func TestGetGenesis_EndpointCalledOnlyOnce(t *testing.T) {
}, },
).Times(1) ).Times(1)
genesisProvider := &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler} genesisProvider := &beaconApiGenesisProvider{handler: handler}
_, err := genesisProvider.Genesis(ctx) _, err := genesisProvider.Genesis(ctx)
assert.NoError(t, err) assert.NoError(t, err)
resp, err := genesisProvider.Genesis(ctx) resp, err := genesisProvider.Genesis(ctx)
@@ -108,15 +108,15 @@ func TestGetGenesis_EndpointCanBeCalledAgainAfterError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
genesisResponseJson := structs.GetGenesisResponse{} genesisResponseJson := structs.GetGenesisResponse{}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/beacon/genesis", "/eth/v1/beacon/genesis",
&genesisResponseJson, &genesisResponseJson,
).Return( ).Return(
errors.New("foo"), errors.New("foo"),
).Times(1) ).Times(1)
jsonRestHandler.EXPECT().Get( handler.EXPECT().Get(
gomock.Any(), gomock.Any(),
"/eth/v1/beacon/genesis", "/eth/v1/beacon/genesis",
&genesisResponseJson, &genesisResponseJson,
@@ -132,7 +132,7 @@ func TestGetGenesis_EndpointCanBeCalledAgainAfterError(t *testing.T) {
}, },
).Times(1) ).Times(1)
genesisProvider := &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler} genesisProvider := &beaconApiGenesisProvider{handler: handler}
_, err := genesisProvider.Genesis(ctx) _, err := genesisProvider.Genesis(ctx)
require.ErrorContains(t, "foo", err) require.ErrorContains(t, "foo", err)
resp, err := genesisProvider.Genesis(ctx) resp, err := genesisProvider.Genesis(ctx)


@@ -26,7 +26,7 @@ func (c *beaconApiValidatorClient) beaconBlock(ctx context.Context, slot primiti
         queryParams.Add("graffiti", hexutil.Encode(graffiti))
     }
     queryUrl := apiutil.BuildURL(fmt.Sprintf("/eth/v3/validator/blocks/%d", slot), queryParams)
-    data, header, err := c.jsonRestHandler.GetSSZ(ctx, queryUrl)
+    data, header, err := c.handler.GetSSZ(ctx, queryUrl)
     if err != nil {
         return nil, err
     }
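This hunk only swaps the handler used for the fetch, but it is worth noting that `GetSSZ` returns the body together with the response headers, and those headers choose the parse path exercised by the tests later in this diff. The sketch below is a hedged illustration of that routing; it assumes the standard beacon API header names behind `api.VersionHeader` and `api.ExecutionPayloadBlindedHeader` plus the octet-stream media type, and it is not the actual code that sits between these hunks.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// routeBlockResponse is a hypothetical illustration of how the response headers
// could steer SSZ-vs-JSON parsing and decoder selection.
func routeBlockResponse(header http.Header, body []byte) (string, error) {
	ver := header.Get("Eth-Consensus-Version")
	if header.Get("Content-Type") == "application/octet-stream" {
		isBlinded, err := strconv.ParseBool(header.Get("Eth-Execution-Payload-Blinded"))
		if err != nil {
			return "", err // e.g. strconv.ParseBool: parsing "invalid": invalid syntax
		}
		return fmt.Sprintf("SSZ-decode %q (blinded=%t, %d bytes)", ver, isBlinded, len(body)), nil
	}
	// JSON responses carry the version and blinded flag inside the ProduceBlockV3Response envelope.
	return "JSON-decode via the response envelope", nil
}

func main() {
	h := http.Header{}
	h.Set("Content-Type", "application/octet-stream")
	h.Set("Eth-Consensus-Version", "fulu")
	h.Set("Eth-Execution-Payload-Blinded", "true")
	out, err := routeBlockResponse(h, []byte{0x01, 0x02})
	fmt.Println(out, err) // SSZ-decode "fulu" (blinded=true, 2 bytes) <nil>
}
```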
@@ -55,114 +55,153 @@ func (c *beaconApiValidatorClient) beaconBlock(ctx context.Context, slot primiti
} }
} }
// sszBlockCodec defines SSZ unmarshalers for a fork's block and blinded block types.
type sszBlockCodec struct {
unmarshalBlock func([]byte) (*ethpb.GenericBeaconBlock, error)
unmarshalBlinded func([]byte) (*ethpb.GenericBeaconBlock, error) // nil for Phase0/Altair
}
type sszCodecEntry struct {
minVersion int
codec sszBlockCodec
}
// sszCodecs is ordered descending by version so that unknown future versions
// fall through to the latest known fork (matching the original if-cascade).
var sszCodecs = []sszCodecEntry{
{
minVersion: version.Fulu,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlockContentsFulu{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Fulu{Fulu: block}}, nil
},
unmarshalBlinded: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
blindedBlock := &ethpb.BlindedBeaconBlockFulu{}
if err := blindedBlock.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedFulu{BlindedFulu: blindedBlock}, IsBlinded: true}, nil
},
},
},
{
minVersion: version.Electra,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlockContentsElectra{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Electra{Electra: block}}, nil
},
unmarshalBlinded: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
blindedBlock := &ethpb.BlindedBeaconBlockElectra{}
if err := blindedBlock.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedElectra{BlindedElectra: blindedBlock}, IsBlinded: true}, nil
},
},
},
{
minVersion: version.Deneb,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlockContentsDeneb{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Deneb{Deneb: block}}, nil
},
unmarshalBlinded: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
blindedBlock := &ethpb.BlindedBeaconBlockDeneb{}
if err := blindedBlock.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedDeneb{BlindedDeneb: blindedBlock}, IsBlinded: true}, nil
},
},
},
{
minVersion: version.Capella,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlockCapella{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Capella{Capella: block}}, nil
},
unmarshalBlinded: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
blindedBlock := &ethpb.BlindedBeaconBlockCapella{}
if err := blindedBlock.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedCapella{BlindedCapella: blindedBlock}, IsBlinded: true}, nil
},
},
},
{
minVersion: version.Bellatrix,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlockBellatrix{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Bellatrix{Bellatrix: block}}, nil
},
unmarshalBlinded: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
blindedBlock := &ethpb.BlindedBeaconBlockBellatrix{}
if err := blindedBlock.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedBellatrix{BlindedBellatrix: blindedBlock}, IsBlinded: true}, nil
},
},
},
{
minVersion: version.Altair,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlockAltair{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Altair{Altair: block}}, nil
},
},
},
{
minVersion: version.Phase0,
codec: sszBlockCodec{
unmarshalBlock: func(data []byte) (*ethpb.GenericBeaconBlock, error) {
block := &ethpb.BeaconBlock{}
if err := block.UnmarshalSSZ(data); err != nil {
return nil, err
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Phase0{Phase0: block}}, nil
},
},
},
}
 func processBlockSSZResponse(ver int, data []byte, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if ver >= version.Fulu {
-        return processBlockSSZResponseFulu(data, isBlinded)
-    }
-    if ver >= version.Electra {
-        return processBlockSSZResponseElectra(data, isBlinded)
-    }
-    if ver >= version.Deneb {
-        return processBlockSSZResponseDeneb(data, isBlinded)
-    }
-    if ver >= version.Capella {
-        return processBlockSSZResponseCapella(data, isBlinded)
-    }
-    if ver >= version.Bellatrix {
-        return processBlockSSZResponseBellatrix(data, isBlinded)
-    }
-    if ver >= version.Altair {
-        block := &ethpb.BeaconBlockAltair{}
-        if err := block.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Altair{Altair: block}}, nil
-    }
-    if ver >= version.Phase0 {
-        block := &ethpb.BeaconBlock{}
-        if err := block.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Phase0{Phase0: block}}, nil
-    }
+    for _, entry := range sszCodecs {
+        if ver >= entry.minVersion {
+            if isBlinded && entry.codec.unmarshalBlinded != nil {
+                return entry.codec.unmarshalBlinded(data)
+            }
+            return entry.codec.unmarshalBlock(data)
+        }
+    }
     return nil, fmt.Errorf("unsupported block version %s", version.String(ver))
 }
-func processBlockSSZResponseFulu(data []byte, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        blindedBlock := &ethpb.BlindedBeaconBlockFulu{}
-        if err := blindedBlock.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedFulu{BlindedFulu: blindedBlock}, IsBlinded: true}, nil
-    }
-    block := &ethpb.BeaconBlockContentsFulu{}
-    if err := block.UnmarshalSSZ(data); err != nil {
-        return nil, err
-    }
-    return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Fulu{Fulu: block}}, nil
-}
-func processBlockSSZResponseElectra(data []byte, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        blindedBlock := &ethpb.BlindedBeaconBlockElectra{}
-        if err := blindedBlock.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedElectra{BlindedElectra: blindedBlock}, IsBlinded: true}, nil
-    }
-    block := &ethpb.BeaconBlockContentsElectra{}
-    if err := block.UnmarshalSSZ(data); err != nil {
-        return nil, err
-    }
-    return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Electra{Electra: block}}, nil
-}
-func processBlockSSZResponseDeneb(data []byte, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        blindedBlock := &ethpb.BlindedBeaconBlockDeneb{}
-        if err := blindedBlock.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedDeneb{BlindedDeneb: blindedBlock}, IsBlinded: true}, nil
-    }
-    block := &ethpb.BeaconBlockContentsDeneb{}
-    if err := block.UnmarshalSSZ(data); err != nil {
-        return nil, err
-    }
-    return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Deneb{Deneb: block}}, nil
-}
-func processBlockSSZResponseCapella(data []byte, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        blindedBlock := &ethpb.BlindedBeaconBlockCapella{}
-        if err := blindedBlock.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedCapella{BlindedCapella: blindedBlock}, IsBlinded: true}, nil
-    }
-    block := &ethpb.BeaconBlockCapella{}
-    if err := block.UnmarshalSSZ(data); err != nil {
-        return nil, err
-    }
-    return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Capella{Capella: block}}, nil
-}
-func processBlockSSZResponseBellatrix(data []byte, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        blindedBlock := &ethpb.BlindedBeaconBlockBellatrix{}
-        if err := blindedBlock.UnmarshalSSZ(data); err != nil {
-            return nil, err
-        }
-        return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedBellatrix{BlindedBellatrix: blindedBlock}, IsBlinded: true}, nil
-    }
-    block := &ethpb.BeaconBlockBellatrix{}
-    if err := block.UnmarshalSSZ(data); err != nil {
-        return nil, err
-    }
-    return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Bellatrix{Bellatrix: block}}, nil
-}
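The per-fork helpers deleted above are what the `sszCodecs` table replaces. Because the table is scanned top to bottom and ordered by descending `minVersion`, the first entry at or below the requested version wins, so an unknown future fork resolves to the newest listed codec, exactly like the old `>=` cascade it mirrors. A self-contained sketch of that lookup rule, using placeholder fork numbers instead of the real `version` constants:

```go
package main

import (
	"errors"
	"fmt"
)

// Placeholder fork numbers standing in for the runtime/version constants.
const (
	phase0    = 0
	altair    = 1
	bellatrix = 2
	fulu      = 6
)

type codecEntry struct {
	minVersion int
	name       string
}

// Ordered descending, mirroring sszCodecs: newest fork first.
var codecs = []codecEntry{
	{fulu, "fulu"},
	{bellatrix, "bellatrix"},
	{altair, "altair"},
	{phase0, "phase0"},
}

// pick returns the newest codec whose minVersion is at or below ver.
func pick(ver int) (string, error) {
	for _, e := range codecs {
		if ver >= e.minVersion {
			return e.name, nil
		}
	}
	return "", errors.New("unsupported block version")
}

func main() {
	fmt.Println(pick(altair))   // altair
	fmt.Println(pick(fulu + 1)) // fulu, since an unknown future fork falls through to the newest entry
	fmt.Println(pick(-1))       // unsupported block version
}
```

Running it prints `altair`, then `fulu` for the future version, then the unsupported-version error, which is the same fallthrough behavior the new TestSSZCodecs_OrderAndCoverage test pins down further below.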
func convertBlockToGeneric(decoder *json.Decoder, dest ethpb.GenericConverter, version string, isBlinded bool) (*ethpb.GenericBeaconBlock, error) { func convertBlockToGeneric(decoder *json.Decoder, dest ethpb.GenericConverter, version string, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
typeName := version typeName := version
if isBlinded { if isBlinded {
@@ -180,69 +219,52 @@ func convertBlockToGeneric(decoder *json.Decoder, dest ethpb.GenericConverter, v
return genericBlock, nil return genericBlock, nil
} }
// jsonBlockTypes defines factory functions for creating block and blinded block structs for JSON decoding.
type jsonBlockTypes struct {
newBlock func() ethpb.GenericConverter
newBlinded func() ethpb.GenericConverter // nil for Phase0/Altair
}
var jsonBlockFactories = map[string]jsonBlockTypes{
version.String(version.Phase0): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlock{} },
},
version.String(version.Altair): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlockAltair{} },
},
version.String(version.Bellatrix): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlockBellatrix{} },
newBlinded: func() ethpb.GenericConverter { return &structs.BlindedBeaconBlockBellatrix{} },
},
version.String(version.Capella): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlockCapella{} },
newBlinded: func() ethpb.GenericConverter { return &structs.BlindedBeaconBlockCapella{} },
},
version.String(version.Deneb): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlockContentsDeneb{} },
newBlinded: func() ethpb.GenericConverter { return &structs.BlindedBeaconBlockDeneb{} },
},
version.String(version.Electra): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlockContentsElectra{} },
newBlinded: func() ethpb.GenericConverter { return &structs.BlindedBeaconBlockElectra{} },
},
version.String(version.Fulu): {
newBlock: func() ethpb.GenericConverter { return &structs.BeaconBlockContentsFulu{} },
newBlinded: func() ethpb.GenericConverter { return &structs.BlindedBeaconBlockFulu{} },
},
}
 func processBlockJSONResponse(ver string, isBlinded bool, decoder *json.Decoder) (*ethpb.GenericBeaconBlock, error) {
     if decoder == nil {
         return nil, errors.New("no produce block json decoder found")
     }
-    switch ver {
-    case version.String(version.Phase0):
-        return convertBlockToGeneric(decoder, &structs.BeaconBlock{}, version.String(version.Phase0), false)
-    case version.String(version.Altair):
-        return convertBlockToGeneric(decoder, &structs.BeaconBlockAltair{}, "altair", false)
-    case version.String(version.Bellatrix):
-        return processBellatrixBlock(decoder, isBlinded)
-    case version.String(version.Capella):
-        return processCapellaBlock(decoder, isBlinded)
-    case version.String(version.Deneb):
-        return processDenebBlock(decoder, isBlinded)
-    case version.String(version.Electra):
-        return processElectraBlock(decoder, isBlinded)
-    case version.String(version.Fulu):
-        return processFuluBlock(decoder, isBlinded)
-    default:
-        return nil, errors.Errorf("unsupported consensus version `%s`", ver)
-    }
-}
-func processBellatrixBlock(decoder *json.Decoder, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        return convertBlockToGeneric(decoder, &structs.BlindedBeaconBlockBellatrix{}, "bellatrix", true)
-    }
-    return convertBlockToGeneric(decoder, &structs.BeaconBlockBellatrix{}, "bellatrix", false)
-}
-func processCapellaBlock(decoder *json.Decoder, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        return convertBlockToGeneric(decoder, &structs.BlindedBeaconBlockCapella{}, "capella", true)
-    }
-    return convertBlockToGeneric(decoder, &structs.BeaconBlockCapella{}, "capella", false)
-}
-func processDenebBlock(decoder *json.Decoder, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        return convertBlockToGeneric(decoder, &structs.BlindedBeaconBlockDeneb{}, "deneb", true)
-    }
-    return convertBlockToGeneric(decoder, &structs.BeaconBlockContentsDeneb{}, "deneb", false)
-}
-func processElectraBlock(decoder *json.Decoder, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        return convertBlockToGeneric(decoder, &structs.BlindedBeaconBlockElectra{}, "electra", true)
-    }
-    return convertBlockToGeneric(decoder, &structs.BeaconBlockContentsElectra{}, "electra", false)
-}
-func processFuluBlock(decoder *json.Decoder, isBlinded bool) (*ethpb.GenericBeaconBlock, error) {
-    if isBlinded {
-        return convertBlockToGeneric(decoder, &structs.BlindedBeaconBlockFulu{}, "fulu", true)
-    }
-    return convertBlockToGeneric(decoder, &structs.BeaconBlockContentsFulu{}, "fulu", false)
-}
+    factory, ok := jsonBlockFactories[ver]
+    if !ok {
+        return nil, errors.Errorf("unsupported consensus version `%s`", ver)
+    }
+    if isBlinded && factory.newBlinded != nil {
+        return convertBlockToGeneric(decoder, factory.newBlinded(), ver, true)
+    }
+    return convertBlockToGeneric(decoder, factory.newBlock(), ver, false)
+}
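On the JSON side the same refactor lands as a single map lookup: an unknown version string fails fast, and forks whose `newBlinded` factory is nil (Phase0 and Altair) fall back to the regular block type even when the blinded flag is set, mirroring the old switch. A compact sketch of that dispatch using stand-in factories rather than the real `structs` types:

```go
package main

import (
	"errors"
	"fmt"
)

// blockTypes mirrors jsonBlockTypes: factories keyed by consensus version string.
type blockTypes struct {
	newBlock   func() string
	newBlinded func() string // nil where the fork has no blinded block
}

var factories = map[string]blockTypes{
	"phase0": {newBlock: func() string { return "BeaconBlock" }},
	"bellatrix": {
		newBlock:   func() string { return "BeaconBlockBellatrix" },
		newBlinded: func() string { return "BlindedBeaconBlockBellatrix" },
	},
}

func dispatch(ver string, isBlinded bool) (string, error) {
	f, ok := factories[ver]
	if !ok {
		return "", errors.New("unsupported consensus version `" + ver + "`")
	}
	if isBlinded && f.newBlinded != nil {
		return f.newBlinded(), nil
	}
	return f.newBlock(), nil
}

func main() {
	fmt.Println(dispatch("bellatrix", true)) // BlindedBeaconBlockBellatrix
	fmt.Println(dispatch("phase0", true))    // BeaconBlock, since phase0 has no blinded variant
	fmt.Println(dispatch("unknown", false))  // unsupported consensus version `unknown`
}
```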


@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v7/api/server/structs" "github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives" "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1" ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/assert" "github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require" "github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/validator/client/beacon-api/mock" "github.com/OffchainLabs/prysm/v7/validator/client/beacon-api/mock"
@@ -25,8 +26,8 @@ func TestGetBeaconBlock_RequestFailed(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
gomock.Any(), gomock.Any(),
).Return( ).Return(
@@ -35,7 +36,7 @@ func TestGetBeaconBlock_RequestFailed(t *testing.T) {
errors.New("foo error"), errors.New("foo error"),
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.beaconBlock(ctx, 1, []byte{1}, []byte{2}) _, err := validatorClient.beaconBlock(ctx, 1, []byte{1}, []byte{2})
assert.ErrorContains(t, "foo error", err) assert.ErrorContains(t, "foo error", err)
} }
@@ -149,8 +150,8 @@ func TestGetBeaconBlock_Error(t *testing.T) {
b, err := json.Marshal(resp) b, err := json.Marshal(resp)
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
gomock.Any(), gomock.Any(),
).Return( ).Return(
@@ -159,7 +160,7 @@ func TestGetBeaconBlock_Error(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err = validatorClient.beaconBlock(ctx, 1, []byte{1}, []byte{2}) _, err = validatorClient.beaconBlock(ctx, 1, []byte{1}, []byte{2})
assert.ErrorContains(t, testCase.expectedErrorMessage, err) assert.ErrorContains(t, testCase.expectedErrorMessage, err)
}) })
@@ -185,8 +186,8 @@ func TestGetBeaconBlock_Phase0Valid(t *testing.T) {
Data: bytes, Data: bytes,
}) })
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -195,7 +196,7 @@ func TestGetBeaconBlock_Phase0Valid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -208,6 +209,25 @@ func TestGetBeaconBlock_Phase0Valid(t *testing.T) {
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock) assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
} }
func TestSSZCodecs_OrderAndCoverage(t *testing.T) {
versions := version.All()
require.NotEmpty(t, versions)
expected := make([]int, 0, len(versions))
for i := len(versions) - 1; i >= 0; i-- {
expected = append(expected, versions[i])
}
require.Equal(t, len(expected), len(sszCodecs))
for i, entry := range sszCodecs {
assert.Equal(t, expected[i], entry.minVersion, "sszCodecs[%d] has wrong fork order", i)
if i > 0 {
require.Equal(t, true, entry.minVersion < sszCodecs[i-1].minVersion, "sszCodecs not strictly descending at index %d", i)
}
}
}
// Add SSZ test cases below this line // Add SSZ test cases below this line
func TestGetBeaconBlock_SSZ_BellatrixValid(t *testing.T) { func TestGetBeaconBlock_SSZ_BellatrixValid(t *testing.T) {
@@ -224,8 +244,8 @@ func TestGetBeaconBlock_SSZ_BellatrixValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -238,7 +258,7 @@ func TestGetBeaconBlock_SSZ_BellatrixValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -266,8 +286,8 @@ func TestGetBeaconBlock_SSZ_BlindedBellatrixValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -280,7 +300,7 @@ func TestGetBeaconBlock_SSZ_BlindedBellatrixValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -308,8 +328,8 @@ func TestGetBeaconBlock_SSZ_CapellaValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -322,7 +342,7 @@ func TestGetBeaconBlock_SSZ_CapellaValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -350,8 +370,8 @@ func TestGetBeaconBlock_SSZ_BlindedCapellaValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -364,7 +384,7 @@ func TestGetBeaconBlock_SSZ_BlindedCapellaValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -392,8 +412,8 @@ func TestGetBeaconBlock_SSZ_DenebValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -406,7 +426,7 @@ func TestGetBeaconBlock_SSZ_DenebValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -434,8 +454,8 @@ func TestGetBeaconBlock_SSZ_BlindedDenebValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -448,7 +468,7 @@ func TestGetBeaconBlock_SSZ_BlindedDenebValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -476,8 +496,8 @@ func TestGetBeaconBlock_SSZ_ElectraValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -490,7 +510,7 @@ func TestGetBeaconBlock_SSZ_ElectraValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -518,8 +538,8 @@ func TestGetBeaconBlock_SSZ_BlindedElectraValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -532,7 +552,7 @@ func TestGetBeaconBlock_SSZ_BlindedElectraValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -546,6 +566,90 @@ func TestGetBeaconBlock_SSZ_BlindedElectraValid(t *testing.T) {
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock) assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
} }
func TestGetBeaconBlock_SSZ_FuluValid(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
proto := testhelpers.GenerateProtoFuluBeaconBlockContents()
bytes, err := proto.MarshalSSZ()
require.NoError(t, err)
const slot = primitives.Slot(1)
randaoReveal := []byte{2}
graffiti := []byte{3}
ctx := t.Context()
handler := mock.NewMockHandler(ctrl)
handler.EXPECT().GetSSZ(
gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return(
bytes,
http.Header{
"Content-Type": []string{api.OctetStreamMediaType},
api.VersionHeader: []string{"fulu"},
api.ExecutionPayloadBlindedHeader: []string{"false"},
},
nil,
).Times(1)
validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err)
expectedBeaconBlock := &ethpb.GenericBeaconBlock{
Block: &ethpb.GenericBeaconBlock_Fulu{
Fulu: proto,
},
IsBlinded: false,
}
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
}
func TestGetBeaconBlock_SSZ_BlindedFuluValid(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
proto := testhelpers.GenerateProtoBlindedFuluBeaconBlock()
bytes, err := proto.MarshalSSZ()
require.NoError(t, err)
const slot = primitives.Slot(1)
randaoReveal := []byte{2}
graffiti := []byte{3}
ctx := t.Context()
handler := mock.NewMockHandler(ctrl)
handler.EXPECT().GetSSZ(
gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return(
bytes,
http.Header{
"Content-Type": []string{api.OctetStreamMediaType},
api.VersionHeader: []string{"fulu"},
api.ExecutionPayloadBlindedHeader: []string{"true"},
},
nil,
).Times(1)
validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err)
expectedBeaconBlock := &ethpb.GenericBeaconBlock{
Block: &ethpb.GenericBeaconBlock_BlindedFulu{
BlindedFulu: proto,
},
IsBlinded: true,
}
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
}
func TestGetBeaconBlock_SSZ_UnsupportedVersion(t *testing.T) { func TestGetBeaconBlock_SSZ_UnsupportedVersion(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
@@ -556,8 +660,8 @@ func TestGetBeaconBlock_SSZ_UnsupportedVersion(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -570,7 +674,7 @@ func TestGetBeaconBlock_SSZ_UnsupportedVersion(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) _, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
assert.ErrorContains(t, "version name doesn't map to a known value in the enum", err) assert.ErrorContains(t, "version name doesn't map to a known value in the enum", err)
} }
@@ -589,8 +693,8 @@ func TestGetBeaconBlock_SSZ_InvalidBlindedHeader(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -603,7 +707,7 @@ func TestGetBeaconBlock_SSZ_InvalidBlindedHeader(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err = validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) _, err = validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
assert.ErrorContains(t, "strconv.ParseBool: parsing \"invalid\": invalid syntax", err) assert.ErrorContains(t, "strconv.ParseBool: parsing \"invalid\": invalid syntax", err)
} }
@@ -622,8 +726,8 @@ func TestGetBeaconBlock_SSZ_InvalidVersionHeader(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -636,7 +740,7 @@ func TestGetBeaconBlock_SSZ_InvalidVersionHeader(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err = validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) _, err = validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
assert.ErrorContains(t, "unsupported header version invalid", err) assert.ErrorContains(t, "unsupported header version invalid", err)
} }
@@ -651,8 +755,8 @@ func TestGetBeaconBlock_SSZ_GetSSZError(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -661,7 +765,7 @@ func TestGetBeaconBlock_SSZ_GetSSZError(t *testing.T) {
errors.New("get ssz error"), errors.New("get ssz error"),
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) _, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
assert.ErrorContains(t, "get ssz error", err) assert.ErrorContains(t, "get ssz error", err)
} }
@@ -680,8 +784,8 @@ func TestGetBeaconBlock_SSZ_Phase0Valid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -694,7 +798,7 @@ func TestGetBeaconBlock_SSZ_Phase0Valid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -722,8 +826,8 @@ func TestGetBeaconBlock_SSZ_AltairValid(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -736,7 +840,7 @@ func TestGetBeaconBlock_SSZ_AltairValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -770,8 +874,8 @@ func TestGetBeaconBlock_AltairValid(t *testing.T) {
Data: bytes, Data: bytes,
}) })
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -780,7 +884,7 @@ func TestGetBeaconBlock_AltairValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -814,8 +918,8 @@ func TestGetBeaconBlock_BellatrixValid(t *testing.T) {
Data: bytes, Data: bytes,
}) })
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -824,7 +928,7 @@ func TestGetBeaconBlock_BellatrixValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -859,8 +963,8 @@ func TestGetBeaconBlock_BlindedBellatrixValid(t *testing.T) {
Data: bytes, Data: bytes,
}) })
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -869,7 +973,7 @@ func TestGetBeaconBlock_BlindedBellatrixValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -904,8 +1008,8 @@ func TestGetBeaconBlock_CapellaValid(t *testing.T) {
Data: bytes, Data: bytes,
}) })
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -914,7 +1018,7 @@ func TestGetBeaconBlock_CapellaValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -949,8 +1053,8 @@ func TestGetBeaconBlock_BlindedCapellaValid(t *testing.T) {
Data: bytes, Data: bytes,
}) })
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockHandler(ctrl)
jsonRestHandler.EXPECT().GetSSZ( handler.EXPECT().GetSSZ(
gomock.Any(), gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)), fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return( ).Return(
@@ -959,7 +1063,7 @@ func TestGetBeaconBlock_BlindedCapellaValid(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti) beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err) require.NoError(t, err)
@@ -973,6 +1077,96 @@ func TestGetBeaconBlock_BlindedCapellaValid(t *testing.T) {
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock) assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
} }
func TestGetBeaconBlock_FuluValid(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
proto := testhelpers.GenerateProtoFuluBeaconBlockContents()
block := testhelpers.GenerateJsonFuluBeaconBlockContents()
bytes, err := json.Marshal(block)
require.NoError(t, err)
const slot = primitives.Slot(1)
randaoReveal := []byte{2}
graffiti := []byte{3}
ctx := t.Context()
b, err := json.Marshal(structs.ProduceBlockV3Response{
Version: "fulu",
ExecutionPayloadBlinded: false,
Data: bytes,
})
require.NoError(t, err)
handler := mock.NewMockHandler(ctrl)
handler.EXPECT().GetSSZ(
gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return(
b,
http.Header{"Content-Type": []string{"application/json"}},
nil,
).Times(1)
validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err)
expectedBeaconBlock := &ethpb.GenericBeaconBlock{
Block: &ethpb.GenericBeaconBlock_Fulu{
Fulu: proto,
},
IsBlinded: false,
}
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
}
func TestGetBeaconBlock_BlindedFuluValid(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
proto := testhelpers.GenerateProtoBlindedFuluBeaconBlock()
block := testhelpers.GenerateJsonBlindedFuluBeaconBlock()
bytes, err := json.Marshal(block)
require.NoError(t, err)
const slot = primitives.Slot(1)
randaoReveal := []byte{2}
graffiti := []byte{3}
ctx := t.Context()
b, err := json.Marshal(structs.ProduceBlockV3Response{
Version: "fulu",
ExecutionPayloadBlinded: true,
Data: bytes,
})
require.NoError(t, err)
handler := mock.NewMockHandler(ctrl)
handler.EXPECT().GetSSZ(
gomock.Any(),
fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
).Return(
b,
http.Header{"Content-Type": []string{"application/json"}},
nil,
).Times(1)
validatorClient := &beaconApiValidatorClient{handler: handler}
beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
require.NoError(t, err)
expectedBeaconBlock := &ethpb.GenericBeaconBlock{
Block: &ethpb.GenericBeaconBlock_BlindedFulu{
BlindedFulu: proto,
},
IsBlinded: true,
}
assert.DeepEqual(t, expectedBeaconBlock, beaconBlock)
}
 func TestGetBeaconBlock_DenebValid(t *testing.T) {
 	ctrl := gomock.NewController(t)
 	defer ctrl.Finish()
@@ -994,8 +1188,8 @@ func TestGetBeaconBlock_DenebValid(t *testing.T) {
 		Data: bytes,
 	})
 	require.NoError(t, err)
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().GetSSZ(
+	handler := mock.NewMockHandler(ctrl)
+	handler.EXPECT().GetSSZ(
 		gomock.Any(),
 		fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
 	).Return(
@@ -1004,7 +1198,7 @@ func TestGetBeaconBlock_DenebValid(t *testing.T) {
 		nil,
 	).Times(1)
-	validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler}
+	validatorClient := &beaconApiValidatorClient{handler: handler}
 	beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
 	require.NoError(t, err)
@@ -1039,8 +1233,8 @@ func TestGetBeaconBlock_BlindedDenebValid(t *testing.T) {
 		Data: bytes,
 	})
 	require.NoError(t, err)
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().GetSSZ(
+	handler := mock.NewMockHandler(ctrl)
+	handler.EXPECT().GetSSZ(
 		gomock.Any(),
 		fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
 	).Return(
@@ -1049,7 +1243,7 @@ func TestGetBeaconBlock_BlindedDenebValid(t *testing.T) {
 		nil,
 	).Times(1)
-	validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler}
+	validatorClient := &beaconApiValidatorClient{handler: handler}
 	beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
 	require.NoError(t, err)
@@ -1084,8 +1278,8 @@ func TestGetBeaconBlock_ElectraValid(t *testing.T) {
 		Data: bytes,
 	})
 	require.NoError(t, err)
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().GetSSZ(
+	handler := mock.NewMockHandler(ctrl)
+	handler.EXPECT().GetSSZ(
 		gomock.Any(),
 		fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
 	).Return(
@@ -1094,7 +1288,7 @@ func TestGetBeaconBlock_ElectraValid(t *testing.T) {
 		nil,
 	).Times(1)
-	validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler}
+	validatorClient := &beaconApiValidatorClient{handler: handler}
 	beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
 	require.NoError(t, err)
@@ -1129,8 +1323,8 @@ func TestGetBeaconBlock_BlindedElectraValid(t *testing.T) {
 		Data: bytes,
 	})
 	require.NoError(t, err)
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().GetSSZ(
+	handler := mock.NewMockHandler(ctrl)
+	handler.EXPECT().GetSSZ(
 		gomock.Any(),
 		fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s", slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
 	).Return(
@@ -1139,7 +1333,7 @@ func TestGetBeaconBlock_BlindedElectraValid(t *testing.T) {
 		nil,
 	).Times(1)
-	validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler}
+	validatorClient := &beaconApiValidatorClient{handler: handler}
 	beaconBlock, err := validatorClient.beaconBlock(ctx, slot, randaoReveal, graffiti)
 	require.NoError(t, err)
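The four fork variants above share the same wiring: a `MockHandler` that expects exactly one `GetSSZ` call against the v3 block-production endpoint, handed to `beaconApiValidatorClient` via its `handler` field. A minimal sketch of that shared setup, extracted purely for illustration (the helper name, package name, and the `go.uber.org/mock/gomock` import path are my assumptions, not part of this diff):

```go
package beaconapi_test

import (
	"fmt"
	"net/http"
	"testing"

	"github.com/OffchainLabs/prysm/v7/validator/client/beacon-api/mock"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"go.uber.org/mock/gomock"
)

// expectBlockV3 is a hypothetical helper mirroring the setup repeated in the tests above:
// one GetSSZ expectation against /eth/v3/validator/blocks/{slot} returning pre-marshalled SSZ.
func expectBlockV3(t *testing.T, slot uint64, randaoReveal, graffiti, sszBytes []byte) *mock.MockHandler {
	ctrl := gomock.NewController(t)
	handler := mock.NewMockHandler(ctrl)
	handler.EXPECT().GetSSZ(
		gomock.Any(),
		fmt.Sprintf("/eth/v3/validator/blocks/%d?graffiti=%s&randao_reveal=%s",
			slot, hexutil.Encode(graffiti), hexutil.Encode(randaoReveal)),
	).Return(sszBytes, http.Header{}, nil).Times(1)
	return handler
}
```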


@@ -41,9 +41,9 @@ func TestIndex_Nominal(t *testing.T) {
 	ctx := t.Context()
 	stateValidatorsResponseJson := structs.GetValidatorsResponse{}
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().Post(
+	handler := mock.NewMockJsonRestHandler(ctrl)
+	handler.EXPECT().Post(
 		gomock.Any(),
 		"/eth/v1/beacon/states/head/validators",
 		nil,
@@ -68,7 +68,7 @@ func TestIndex_Nominal(t *testing.T) {
 	validatorClient := beaconApiValidatorClient{
 		stateValidatorsProvider: beaconApiStateValidatorsProvider{
-			jsonRestHandler: jsonRestHandler,
+			handler: handler,
 		},
 	}
@@ -91,9 +91,9 @@ func TestIndex_UnexistingValidator(t *testing.T) {
 	ctx := t.Context()
 	stateValidatorsResponseJson := structs.GetValidatorsResponse{}
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().Post(
+	handler := mock.NewMockJsonRestHandler(ctrl)
+	handler.EXPECT().Post(
 		gomock.Any(),
 		"/eth/v1/beacon/states/head/validators",
 		nil,
@@ -110,7 +110,7 @@ func TestIndex_UnexistingValidator(t *testing.T) {
 	validatorClient := beaconApiValidatorClient{
 		stateValidatorsProvider: beaconApiStateValidatorsProvider{
-			jsonRestHandler: jsonRestHandler,
+			handler: handler,
 		},
 	}
@@ -133,9 +133,9 @@ func TestIndex_BadIndexError(t *testing.T) {
 	ctx := t.Context()
 	stateValidatorsResponseJson := structs.GetValidatorsResponse{}
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().Post(
+	handler := mock.NewMockJsonRestHandler(ctrl)
+	handler.EXPECT().Post(
 		gomock.Any(),
 		"/eth/v1/beacon/states/head/validators",
 		nil,
@@ -160,7 +160,7 @@ func TestIndex_BadIndexError(t *testing.T) {
 	validatorClient := beaconApiValidatorClient{
 		stateValidatorsProvider: beaconApiStateValidatorsProvider{
-			jsonRestHandler: jsonRestHandler,
+			handler: handler,
 		},
 	}
@@ -182,9 +182,9 @@ func TestIndex_JsonResponseError(t *testing.T) {
 	ctx := t.Context()
 	stateValidatorsResponseJson := structs.GetValidatorsResponse{}
-	jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
-	jsonRestHandler.EXPECT().Post(
+	handler := mock.NewMockJsonRestHandler(ctrl)
+	handler.EXPECT().Post(
 		gomock.Any(),
 		"/eth/v1/beacon/states/head/validators",
 		nil,
@@ -207,7 +207,7 @@ func TestIndex_JsonResponseError(t *testing.T) {
 		queryParams.Add("status", st)
 	}
-	jsonRestHandler.EXPECT().Get(
+	handler.EXPECT().Get(
 		gomock.Any(),
 		apiutil.BuildURL("/eth/v1/beacon/states/head/validators", queryParams),
 		&stateValidatorsResponseJson,
@@ -217,7 +217,7 @@ func TestIndex_JsonResponseError(t *testing.T) {
 	validatorClient := beaconApiValidatorClient{
 		stateValidatorsProvider: beaconApiStateValidatorsProvider{
-			jsonRestHandler: jsonRestHandler,
+			handler: handler,
 		},
 	}
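`TestIndex_JsonResponseError` expects a GET against a URL assembled by `apiutil.BuildURL` from the base validators path plus `id` and `status` query parameters. A standalone sketch of the same assembly using only `net/url`, mirroring what `BuildURL` presumably produces (the parameter values here are placeholders, not the test's fixtures):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	params := url.Values{}
	for _, id := range []string{"1", "2"} { // validator indices or pubkeys
		params.Add("id", id)
	}
	params.Add("status", "active_ongoing")

	u := url.URL{Path: "/eth/v1/beacon/states/head/validators", RawQuery: params.Encode()}
	fmt.Println(u.String()) // /eth/v1/beacon/states/head/validators?id=1&id=2&status=active_ongoing
}
```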


@@ -1,9 +1,9 @@
 // Code generated by MockGen. DO NOT EDIT.
-// Source: validator/client/beacon-api/rest_handler_client.go
+// Source: api/rest/rest_handler.go
 //
 // Generated by this command:
 //
-//	mockgen -package=mock -source=validator/client/beacon-api/rest_handler_client.go -destination=validator/client/beacon-api/mock/json_rest_handler_mock.go RestHandler
+//	mockgen -package=mock -source=api/rest/rest_handler.go -destination=validator/client/beacon-api/mock/json_rest_handler_mock.go Handler
 //
 // Package mock is a generated GoMock package.
@@ -19,36 +19,39 @@ import (
 )
 
 // Backward compatibility aliases for the renamed mock type.
-type MockJsonRestHandler = MockRestHandler
-type MockJsonRestHandlerMockRecorder = MockRestHandlerMockRecorder
+type MockJsonRestHandler = MockHandler
+type MockJsonRestHandlerMockRecorder = MockHandlerMockRecorder
+type MockRestHandler = MockHandler
+type MockRestHandlerMockRecorder = MockHandlerMockRecorder
 
-var NewMockJsonRestHandler = NewMockRestHandler
+var NewMockJsonRestHandler = NewMockHandler
+var NewMockRestHandler = NewMockHandler
// MockRestHandler is a mock of RestHandler interface. // MockHandler is a mock of Handler interface.
type MockRestHandler struct { type MockHandler struct {
ctrl *gomock.Controller ctrl *gomock.Controller
recorder *MockRestHandlerMockRecorder recorder *MockHandlerMockRecorder
} }
// MockRestHandlerMockRecorder is the mock recorder for MockRestHandler. // MockHandlerMockRecorder is the mock recorder for MockHandler.
type MockRestHandlerMockRecorder struct { type MockHandlerMockRecorder struct {
mock *MockRestHandler mock *MockHandler
} }
// NewMockRestHandler creates a new mock instance. // NewMockHandler creates a new mock instance.
func NewMockRestHandler(ctrl *gomock.Controller) *MockRestHandler { func NewMockHandler(ctrl *gomock.Controller) *MockHandler {
mock := &MockRestHandler{ctrl: ctrl} mock := &MockHandler{ctrl: ctrl}
mock.recorder = &MockRestHandlerMockRecorder{mock} mock.recorder = &MockHandlerMockRecorder{mock}
return mock return mock
} }
// EXPECT returns an object that allows the caller to indicate expected use. // EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockRestHandler) EXPECT() *MockRestHandlerMockRecorder { func (m *MockHandler) EXPECT() *MockHandlerMockRecorder {
return m.recorder return m.recorder
} }
// Get mocks base method. // Get mocks base method.
func (m *MockRestHandler) Get(ctx context.Context, endpoint string, resp any) error { func (m *MockHandler) Get(ctx context.Context, endpoint string, resp any) error {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Get", ctx, endpoint, resp) ret := m.ctrl.Call(m, "Get", ctx, endpoint, resp)
ret0, _ := ret[0].(error) ret0, _ := ret[0].(error)
@@ -56,13 +59,13 @@ func (m *MockRestHandler) Get(ctx context.Context, endpoint string, resp any) er
} }
// Get indicates an expected call of Get. // Get indicates an expected call of Get.
func (mr *MockRestHandlerMockRecorder) Get(ctx, endpoint, resp any) *gomock.Call { func (mr *MockHandlerMockRecorder) Get(ctx, endpoint, resp any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockRestHandler)(nil).Get), ctx, endpoint, resp) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockHandler)(nil).Get), ctx, endpoint, resp)
} }
// GetSSZ mocks base method. // GetSSZ mocks base method.
func (m *MockRestHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) { func (m *MockHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetSSZ", ctx, endpoint) ret := m.ctrl.Call(m, "GetSSZ", ctx, endpoint)
ret0, _ := ret[0].([]byte) ret0, _ := ret[0].([]byte)
@@ -72,13 +75,13 @@ func (m *MockRestHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte,
} }
// GetSSZ indicates an expected call of GetSSZ. // GetSSZ indicates an expected call of GetSSZ.
func (mr *MockRestHandlerMockRecorder) GetSSZ(ctx, endpoint any) *gomock.Call { func (mr *MockHandlerMockRecorder) GetSSZ(ctx, endpoint any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetSSZ", reflect.TypeOf((*MockRestHandler)(nil).GetSSZ), ctx, endpoint) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetSSZ", reflect.TypeOf((*MockHandler)(nil).GetSSZ), ctx, endpoint)
} }
// GetStatusCode mocks base method. // GetStatusCode mocks base method.
func (m *MockRestHandler) GetStatusCode(ctx context.Context, endpoint string) (int, error) { func (m *MockHandler) GetStatusCode(ctx context.Context, endpoint string) (int, error) {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetStatusCode", ctx, endpoint) ret := m.ctrl.Call(m, "GetStatusCode", ctx, endpoint)
ret0, _ := ret[0].(int) ret0, _ := ret[0].(int)
@@ -87,13 +90,13 @@ func (m *MockRestHandler) GetStatusCode(ctx context.Context, endpoint string) (i
} }
// GetStatusCode indicates an expected call of GetStatusCode. // GetStatusCode indicates an expected call of GetStatusCode.
func (mr *MockRestHandlerMockRecorder) GetStatusCode(ctx, endpoint any) *gomock.Call { func (mr *MockHandlerMockRecorder) GetStatusCode(ctx, endpoint any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetStatusCode", reflect.TypeOf((*MockRestHandler)(nil).GetStatusCode), ctx, endpoint) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetStatusCode", reflect.TypeOf((*MockHandler)(nil).GetStatusCode), ctx, endpoint)
} }
// Host mocks base method. // Host mocks base method.
func (m *MockRestHandler) Host() string { func (m *MockHandler) Host() string {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Host") ret := m.ctrl.Call(m, "Host")
ret0, _ := ret[0].(string) ret0, _ := ret[0].(string)
@@ -101,27 +104,13 @@ func (m *MockRestHandler) Host() string {
} }
// Host indicates an expected call of Host. // Host indicates an expected call of Host.
func (mr *MockRestHandlerMockRecorder) Host() *gomock.Call { func (mr *MockHandlerMockRecorder) Host() *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Host", reflect.TypeOf((*MockRestHandler)(nil).Host)) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Host", reflect.TypeOf((*MockHandler)(nil).Host))
}
// HttpClient mocks base method.
func (m *MockRestHandler) HttpClient() *http.Client {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "HttpClient")
ret0, _ := ret[0].(*http.Client)
return ret0
}
// HttpClient indicates an expected call of HttpClient.
func (mr *MockRestHandlerMockRecorder) HttpClient() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "HttpClient", reflect.TypeOf((*MockRestHandler)(nil).HttpClient))
} }
// Post mocks base method. // Post mocks base method.
func (m *MockRestHandler) Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error { func (m *MockHandler) Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Post", ctx, endpoint, headers, data, resp) ret := m.ctrl.Call(m, "Post", ctx, endpoint, headers, data, resp)
ret0, _ := ret[0].(error) ret0, _ := ret[0].(error)
@@ -129,13 +118,13 @@ func (m *MockRestHandler) Post(ctx context.Context, endpoint string, headers map
} }
// Post indicates an expected call of Post. // Post indicates an expected call of Post.
func (mr *MockRestHandlerMockRecorder) Post(ctx, endpoint, headers, data, resp any) *gomock.Call { func (mr *MockHandlerMockRecorder) Post(ctx, endpoint, headers, data, resp any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Post", reflect.TypeOf((*MockRestHandler)(nil).Post), ctx, endpoint, headers, data, resp) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Post", reflect.TypeOf((*MockHandler)(nil).Post), ctx, endpoint, headers, data, resp)
} }
// PostSSZ mocks base method. // PostSSZ mocks base method.
func (m *MockRestHandler) PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error) { func (m *MockHandler) PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error) {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "PostSSZ", ctx, endpoint, headers, data) ret := m.ctrl.Call(m, "PostSSZ", ctx, endpoint, headers, data)
ret0, _ := ret[0].([]byte) ret0, _ := ret[0].([]byte)
@@ -145,19 +134,7 @@ func (m *MockRestHandler) PostSSZ(ctx context.Context, endpoint string, headers
} }
// PostSSZ indicates an expected call of PostSSZ. // PostSSZ indicates an expected call of PostSSZ.
func (mr *MockRestHandlerMockRecorder) PostSSZ(ctx, endpoint, headers, data any) *gomock.Call { func (mr *MockHandlerMockRecorder) PostSSZ(ctx, endpoint, headers, data any) *gomock.Call {
mr.mock.ctrl.T.Helper() mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PostSSZ", reflect.TypeOf((*MockRestHandler)(nil).PostSSZ), ctx, endpoint, headers, data) return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PostSSZ", reflect.TypeOf((*MockHandler)(nil).PostSSZ), ctx, endpoint, headers, data)
}
// SetHost mocks base method.
func (m *MockRestHandler) SetHost(host string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetHost", host)
}
// SetHost indicates an expected call of SetHost.
func (mr *MockRestHandlerMockRecorder) SetHost(host any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetHost", reflect.TypeOf((*MockRestHandler)(nil).SetHost), host)
} }
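Reading back from the mock's method set, the renamed `Handler` interface in `api/rest/rest_handler.go` presumably looks roughly like the sketch below; note that `HttpClient` and `SetHost` have no counterpart anymore, matching the two methods deleted from the mock. This is an inference from the generated code, not a copy of the real file, and the package name is assumed:

```go
package rest

import (
	"bytes"
	"context"
	"net/http"
)

// Handler as implied by MockHandler; signatures are taken from the mock methods above.
type Handler interface {
	Get(ctx context.Context, endpoint string, resp any) error
	GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error)
	GetStatusCode(ctx context.Context, endpoint string) (int, error)
	Host() string
	Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error
	PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error)
}
```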


@@ -26,5 +26,5 @@ func (c *beaconApiValidatorClient) prepareBeaconProposer(ctx context.Context, re
 		return errors.Wrap(err, "failed to marshal recipients")
 	}
-	return c.jsonRestHandler.Post(ctx, "/eth/v1/validator/prepare_beacon_proposer", nil, bytes.NewBuffer(marshalledJsonRecipients), nil)
+	return c.handler.Post(ctx, "/eth/v1/validator/prepare_beacon_proposer", nil, bytes.NewBuffer(marshalledJsonRecipients), nil)
 }
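`prepareBeaconProposer` is now a thin wrapper: marshal the recipients, then a single `handler.Post` with a nil header map and no response target. A self-contained sketch of that marshal-and-post shape against the minimal interface it needs (the `poster` and `postJSON` names are illustrative, not Prysm identifiers):

```go
package example

import (
	"bytes"
	"context"
	"encoding/json"

	"github.com/pkg/errors"
)

type poster interface {
	Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error
}

// postJSON mirrors the marshal-then-Post pattern used by prepareBeaconProposer above.
func postJSON(ctx context.Context, p poster, endpoint string, v any) error {
	body, err := json.Marshal(v)
	if err != nil {
		return errors.Wrap(err, "failed to marshal request body")
	}
	// nil headers and a nil response target mirror the prepare_beacon_proposer call.
	return p.Post(ctx, endpoint, nil, bytes.NewBuffer(body), nil)
}
```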


@@ -45,8 +45,8 @@ func TestPrepareBeaconProposer_Valid(t *testing.T) {
marshalledJsonRecipients, err := json.Marshal(jsonRecipients) marshalledJsonRecipients, err := json.Marshal(jsonRecipients)
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
prepareBeaconProposerTestEndpoint, prepareBeaconProposerTestEndpoint,
nil, nil,
@@ -78,7 +78,7 @@ func TestPrepareBeaconProposer_Valid(t *testing.T) {
}, },
} }
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
err = validatorClient.prepareBeaconProposer(ctx, protoRecipients) err = validatorClient.prepareBeaconProposer(ctx, protoRecipients)
require.NoError(t, err) require.NoError(t, err)
} }
@@ -89,8 +89,8 @@ func TestPrepareBeaconProposer_BadRequest(t *testing.T) {
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
prepareBeaconProposerTestEndpoint, prepareBeaconProposerTestEndpoint,
nil, nil,
@@ -100,7 +100,7 @@ func TestPrepareBeaconProposer_BadRequest(t *testing.T) {
errors.New("foo error"), errors.New("foo error"),
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
err := validatorClient.prepareBeaconProposer(ctx, nil) err := validatorClient.prepareBeaconProposer(ctx, nil)
assert.ErrorContains(t, "foo error", err) assert.ErrorContains(t, "foo error", err)
} }


@@ -22,7 +22,7 @@ func (c *beaconApiValidatorClient) proposeAttestation(ctx context.Context, attes
 	}
 	headers := map[string]string{"Eth-Consensus-Version": version.String(attestation.Version())}
-	err = c.jsonRestHandler.Post(
+	err = c.handler.Post(
 		ctx,
 		"/eth/v2/beacon/pool/attestations",
 		headers,
@@ -51,7 +51,7 @@ func (c *beaconApiValidatorClient) proposeAttestationElectra(ctx context.Context
 	}
 	consensusVersion := version.String(slots.ToForkVersion(attestation.Data.Slot))
 	headers := map[string]string{"Eth-Consensus-Version": consensusVersion}
-	if err = c.jsonRestHandler.Post(
+	if err = c.handler.Post(
 		ctx,
 		"/eth/v2/beacon/pool/attestations",
 		headers,
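Both attestation submission paths set an `Eth-Consensus-Version` header; the Electra path derives it from the attestation's slot via `slots.ToForkVersion`. A small sketch of that header construction in isolation (the import paths are my best guess at the Prysm packages involved, not confirmed by this diff):

```go
package example

import (
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v7/runtime/version"
	"github.com/OffchainLabs/prysm/v7/time/slots"
)

// consensusVersionHeader mirrors the header construction in proposeAttestationElectra.
func consensusVersionHeader(slot primitives.Slot) map[string]string {
	return map[string]string{
		"Eth-Consensus-Version": version.String(slots.ToForkVersion(slot)),
	}
}
```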


@@ -107,7 +107,7 @@ func TestProposeAttestation(t *testing.T) {
for _, test := range tests { for _, test := range tests {
t.Run(test.name, func(t *testing.T) { t.Run(test.name, func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var marshalledAttestations []byte var marshalledAttestations []byte
if helpers.ValidateNilAttestation(test.attestation) == nil { if helpers.ValidateNilAttestation(test.attestation) == nil {
@@ -119,7 +119,7 @@ func TestProposeAttestation(t *testing.T) {
ctx := t.Context() ctx := t.Context()
headers := map[string]string{"Eth-Consensus-Version": version.String(test.attestation.Version())} headers := map[string]string{"Eth-Consensus-Version": version.String(test.attestation.Version())}
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/pool/attestations", "/eth/v2/beacon/pool/attestations",
headers, headers,
@@ -129,7 +129,7 @@ func TestProposeAttestation(t *testing.T) {
test.endpointError, test.endpointError,
).Times(test.endpointCall) ).Times(test.endpointCall)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeAttestation(ctx, test.attestation) proposeResponse, err := validatorClient.proposeAttestation(ctx, test.attestation)
if test.expectedErrorMessage != "" { if test.expectedErrorMessage != "" {
require.ErrorContains(t, test.expectedErrorMessage, err) require.ErrorContains(t, test.expectedErrorMessage, err)
@@ -254,7 +254,7 @@ func TestProposeAttestationElectra(t *testing.T) {
for _, test := range tests { for _, test := range tests {
t.Run(test.name, func(t *testing.T) { t.Run(test.name, func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var marshalledAttestations []byte var marshalledAttestations []byte
if helpers.ValidateNilAttestation(test.attestation) == nil { if helpers.ValidateNilAttestation(test.attestation) == nil {
@@ -268,7 +268,7 @@ func TestProposeAttestationElectra(t *testing.T) {
if test.expectedConsensusVersion != "" { if test.expectedConsensusVersion != "" {
headerMatcher = gomock.Eq(map[string]string{"Eth-Consensus-Version": test.expectedConsensusVersion}) headerMatcher = gomock.Eq(map[string]string{"Eth-Consensus-Version": test.expectedConsensusVersion})
} }
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/pool/attestations", "/eth/v2/beacon/pool/attestations",
headerMatcher, headerMatcher,
@@ -278,7 +278,7 @@ func TestProposeAttestationElectra(t *testing.T) {
test.endpointError, test.endpointError,
).Times(test.endpointCall) ).Times(test.endpointCall)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeAttestationElectra(ctx, test.attestation) proposeResponse, err := validatorClient.proposeAttestationElectra(ctx, test.attestation)
if test.expectedErrorMessage != "" { if test.expectedErrorMessage != "" {
require.ErrorContains(t, test.expectedErrorMessage, err) require.ErrorContains(t, test.expectedErrorMessage, err)


@@ -7,6 +7,7 @@ import (
 	"net/http"
 
 	"github.com/OffchainLabs/prysm/v7/api/server/structs"
+	"github.com/OffchainLabs/prysm/v7/encoding/ssz"
 	"github.com/OffchainLabs/prysm/v7/network/httputil"
 	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
 	"github.com/pkg/errors"
@@ -21,34 +22,128 @@ type blockProcessingResult struct {
 	marshalJSON func() ([]byte, error)
 }
 
+type sszMarshaler interface {
+	MarshalSSZ() ([]byte, error)
+}
+
+func buildBlockResult(
+	versionName string,
+	blinded bool,
+	sszObj sszMarshaler,
+	rootObj ssz.Hashable,
+	jsonFn func() ([]byte, error),
+) (*blockProcessingResult, error) {
+	beaconBlockRoot, err := rootObj.HashTreeRoot()
+	if err != nil {
+		return nil, errors.Wrapf(err, "failed to compute block root for %s beacon block", versionName)
+	}
+	marshaledSSZ, err := sszObj.MarshalSSZ()
+	if err != nil {
+		return nil, errors.Wrapf(err, "failed to serialize %s beacon block", versionName)
+	}
+	return &blockProcessingResult{
+		consensusVersion: versionName,
+		blinded:          blinded,
+		beaconBlockRoot:  beaconBlockRoot,
+		marshalledSSZ:    marshaledSSZ,
+		marshalJSON:      jsonFn,
+	}, nil
+}
+
 func (c *beaconApiValidatorClient) proposeBeaconBlock(ctx context.Context, in *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
 	var res *blockProcessingResult
 	var err error
 	switch blockType := in.Block.(type) {
 	case *ethpb.GenericSignedBeaconBlock_Phase0:
-		res, err = handlePhase0Block(blockType)
+		res, err = buildBlockResult("phase0", false, blockType.Phase0, blockType.Phase0.Block, func() ([]byte, error) {
+			return json.Marshal(structs.SignedBeaconBlockPhase0FromConsensus(blockType.Phase0))
+		})
 	case *ethpb.GenericSignedBeaconBlock_Altair:
-		res, err = handleAltairBlock(blockType)
+		res, err = buildBlockResult("altair", false, blockType.Altair, blockType.Altair.Block, func() ([]byte, error) {
+			return json.Marshal(structs.SignedBeaconBlockAltairFromConsensus(blockType.Altair))
+		})
 	case *ethpb.GenericSignedBeaconBlock_Bellatrix:
-		res, err = handleBellatrixBlock(blockType)
+		res, err = buildBlockResult("bellatrix", false, blockType.Bellatrix, blockType.Bellatrix.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBeaconBlockBellatrixFromConsensus(blockType.Bellatrix)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert bellatrix beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_BlindedBellatrix:
-		res, err = handleBlindedBellatrixBlock(blockType)
+		res, err = buildBlockResult("bellatrix", true, blockType.BlindedBellatrix, blockType.BlindedBellatrix.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(blockType.BlindedBellatrix)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert blinded bellatrix beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_Capella:
-		res, err = handleCapellaBlock(blockType)
+		res, err = buildBlockResult("capella", false, blockType.Capella, blockType.Capella.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBeaconBlockCapellaFromConsensus(blockType.Capella)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert capella beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_BlindedCapella:
-		res, err = handleBlindedCapellaBlock(blockType)
+		res, err = buildBlockResult("capella", true, blockType.BlindedCapella, blockType.BlindedCapella.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(blockType.BlindedCapella)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert blinded capella beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_Deneb:
-		res, err = handleDenebBlockContents(blockType)
+		res, err = buildBlockResult("deneb", false, blockType.Deneb, blockType.Deneb.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBeaconBlockContentsDenebFromConsensus(blockType.Deneb)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert deneb beacon block contents")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_BlindedDeneb:
-		res, err = handleBlindedDenebBlock(blockType)
+		res, err = buildBlockResult("deneb", true, blockType.BlindedDeneb, blockType.BlindedDeneb, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(blockType.BlindedDeneb)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert deneb blinded beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_Electra:
-		res, err = handleElectraBlockContents(blockType)
+		res, err = buildBlockResult("electra", false, blockType.Electra, blockType.Electra.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBeaconBlockContentsElectraFromConsensus(blockType.Electra)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert electra beacon block contents")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_BlindedElectra:
-		res, err = handleBlindedElectraBlock(blockType)
+		res, err = buildBlockResult("electra", true, blockType.BlindedElectra, blockType.BlindedElectra, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBlindedBeaconBlockElectraFromConsensus(blockType.BlindedElectra)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert electra blinded beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_Fulu:
-		res, err = handleFuluBlockContents(blockType)
+		res, err = buildBlockResult("fulu", false, blockType.Fulu, blockType.Fulu.Block, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBeaconBlockContentsFuluFromConsensus(blockType.Fulu)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert fulu beacon block contents")
+			}
+			return json.Marshal(signedBlock)
+		})
 	case *ethpb.GenericSignedBeaconBlock_BlindedFulu:
-		res, err = handleBlindedFuluBlock(blockType)
+		res, err = buildBlockResult("fulu", true, blockType.BlindedFulu, blockType.BlindedFulu, func() ([]byte, error) {
+			signedBlock, err := structs.SignedBlindedBeaconBlockFuluFromConsensus(blockType.BlindedFulu)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to convert fulu blinded beacon block")
+			}
+			return json.Marshal(signedBlock)
+		})
 	default:
 		return nil, errors.Errorf("unsupported block type %T", in.Block)
 	}
@@ -67,7 +162,7 @@ func (c *beaconApiValidatorClient) proposeBeaconBlock(ctx context.Context, in *e
 	// Try PostSSZ first with SSZ data
 	if res.marshalledSSZ != nil {
-		_, _, err = c.jsonRestHandler.PostSSZ(ctx, endpoint, headers, bytes.NewBuffer(res.marshalledSSZ))
+		_, _, err = c.handler.PostSSZ(ctx, endpoint, headers, bytes.NewBuffer(res.marshalledSSZ))
 		if err != nil {
 			errJson := &httputil.DefaultJsonError{}
 			// If PostSSZ fails with 406 (Not Acceptable), fall back to JSON
@@ -81,7 +176,7 @@ func (c *beaconApiValidatorClient) proposeBeaconBlock(ctx context.Context, in *e
 				return nil, errors.Wrap(jsonErr, "failed to marshal JSON")
 			}
 			// Reset headers for JSON
-			err = c.jsonRestHandler.Post(ctx, endpoint, headers, bytes.NewBuffer(jsonData), nil)
+			err = c.handler.Post(ctx, endpoint, headers, bytes.NewBuffer(jsonData), nil)
 			// If JSON also fails, return that error
 			if err != nil {
 				return nil, errors.Wrap(err, "failed to submit block via JSON fallback")
@@ -100,7 +195,7 @@ func (c *beaconApiValidatorClient) proposeBeaconBlock(ctx context.Context, in *e
 			return nil, errors.Wrap(jsonErr, "failed to marshal JSON")
 		}
 		// Reset headers for JSON
-		err = c.jsonRestHandler.Post(ctx, endpoint, headers, bytes.NewBuffer(jsonData), nil)
+		err = c.handler.Post(ctx, endpoint, headers, bytes.NewBuffer(jsonData), nil)
 		errJson := &httputil.DefaultJsonError{}
 		if err != nil {
 			if !errors.As(err, &errJson) {
@@ -116,357 +211,3 @@ func (c *beaconApiValidatorClient) proposeBeaconBlock(ctx context.Context, in *e
 	return &ethpb.ProposeResponse{BlockRoot: res.beaconBlockRoot[:]}, nil
 }
func handlePhase0Block(block *ethpb.GenericSignedBeaconBlock_Phase0) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "phase0"
res.blinded = false
beaconBlockRoot, err := block.Phase0.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for phase0 beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Phase0.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize block for phase0 beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock := structs.SignedBeaconBlockPhase0FromConsensus(block.Phase0)
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleAltairBlock(block *ethpb.GenericSignedBeaconBlock_Altair) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "altair"
res.blinded = false
beaconBlockRoot, err := block.Altair.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for altair beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Altair.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize block for altair beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock := structs.SignedBeaconBlockAltairFromConsensus(block.Altair)
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleBellatrixBlock(block *ethpb.GenericSignedBeaconBlock_Bellatrix) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "bellatrix"
res.blinded = false
beaconBlockRoot, err := block.Bellatrix.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for bellatrix beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Bellatrix.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize block for bellatrix beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBeaconBlockBellatrixFromConsensus(block.Bellatrix)
if err != nil {
return nil, errors.Wrap(err, "failed to convert bellatrix beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleBlindedBellatrixBlock(block *ethpb.GenericSignedBeaconBlock_BlindedBellatrix) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "bellatrix"
res.blinded = true
beaconBlockRoot, err := block.BlindedBellatrix.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for bellatrix beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.BlindedBellatrix.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize block for bellatrix beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(block.BlindedBellatrix)
if err != nil {
return nil, errors.Wrap(err, "failed to convert blinded bellatrix beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleCapellaBlock(block *ethpb.GenericSignedBeaconBlock_Capella) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "capella"
res.blinded = false
beaconBlockRoot, err := block.Capella.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for capella beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Capella.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize capella beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBeaconBlockCapellaFromConsensus(block.Capella)
if err != nil {
return nil, errors.Wrap(err, "failed to convert capella beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleBlindedCapellaBlock(block *ethpb.GenericSignedBeaconBlock_BlindedCapella) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "capella"
res.blinded = true
beaconBlockRoot, err := block.BlindedCapella.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for blinded capella beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.BlindedCapella.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize blinded capella beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(block.BlindedCapella)
if err != nil {
return nil, errors.Wrap(err, "failed to convert blinded capella beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleDenebBlockContents(block *ethpb.GenericSignedBeaconBlock_Deneb) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "deneb"
res.blinded = false
beaconBlockRoot, err := block.Deneb.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for deneb beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Deneb.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize deneb beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBeaconBlockContentsDenebFromConsensus(block.Deneb)
if err != nil {
return nil, errors.Wrap(err, "failed to convert deneb beacon block contents")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleBlindedDenebBlock(block *ethpb.GenericSignedBeaconBlock_BlindedDeneb) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "deneb"
res.blinded = true
beaconBlockRoot, err := block.BlindedDeneb.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for deneb blinded beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.BlindedDeneb.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize blinded deneb beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(block.BlindedDeneb)
if err != nil {
return nil, errors.Wrap(err, "failed to convert deneb blinded beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleElectraBlockContents(block *ethpb.GenericSignedBeaconBlock_Electra) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "electra"
res.blinded = false
beaconBlockRoot, err := block.Electra.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for electra beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Electra.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize electra beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBeaconBlockContentsElectraFromConsensus(block.Electra)
if err != nil {
return nil, errors.Wrap(err, "failed to convert electra beacon block contents")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleBlindedElectraBlock(block *ethpb.GenericSignedBeaconBlock_BlindedElectra) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "electra"
res.blinded = true
beaconBlockRoot, err := block.BlindedElectra.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for electra blinded beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.BlindedElectra.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize blinded electra beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBlindedBeaconBlockElectraFromConsensus(block.BlindedElectra)
if err != nil {
return nil, errors.Wrap(err, "failed to convert electra blinded beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleFuluBlockContents(block *ethpb.GenericSignedBeaconBlock_Fulu) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "fulu"
res.blinded = false
beaconBlockRoot, err := block.Fulu.Block.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for fulu beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.Fulu.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize fulu beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBeaconBlockContentsFuluFromConsensus(block.Fulu)
if err != nil {
return nil, errors.Wrap(err, "failed to convert fulu beacon block contents")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
func handleBlindedFuluBlock(block *ethpb.GenericSignedBeaconBlock_BlindedFulu) (*blockProcessingResult, error) {
var res blockProcessingResult
res.consensusVersion = "fulu"
res.blinded = true
beaconBlockRoot, err := block.BlindedFulu.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "failed to compute block root for fulu blinded beacon block")
}
res.beaconBlockRoot = beaconBlockRoot
// Marshal SSZ
ssz, err := block.BlindedFulu.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "failed to serialize blinded fulu beacon block")
}
res.marshalledSSZ = ssz
// Set up JSON marshalling function for fallback
res.marshalJSON = func() ([]byte, error) {
signedBlock, err := structs.SignedBlindedBeaconBlockFuluFromConsensus(block.BlindedFulu)
if err != nil {
return nil, errors.Wrap(err, "failed to convert fulu blinded beacon block")
}
return json.Marshal(signedBlock)
}
return &res, nil
}
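The refactor above collapses twelve near-identical handleXxx functions into one buildBlockResult that only needs something hashable, something SSZ-marshalable, and a JSON fallback closure. A self-contained toy version of that shape, with invented types and names for illustration (the real helper is unexported in this package and uses Prysm's ssz.Hashable):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type sszMarshaler interface{ MarshalSSZ() ([]byte, error) }
type hashable interface{ HashTreeRoot() ([32]byte, error) }

type result struct {
	version string
	blinded bool
	root    [32]byte
	ssz     []byte
	jsonFn  func() ([]byte, error)
}

// build mirrors the buildBlockResult pattern: hash, marshal SSZ, keep JSON as a lazy fallback.
func build(version string, blinded bool, s sszMarshaler, h hashable, jsonFn func() ([]byte, error)) (*result, error) {
	root, err := h.HashTreeRoot()
	if err != nil {
		return nil, fmt.Errorf("compute block root for %s beacon block: %w", version, err)
	}
	raw, err := s.MarshalSSZ()
	if err != nil {
		return nil, fmt.Errorf("serialize %s beacon block: %w", version, err)
	}
	return &result{version: version, blinded: blinded, root: root, ssz: raw, jsonFn: jsonFn}, nil
}

// toyBlock stands in for a signed beacon block of any fork.
type toyBlock struct{ Slot uint64 }

func (b toyBlock) MarshalSSZ() ([]byte, error)     { return []byte{byte(b.Slot)}, nil }
func (b toyBlock) HashTreeRoot() ([32]byte, error) { return [32]byte{byte(b.Slot)}, nil }

func main() {
	res, err := build("phase0", false, toyBlock{Slot: 1}, toyBlock{Slot: 1}, func() ([]byte, error) {
		return json.Marshal(map[string]uint64{"slot": 1})
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("version=%s root=%x ssz=%x\n", res.version, res.root[:2], res.ssz)
}
```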


@@ -103,13 +103,13 @@ func TestProposeBeaconBlock_SSZ_Error(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
// Expect PostSSZ to be called first with SSZ data // Expect PostSSZ to be called first with SSZ data
headers := map[string]string{ headers := map[string]string{
"Eth-Consensus-Version": testCase.consensusVersion, "Eth-Consensus-Version": testCase.consensusVersion,
} }
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
testCase.endpoint, testCase.endpoint,
headers, headers,
@@ -120,7 +120,7 @@ func TestProposeBeaconBlock_SSZ_Error(t *testing.T) {
// No JSON fallback expected for non-406 errors // No JSON fallback expected for non-406 errors
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.proposeBeaconBlock(ctx, testCase.block) _, err := validatorClient.proposeBeaconBlock(ctx, testCase.block)
assert.ErrorContains(t, testSuite.expectedErrorMessage, err) assert.ErrorContains(t, testSuite.expectedErrorMessage, err)
}) })
@@ -165,13 +165,13 @@ func TestProposeBeaconBlock_SSZSuccess_NoFallback(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
// Expect PostSSZ to be called and succeed // Expect PostSSZ to be called and succeed
headers := map[string]string{ headers := map[string]string{
"Eth-Consensus-Version": testCase.consensusVersion, "Eth-Consensus-Version": testCase.consensusVersion,
} }
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
testCase.endpoint, testCase.endpoint,
headers, headers,
@@ -181,7 +181,7 @@ func TestProposeBeaconBlock_SSZSuccess_NoFallback(t *testing.T) {
).Times(1) ).Times(1)
// Post should NOT be called when PostSSZ succeeds // Post should NOT be called when PostSSZ succeeds
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
gomock.Any(), gomock.Any(),
gomock.Any(), gomock.Any(),
@@ -189,7 +189,7 @@ func TestProposeBeaconBlock_SSZSuccess_NoFallback(t *testing.T) {
gomock.Any(), gomock.Any(),
).Times(0) ).Times(0)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.proposeBeaconBlock(ctx, testCase.block) _, err := validatorClient.proposeBeaconBlock(ctx, testCase.block)
assert.NoError(t, err) assert.NoError(t, err)
}) })
@@ -200,7 +200,7 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
t.Run("deneb", func(t *testing.T) { t.Run("deneb", func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var blockContents structs.SignedBeaconBlockContentsDeneb var blockContents structs.SignedBeaconBlockContentsDeneb
err := json.Unmarshal([]byte(rpctesting.DenebBlockContents), &blockContents) err := json.Unmarshal([]byte(rpctesting.DenebBlockContents), &blockContents)
@@ -211,14 +211,14 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
denebBytes, err := genericSignedBlock.GetDeneb().MarshalSSZ() denebBytes, err := genericSignedBlock.GetDeneb().MarshalSSZ()
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
bytes.NewBuffer(denebBytes), bytes.NewBuffer(denebBytes),
) )
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock) proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, proposeResponse) require.NotNil(t, proposeResponse)
@@ -231,7 +231,7 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
t.Run("blinded_deneb", func(t *testing.T) { t.Run("blinded_deneb", func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var blindedBlock structs.SignedBlindedBeaconBlockDeneb var blindedBlock structs.SignedBlindedBeaconBlockDeneb
err := json.Unmarshal([]byte(rpctesting.BlindedDenebBlock), &blindedBlock) err := json.Unmarshal([]byte(rpctesting.BlindedDenebBlock), &blindedBlock)
@@ -242,14 +242,14 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
blindedDenebBytes, err := genericSignedBlock.GetBlindedDeneb().MarshalSSZ() blindedDenebBytes, err := genericSignedBlock.GetBlindedDeneb().MarshalSSZ()
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blinded_blocks", "/eth/v2/beacon/blinded_blocks",
gomock.Any(), gomock.Any(),
bytes.NewBuffer(blindedDenebBytes), bytes.NewBuffer(blindedDenebBytes),
) )
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock) proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, proposeResponse) require.NotNil(t, proposeResponse)
@@ -262,7 +262,7 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
t.Run("electra", func(t *testing.T) { t.Run("electra", func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var blockContents structs.SignedBeaconBlockContentsElectra var blockContents structs.SignedBeaconBlockContentsElectra
err := json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &blockContents) err := json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &blockContents)
@@ -273,14 +273,14 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
electraBytes, err := genericSignedBlock.GetElectra().MarshalSSZ() electraBytes, err := genericSignedBlock.GetElectra().MarshalSSZ()
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
bytes.NewBuffer(electraBytes), bytes.NewBuffer(electraBytes),
) )
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock) proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, proposeResponse) require.NotNil(t, proposeResponse)
@@ -293,7 +293,7 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
t.Run("blinded_electra", func(t *testing.T) { t.Run("blinded_electra", func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var blindedBlock structs.SignedBlindedBeaconBlockElectra var blindedBlock structs.SignedBlindedBeaconBlockElectra
err := json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &blindedBlock) err := json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &blindedBlock)
@@ -304,14 +304,14 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
blindedElectraBytes, err := genericSignedBlock.GetBlindedElectra().MarshalSSZ() blindedElectraBytes, err := genericSignedBlock.GetBlindedElectra().MarshalSSZ()
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blinded_blocks", "/eth/v2/beacon/blinded_blocks",
gomock.Any(), gomock.Any(),
bytes.NewBuffer(blindedElectraBytes), bytes.NewBuffer(blindedElectraBytes),
) )
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock) proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, proposeResponse) require.NotNil(t, proposeResponse)
@@ -324,7 +324,7 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
t.Run("fulu", func(t *testing.T) { t.Run("fulu", func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var blockContents structs.SignedBeaconBlockContentsFulu var blockContents structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &blockContents) err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &blockContents)
@@ -335,14 +335,14 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
fuluBytes, err := genericSignedBlock.GetFulu().MarshalSSZ() fuluBytes, err := genericSignedBlock.GetFulu().MarshalSSZ()
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blocks", "/eth/v2/beacon/blocks",
gomock.Any(), gomock.Any(),
bytes.NewBuffer(fuluBytes), bytes.NewBuffer(fuluBytes),
) )
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock) proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, proposeResponse) require.NotNil(t, proposeResponse)
@@ -355,7 +355,7 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
t.Run("blinded_fulu", func(t *testing.T) { t.Run("blinded_fulu", func(t *testing.T) {
ctrl := gomock.NewController(t) ctrl := gomock.NewController(t)
defer ctrl.Finish() defer ctrl.Finish()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
var blindedBlock structs.SignedBlindedBeaconBlockFulu var blindedBlock structs.SignedBlindedBeaconBlockFulu
err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &blindedBlock) err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &blindedBlock)
@@ -366,14 +366,14 @@ func TestProposeBeaconBlock_NewerTypes_SSZMarshal(t *testing.T) {
blindedFuluBytes, err := genericSignedBlock.GetBlindedFulu().MarshalSSZ() blindedFuluBytes, err := genericSignedBlock.GetBlindedFulu().MarshalSSZ()
require.NoError(t, err) require.NoError(t, err)
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
"/eth/v2/beacon/blinded_blocks", "/eth/v2/beacon/blinded_blocks",
gomock.Any(), gomock.Any(),
bytes.NewBuffer(blindedFuluBytes), bytes.NewBuffer(blindedFuluBytes),
) )
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock) proposeResponse, err := validatorClient.proposeBeaconBlock(t.Context(), genericSignedBlock)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, proposeResponse) require.NotNil(t, proposeResponse)
@@ -588,10 +588,10 @@ func TestProposeBeaconBlock_SSZFails_406_FallbackToJSON(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
// Expect PostSSZ to be called first and fail // Expect PostSSZ to be called first and fail
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
testCase.endpoint, testCase.endpoint,
gomock.Any(), gomock.Any(),
@@ -603,7 +603,7 @@ func TestProposeBeaconBlock_SSZFails_406_FallbackToJSON(t *testing.T) {
}, },
).Times(1) ).Times(1)
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
testCase.endpoint, testCase.endpoint,
gomock.Any(), gomock.Any(),
@@ -613,13 +613,49 @@ func TestProposeBeaconBlock_SSZFails_406_FallbackToJSON(t *testing.T) {
nil, nil,
).Times(1) ).Times(1)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.proposeBeaconBlock(ctx, testCase.block) _, err := validatorClient.proposeBeaconBlock(ctx, testCase.block)
assert.NoError(t, err) assert.NoError(t, err)
}) })
} }
} }
func TestProposeBeaconBlock_SSZFails_406_JSONFallbackFails(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
ctx := t.Context()
handler := mock.NewMockHandler(ctrl)
handler.EXPECT().PostSSZ(
gomock.Any(),
"/eth/v2/beacon/blocks",
gomock.Any(),
gomock.Any(),
).Return(
nil, nil, &httputil.DefaultJsonError{
Code: http.StatusNotAcceptable,
Message: "SSZ not supported",
},
).Times(1)
handler.EXPECT().Post(
gomock.Any(),
"/eth/v2/beacon/blocks",
gomock.Any(),
gomock.Any(),
nil,
).Return(
errors.New("json fallback failed"),
).Times(1)
validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.proposeBeaconBlock(ctx, &ethpb.GenericSignedBeaconBlock{
Block: generateSignedPhase0Block(),
})
assert.ErrorContains(t, "failed to submit block via JSON fallback", err)
}
func TestProposeBeaconBlock_SSZFails_Non406_NoFallback(t *testing.T) { func TestProposeBeaconBlock_SSZFails_Non406_NoFallback(t *testing.T) {
testCases := []struct { testCases := []struct {
name string name string
@@ -643,13 +679,13 @@ func TestProposeBeaconBlock_SSZFails_Non406_NoFallback(t *testing.T) {
defer ctrl.Finish() defer ctrl.Finish()
ctx := t.Context() ctx := t.Context()
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl) handler := mock.NewMockJsonRestHandler(ctrl)
// Expect PostSSZ to be called first and fail with non-406 error // Expect PostSSZ to be called first and fail with non-406 error
sszHeaders := map[string]string{ sszHeaders := map[string]string{
"Eth-Consensus-Version": testCase.consensusVersion, "Eth-Consensus-Version": testCase.consensusVersion,
} }
jsonRestHandler.EXPECT().PostSSZ( handler.EXPECT().PostSSZ(
gomock.Any(), gomock.Any(),
testCase.endpoint, testCase.endpoint,
sszHeaders, sszHeaders,
@@ -662,7 +698,7 @@ func TestProposeBeaconBlock_SSZFails_Non406_NoFallback(t *testing.T) {
).Times(1) ).Times(1)
// Post should NOT be called for non-406 errors // Post should NOT be called for non-406 errors
jsonRestHandler.EXPECT().Post( handler.EXPECT().Post(
gomock.Any(), gomock.Any(),
gomock.Any(), gomock.Any(),
gomock.Any(), gomock.Any(),
@@ -670,9 +706,47 @@ func TestProposeBeaconBlock_SSZFails_Non406_NoFallback(t *testing.T) {
gomock.Any(), gomock.Any(),
).Times(0) ).Times(0)
validatorClient := &beaconApiValidatorClient{jsonRestHandler: jsonRestHandler} validatorClient := &beaconApiValidatorClient{handler: handler}
_, err := validatorClient.proposeBeaconBlock(ctx, testCase.block) _, err := validatorClient.proposeBeaconBlock(ctx, testCase.block)
require.ErrorContains(t, "Internal server error", err) require.ErrorContains(t, "Internal server error", err)
}) })
} }
} }
type badHashable struct{}
func (badHashable) HashTreeRoot() ([32]byte, error) {
return [32]byte{}, errors.New("hash root error")
}
type badMarshaler struct{}
func (badMarshaler) MarshalSSZ() ([]byte, error) {
return nil, errors.New("marshal ssz error")
}
type okMarshaler struct{}
func (okMarshaler) MarshalSSZ() ([]byte, error) {
return []byte{1, 2, 3}, nil
}
type okHashable struct{}
func (okHashable) HashTreeRoot() ([32]byte, error) {
return [32]byte{1}, nil
}
func TestBuildBlockResult_HashTreeRootError(t *testing.T) {
_, err := buildBlockResult("phase0", false, okMarshaler{}, badHashable{}, func() ([]byte, error) {
return []byte(`{}`), nil
})
assert.ErrorContains(t, "failed to compute block root for phase0 beacon block", err)
}
func TestBuildBlockResult_MarshalSSZError(t *testing.T) {
_, err := buildBlockResult("phase0", false, badMarshaler{}, okHashable{}, func() ([]byte, error) {
return []byte(`{}`), nil
})
assert.ErrorContains(t, "failed to serialize phase0 beacon block", err)
}
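The new JSONFallbackFails and Non406_NoFallback tests pin down the control flow in proposeBeaconBlock: PostSSZ first, fall back to Post(JSON) only when the server answers 406 Not Acceptable, and surface any other error without retrying. A reduced, self-contained version of that decision, using a local error type in place of httputil.DefaultJsonError:

```go
package example

import (
	"errors"
	"fmt"
	"net/http"
)

// apiError stands in for httputil.DefaultJsonError in this sketch.
type apiError struct {
	Code    int
	Message string
}

func (e *apiError) Error() string { return fmt.Sprintf("%d: %s", e.Code, e.Message) }

// submit tries the SSZ endpoint first and only falls back to JSON on 406.
func submit(postSSZ func() error, postJSON func() error) error {
	err := postSSZ()
	if err == nil {
		return nil
	}
	var apiErr *apiError
	if errors.As(err, &apiErr) && apiErr.Code == http.StatusNotAcceptable {
		if jsonErr := postJSON(); jsonErr != nil {
			return fmt.Errorf("failed to submit block via JSON fallback: %w", jsonErr)
		}
		return nil
	}
	// Non-406 errors are returned as-is, with no JSON retry.
	return err
}
```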

Some files were not shown because too many files have changed in this diff.