Compare commits

...

53 Commits

Author SHA1 Message Date
james-prysm
495056625e Validator block v4 (#16594)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Introduces the validator connection point for the REST API to call the block v4
and envelope endpoints.

builds on https://github.com/OffchainLabs/prysm/pull/16488 and
https://github.com/OffchainLabs/prysm/pull/16522

Testing:
```
participants:
  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    supernode: true
    count: 2
    cl_extra_params:
      - --subscribe-all-subnets
      - --verbosity=debug
    vc_extra_params:
      - --enable-beacon-rest-api
      - --verbosity=debug

  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    validator_count: 63
    cl_extra_params:
      - --verbosity=debug
    vc_extra_params:
      - --enable-beacon-rest-api
      - --verbosity=debug

network_params:
  fulu_fork_epoch: 0
  gloas_fork_epoch: 2
  seconds_per_slot: 6
  genesis_delay: 40

additional_services:
  - dora

global_log_level: debug

dora_params:
  image: ethpandaops/dora:gloas-support
  ```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 22:03:57 +00:00
james-prysm
c298c504ef fixing wrong path name in execution payload bid api (#16690)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

fixing wrong path name from /eth/v2/beacon/execution_payload/bid to
/eth/v1/beacon/execution_payload_bid

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-17 21:07:35 +00:00
Manu NALEPA
486e479a99 Prevent expensive state replay when computing sync committees members for the current period (#16688)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
Every `EPOCHS_PER_SYNC_COMMITTEE_PERIOD=256` epochs,
`SYNC_COMMITTEE_SIZE=512` validators are randomly chosen to be part of
the sync committee.

When calling the
[/eth/v1/validator/duties/sync/epoch](https://ethereum.github.io/beacon-APIs/#/Validator/getSyncCommitteeDuties)
endpoint with `epoch` set to the first epoch of the current
period, the Prysm beacon node:
1. Finds the youngest state in the DB before this epoch
2. Replays (expensively) states up to the requested epoch

While this is technically correct, step 2 is very resource-intensive.

This pull request leverages the fact that the `current_sync_committee`
and `next_sync_committee` fields do not change within a period.

==> If the requested epoch and the current epoch are within the same
period, then we can fetch `current_sync_committee` and
`next_sync_committee` from the state corresponding to the current epoch,
which is way less expensive.
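The period check described above reduces to integer division. Here is a minimal sketch of that arithmetic; the function names (`syncCommitteePeriod`, `canUseCurrentState`) are invented for illustration and are not Prysm's actual identifiers.

```go
package main

import "fmt"

// epochsPerSyncCommitteePeriod mirrors EPOCHS_PER_SYNC_COMMITTEE_PERIOD (256 on mainnet).
const epochsPerSyncCommitteePeriod = 256

// syncCommitteePeriod returns the sync committee period containing the epoch.
func syncCommitteePeriod(epoch uint64) uint64 {
	return epoch / epochsPerSyncCommitteePeriod
}

// canUseCurrentState reports whether the requested epoch falls in the same
// period as the current epoch, in which case current_sync_committee and
// next_sync_committee can be read from the current state with no replay.
func canUseCurrentState(requestedEpoch, currentEpoch uint64) bool {
	return syncCommitteePeriod(requestedEpoch) == syncCommitteePeriod(currentEpoch)
}

func main() {
	// Epoch 256 starts period 1; epoch 300 is still in period 1, so no replay is needed.
	fmt.Println(canUseCurrentState(256, 300)) // true
	// Epoch 100 (period 0) vs. epoch 300 (period 1) still requires the slow path.
	fmt.Println(canUseCurrentState(100, 300)) // false
}
```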

**Which issues(s) does this PR fix?**

Fixes:
- https://github.com/OffchainLabs/prysm/issues/16686

**Other notes for review**
Please read commit by commit.

With a Nimbus VC connected:
**Before this PR**
<img width="936" height="308" alt="image"
src="https://github.com/user-attachments/assets/b76f588d-dc95-4916-af93-6ea80b092609"
/>

**After this PR**
<img width="941" height="305" alt="image"
src="https://github.com/user-attachments/assets/65302c90-be33-4525-be5c-a13338335d39"
/>

**Test plan**
Read how to reproduce the issue in the linked issue, and check that the
same reproduction steps do not reproduce the issue with this PR.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-17 14:16:12 +00:00
terence
fbb65f6700 Queue Gloas data column sidecars arriving before their block (#16653)
In Gloas, data column sidecars can arrive via gossip before their block. The
approach here is to queue them instead of dropping them and re-requesting later.

### Design notes
  - Subnet is verified before queuing (reject bad subnet immediately)
- Columns are stored in a fixed-size `[128]` array per block root,
indexed by column index — duplicates for the same index are ignored
- When the block arrives (via `beaconBlockSubscriber` or
`processPendingBlocks`), queued columns are verified against the block's
bid commitments and saved to storage
- Peers that sent columns failing verification (slot mismatch, invalid
sidecar, bad KZG proof) are downscored (PeerID is tracked)
  - A slot ticker prunes entries from past slots every slot boundary 
- RPC column fetch is skipped for blocks that already have pending
gossip columns
- Each Gloas sidecar is ~44 KB at 21 max blobs (21 cells × 2048 bytes +
proofs). 128 columns per block = ~5.5 MB per block root. The slot ticker
ensures at most one slot's worth of pending roots exist at any time.
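The fixed-size-array-per-root and slot-pruning design above can be sketched as follows. The 128-column array and per-slot pruning follow the description; everything else (type and method names, byte-slice payloads) is an illustrative stand-in, not Prysm's actual code.

```go
package main

import "fmt"

// pendingColumns queues gossip columns that arrived before their block:
// one fixed [128] array per block root, indexed by column index.
type pendingColumns struct {
	byRoot map[[32]byte]*[128][]byte // block root -> sidecars by column index
	slots  map[[32]byte]uint64       // block root -> slot, for pruning
}

func newPendingColumns() *pendingColumns {
	return &pendingColumns{
		byRoot: make(map[[32]byte]*[128][]byte),
		slots:  make(map[[32]byte]uint64),
	}
}

// add queues a column; duplicates for the same index are ignored.
func (p *pendingColumns) add(root [32]byte, slot, index uint64, sidecar []byte) {
	if index >= 128 {
		return // bad index, never queued
	}
	cols, ok := p.byRoot[root]
	if !ok {
		cols = new([128][]byte)
		p.byRoot[root] = cols
		p.slots[root] = slot
	}
	if cols[index] == nil {
		cols[index] = sidecar
	}
}

// prune drops every root from a slot earlier than current, as the slot
// ticker does, bounding memory to one slot's worth of pending roots.
func (p *pendingColumns) prune(current uint64) {
	for root, slot := range p.slots {
		if slot < current {
			delete(p.byRoot, root)
			delete(p.slots, root)
		}
	}
}

func main() {
	q := newPendingColumns()
	root := [32]byte{1}
	q.add(root, 10, 3, []byte("first"))
	q.add(root, 10, 3, []byte("dup")) // ignored: index 3 already filled
	fmt.Println(string(q.byRoot[root][3]))
	q.prune(11)
	fmt.Println(len(q.byRoot))
}
```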
2026-04-16 16:36:51 +00:00
Manu NALEPA
f0c7633c87 Pubkey cache: Use map+mutex instead of LRU cache (#16654)
**What type of PR is this?**
Optimization

**What does this PR do? Why is it needed?**
Uncompressing/verifying a (validator) public key from bytes is an
expensive operation.
To avoid doing this operation multiple times for the same public key, a
cache mapping raw, compressed public key to uncompressed, verified ones
is created and populated at node start.

**Before this PR**, an LRU cache with a 2M-entry capacity is used.
The issue with this design is the following:
1. If the cache capacity (2M) is higher than the current count of active
public keys, then keys in this cache are never evicted. ==> Using an LRU
cache is useless. This is the case for all devnets, testnets and
mainnet.
2. If the cache capacity (2M) is lower than the current count of active
public keys, then, because validator attestations and block proposals are
randomly distributed, some keys will be evicted and very shortly after
re-inserted. (The only workload where an LRU would help is sync
committees, which last ~27 hours.) ==> Using an LRU cache is useless.

In both cases 1. and 2., the LRU cache is useless and can be
replaced by a map (plus a mutex for concurrent accesses). Additionally,
compared to a simple map, an LRU cache consumes some extra heap memory.

**After this PR**, the LRU cache is removed and replaced by a map (plus a
mutex for concurrent accesses).
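The map-plus-mutex design can be sketched as below. The type and method names, and the `decompress` stand-in for the expensive key verification, are illustrative assumptions, not Prysm's real implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// pubkeyCache maps a raw compressed public key to its decompressed,
// verified form. Since active keys are effectively never evicted on real
// networks, a plain map guarded by an RWMutex replaces the LRU.
type pubkeyCache struct {
	mu   sync.RWMutex
	keys map[[48]byte]string
}

func newPubkeyCache() *pubkeyCache {
	return &pubkeyCache{keys: make(map[[48]byte]string)}
}

// get returns the cached value, running the expensive decompress at most
// once per compressed key; reads take only the shared lock.
func (c *pubkeyCache) get(raw [48]byte, decompress func([48]byte) string) string {
	c.mu.RLock()
	v, ok := c.keys[raw]
	c.mu.RUnlock()
	if ok {
		return v
	}
	v = decompress(raw) // expensive uncompression/verification
	c.mu.Lock()
	c.keys[raw] = v
	c.mu.Unlock()
	return v
}

func main() {
	calls := 0
	decompress := func(raw [48]byte) string {
		calls++
		return fmt.Sprintf("uncompressed(%d)", raw[0])
	}
	c := newPubkeyCache()
	c.get([48]byte{7}, decompress)
	c.get([48]byte{7}, decompress) // served from the map
	fmt.Println(calls)             // 1
}
```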

**Cache memory usage (source: Hoodi Pyroscope)**
- Before this PR: 222 MB
- After this PR:  158 MB

**==> Gain: 64 MB**

That's not a lot, but it is an easy saving.

**Before this PR**
<img width="940" height="586" alt="image"
src="https://github.com/user-attachments/assets/ef92abf5-e781-44cb-83b0-7db7b52ef371"
/>

**After this PR**
<img width="1018" height="274" alt="image"
src="https://github.com/user-attachments/assets/8261e18c-3edc-4530-a55c-4c9eaa9a6d6c"
/>

<img width="1020" height="922" alt="image"
src="https://github.com/user-attachments/assets/afa07aac-0b28-417b-815e-f00936d04c02"
/>


**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-16 14:14:04 +00:00
Barnabas Busa
321828e775 Fix MaxBuildersPerWithdrawalsSweep in minimal preset (#16623)
## Summary
- The minimal preset was missing the `MaxBuildersPerWithdrawalsSweep`
override, causing it to inherit the mainnet value of `16384` instead of
the correct minimal value of `16`
- This aligns with the [consensus-specs minimal gloas
preset](https://github.com/ethereum/consensus-specs/blob/master/presets/minimal/gloas.yaml#L23)

## Test plan
- [x] `go build ./config/params/` passes
- [ ] Verify minimal preset spec tests pass with the corrected value

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-14 20:19:15 +00:00
james-prysm
e2ffb42abe allow proposer preferences on the same epoch (#16610)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

based on https://github.com/ethereum/consensus-specs/pull/5035

Adds capabilities on the validator client side to submit current-epoch
attestations:
- for current-epoch submissions, we skip slot 0 and start at slot 1; we
also need a one-slot buffer if we start the validator client mid-epoch
and need to propose during that epoch.
- current-epoch submissions don't fire in Fulu; only next-epoch
submissions fire for Gloas at the last epoch of Fulu before Gloas.
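One plausible reading of the slot rules above, purely as a sketch: skip slot 0 of the epoch, and add a one-slot buffer after a mid-epoch start. The function name and parameters are invented for illustration; this is not Prysm's actual logic.

```go
package main

import "fmt"

// firstSubmissionSlot sketches one reading of the slot rules: current-epoch
// submissions skip slot 0 and start at slot 1, and a validator client
// started mid-epoch waits one extra slot before submitting.
func firstSubmissionSlot(epochStartSlot, startedAtSlot uint64) uint64 {
	earliest := epochStartSlot + 1 // skip slot 0, start at slot 1
	if startedAtSlot > earliest {
		earliest = startedAtSlot + 1 // one-slot buffer after a mid-epoch start
	}
	return earliest
}

func main() {
	fmt.Println(firstSubmissionSlot(64, 0))  // started before the epoch: slot 65
	fmt.Println(firstSubmissionSlot(64, 70)) // started mid-epoch: slot 71
}
```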

Kurtosis test by cherry-picking the changes onto epbs-devnet-1 and running:
```
participants:
  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    supernode: true
    count: 4
    vc_extra_params:
      - "--verbosity=debug"

network_params:
  fulu_fork_epoch: 0
  gloas_fork_epoch: 2
  seconds_per_slot: 6
  genesis_delay: 40

additional_services:
  - dora

global_log_level: debug

dora_params:
  image: ethpandaops/dora:gloas-support
```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 19:45:40 +00:00
james-prysm
d1bb9018d3 reversing checkpoint api change (#16660)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

https://github.com/OffchainLabs/prysm/pull/16635 will break clients'
checkpoint sync if the first slot of the epoch is missed; it was
added to resolve some changes in Gloas.

With https://github.com/ethereum/consensus-specs/pull/5094 we will be
able to keep the old approach for the checkpoint sync endpoints, so the
previous PR (#16635) is no longer needed.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 19:03:51 +00:00
james-prysm
8c70e4bbb1 implementing envelope rest apis (#16522)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Adds:

- GET /eth/v1/validator/execution_payload_envelope/{slot} endpoint
- POST /eth/v1/beacon/execution_payload_envelope endpoint

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <jhe@offchainlabs.com>
2026-04-14 17:31:40 +00:00
Rupam
c004abc89d return 404 for requests that ask for pre checkpoint sync state (#16615)

**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

- Prevents invalid/time-expensive replay for unavailable historical
slots by returning Not Found instead of trying to replay from genesis.
- Handles replay no-data errors as Not Found, so missing historical data
no longer surfaces as Internal Server Error in HTTP paths (this might
not be required; I just added it for consistency, let me know if I
should remove it)
- Adds unit tests for:
i) slot earlier than earliest available
ii) slot before backfill low slot
iii) replay no-data mapping to not-found
iv) shared HTTP error mapping for no-data to 404

**Which issues(s) does this PR fix?**

Addresses #16191

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2026-04-14 15:30:20 +00:00
Aliz Fara
85316c5d16 Fix event subscription timeout handling (#16681)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

This excludes `/eth/v1/events` from the global `http.TimeoutHandler`
while keeping the timeout behavior for other HTTP routes unchanged.

When `--api-timeout` is set, the timeout wrapper is incompatible with
Prysm's SSE event stream handling and can return `200 OK` with an empty
response body instead of keeping the stream open.

**Which issues(s) does this PR fix?**

Fixes #15710

**Other notes for review**

- Adds focused coverage for the SSE bypass and the unchanged timeout
behavior for non-SSE routes.
- Stabilizes `TestServer_StartStop` by replacing a goroutine log
assertion with `require.Eventually`.
- Validation used during review:
`go test ./api/server/httprest -run 'TestServer_TimeoutHandler' -count=1
-v`

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 14:45:03 +00:00
james-prysm
9069afc6d0 Get block v4 (#16488)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Implements the new GET /eth/v4/validator/blocks/{slot} endpoint; we
don't hook the validator client up to use it for post-Gloas yet in this
PR.

**Which issues(s) does this PR fix?**

Fixes https://github.com/ethereum/beacon-APIs/pull/580

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 03:23:42 +00:00
Sahil Sojitra
9802242cfe perf(auth): optimize auth token handling #15763 (#15793)

**What type of PR is this?**
> Other

**What does this PR do? Why is it needed?**
This PR applies micro-optimizations to the auth token handling code. It
improves efficiency and readability by reducing unnecessary allocations
and adding an early length check before performing constant-time
comparison. [PR#15763](https://github.com/OffchainLabs/prysm/pull/15763)

**Changes Included**
- Use `strings.HasPrefix` + slicing instead of `strings.Split` to avoid
allocations
- Add early length check before `subtle.ConstantTimeCompare`
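Both changes above can be sketched together; the function names are illustrative, not the PR's actual identifiers. Note the early length check only skips work: `subtle.ConstantTimeCompare` already returns 0 for unequal lengths, and token length is not a secret here.

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"strings"
)

// extractBearer pulls the token from an Authorization header using
// strings.HasPrefix plus slicing, avoiding the slice allocation that
// strings.Split would make.
func extractBearer(header string) (string, bool) {
	const prefix = "Bearer "
	if !strings.HasPrefix(header, prefix) {
		return "", false
	}
	return header[len(prefix):], true
}

// tokensEqual rejects mismatched lengths cheaply before running the
// constant-time byte comparison over the full token.
func tokensEqual(got, want string) bool {
	if len(got) != len(want) {
		return false
	}
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}

func main() {
	tok, ok := extractBearer("Bearer s3cret")
	fmt.Println(tok, ok)                    // s3cret true
	fmt.Println(tokensEqual(tok, "s3cret")) // true
	fmt.Println(tokensEqual(tok, "other"))  // false
}
```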

**Which issues(s) does this PR fix?**
No functional changes or security fixes; this PR improves performance
and code clarity in the auth token handling logic.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: maradini77 <140460067+maradini77@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-13 18:20:38 +00:00
Alleysira
e75166cfe7 Fix blob index bounds check (#16640)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**

This PR addresses a runtime panic caused by a missing bounds check on
blob indices. I've implemented a fix and would like to hear from you.
Thanks!

##### What does this PR do?
- Add a bound check for `blob.Index`.
- Add 3 tests to show that without the fix prysm will panic.

##### Why is it needed?

`BlobAlignsWithBlock` accesses `commits[blob.Index]` without checking
that `blob.Index < len(commits)`. It only checks `blob.Index <
MaxBlobsPerBlock` (the spec-wide maximum, e.g. 6 for Deneb). If a blob
has an index that passes the spec max check but exceeds the actual
number of commitments in the block, the code panics with an
index-out-of-range runtime error. The added tests confirmed this.
Fortunately, I think this is unreachable because all callers validate
blob indices upstream:
- `blobValidatorFromRootReq` rejects blobs whose index wasn't in the
request
- `newSequentialBlobValidator` enforces strictly sequential indices (0,
1, 2, ...)
- `requestsForMissingIndices` only generates indices 0..len(commits)-1
                                                                      
The fix adds an explicit bounds check as defense-in-depth, so that if a
future caller bypasses upstream validation, the function returns
ErrIncorrectBlobIndex instead of panicking.
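The defense-in-depth check can be sketched as below: validate the index against both the spec-wide maximum and the block's actual commitment count before indexing. Function and variable names here are illustrative, not Prysm's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

var errIncorrectBlobIndex = errors.New("incorrect blob index")

// commitmentFor returns the commitment at index only after both bounds
// checks pass; the len(commits) check is the one the fix adds.
func commitmentFor(commits [][]byte, index, maxBlobsPerBlock uint64) ([]byte, error) {
	if index >= maxBlobsPerBlock {
		return nil, errIncorrectBlobIndex // spec-wide maximum (pre-existing check)
	}
	if index >= uint64(len(commits)) {
		return nil, errIncorrectBlobIndex // block's actual count (previously missing)
	}
	return commits[index], nil
}

func main() {
	commits := [][]byte{{1}, {2}, {3}} // a block with 3 commitments
	_, err := commitmentFor(commits, 3, 6)
	fmt.Println(err) // passes the spec max (6) but exceeds the block's count
	c, _ := commitmentFor(commits, 2, 6)
	fmt.Println(c[0]) // 3
}
```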

Three test functions are added to `blob_test.go`:
- `TestBlobAlignsWithBlock_OOBIndexReturnsError`: blob with `index >=
len(commits)` but `< MaxBlobsPerBlock` returns ErrIncorrectBlobIndex.
Without this fix, this test panics.
- `TestBlobAlignsWithBlock_MaxIndexEdge`: boundary test confirming the
last valid index succeeds and the first OOB index errors.
- `TestBlobAlignsWithBlock_AllValidIndicesSucceed`: all indices in
0..nCommitments-1 succeed without error or panic.

Without the fix:
```bash
$ go test ./beacon-chain/sync/verify/ -v -count=1
=== RUN   TestBlobAlignsWithBlock
=== RUN   TestBlobAlignsWithBlock/happy_path_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0
=== RUN   TestBlobAlignsWithBlock/before_deneb_blob_0
--- PASS: TestBlobAlignsWithBlock (0.00s)
    --- PASS: TestBlobAlignsWithBlock/happy_path_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/before_deneb_blob_0 (0.00s)
=== RUN   TestBlobAlignsWithBlock_OOBIndexReturnsError
--- FAIL: TestBlobAlignsWithBlock_OOBIndexReturnsError (0.00s)
panic: runtime error: index out of range [3] with length 3 [recovered, repanicked]

goroutine 116 [running]:
testing.tRunner.func1.2({0x12d1aa0, 0xc000363818})
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1872 +0x237
testing.tRunner.func1()
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1875 +0x35b
panic({0x12d1aa0?, 0xc000363818?})
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/runtime/panic.go:783 +0x132
github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify.BlobAlignsWithBlock({0xc00030de60, {0xf, 0x74, 0x6b, 0x28, 0xa, 0x5a, 0x94, 0xdd, 0x55, ...}}, ...)
        /home/alleysira/pr/prysm/beacon-chain/sync/verify/blob.go:44 +0x5b0
github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify.TestBlobAlignsWithBlock_OOBIndexReturnsError(0xc0003a2540)
        /home/alleysira/pr/prysm/beacon-chain/sync/verify/blob_test.go:109 +0x557
testing.tRunner(0xc0003a2540, 0x14b91f8)
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1934 +0xea
created by testing.(*T).Run in goroutine 1
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1997 +0x465
FAIL    github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify       0.016s
FAIL
```
With the fix, the tests pass:
```shell
go test ./beacon-chain/sync/verify/ -v -count=1
=== RUN   TestBlobAlignsWithBlock
=== RUN   TestBlobAlignsWithBlock/happy_path_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0
=== RUN   TestBlobAlignsWithBlock/before_deneb_blob_0
--- PASS: TestBlobAlignsWithBlock (0.00s)
    --- PASS: TestBlobAlignsWithBlock/happy_path_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/before_deneb_blob_0 (0.00s)
=== RUN   TestBlobAlignsWithBlock_OOBIndexReturnsError
--- PASS: TestBlobAlignsWithBlock_OOBIndexReturnsError (0.00s)
=== RUN   TestBlobAlignsWithBlock_MaxIndexEdge
--- PASS: TestBlobAlignsWithBlock_MaxIndexEdge (0.00s)
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_0
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_1
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_2
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_3
--- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_1 (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_2 (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_3 (0.00s)
PASS
ok      github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify       0.017s
```

**Which issues(s) does this PR fix?**

None.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-13 17:25:05 +00:00
satushh
8864484230 Fix processBatchedBlocks returning pre-filter block count (#16657)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Moves the `bwbCount` assignment **after** the `validUnprocessed` filtering so
peer scoring only credits blocks that were actually processed.
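The reordering can be sketched as follows; the names mirror the description (`validUnprocessed`) but the function and types are illustrative, not Prysm's actual sync code.

```go
package main

import "fmt"

// creditedBlocks filters out already-processed blocks first and only then
// counts, so the number credited to the peer reflects work actually done.
func creditedBlocks(batch []uint64, alreadyProcessed map[uint64]bool) int {
	validUnprocessed := make([]uint64, 0, len(batch))
	for _, slot := range batch {
		if !alreadyProcessed[slot] {
			validUnprocessed = append(validUnprocessed, slot)
		}
	}
	// The bug was taking the count from batch here, before filtering.
	return len(validUnprocessed)
}

func main() {
	batch := []uint64{10, 11, 12, 13}
	seen := map[uint64]bool{10: true, 11: true}
	fmt.Println(creditedBlocks(batch, seen)) // 2, not 4
}
```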

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-13 14:19:02 +00:00
terence
5bb13408d5 Use fork-aware deserialization for data columns from disk (#16650)
- `VerifiedRODataColumnFromDisk` now takes an epoch parameter
- Columns at or after the Gloas fork epoch unmarshal as
`DataColumnSidecarGloas`
- Earlier columns continue to unmarshal as `DataColumnSidecar` (Fulu)
- Previously all columns used the Fulu type, which fails for Gloas
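The fork-aware branch reduces to an epoch comparison. A minimal sketch, with the fork epoch value and the string stand-ins for the protobuf types chosen purely for illustration:

```go
package main

import "fmt"

// gloasForkEpoch is an illustrative value; real networks configure it.
const gloasForkEpoch = 512

// columnTypeForEpoch names the sidecar type a column deserializes as:
// Gloas at or after the fork epoch, Fulu before it.
func columnTypeForEpoch(epoch uint64) string {
	if epoch >= gloasForkEpoch {
		return "DataColumnSidecarGloas"
	}
	return "DataColumnSidecar" // Fulu
}

func main() {
	fmt.Println(columnTypeForEpoch(511)) // DataColumnSidecar
	fmt.Println(columnTypeForEpoch(512)) // DataColumnSidecarGloas
}
```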
2026-04-10 20:48:11 +00:00
Potuz
99327d7422 Fix initial sync bid validation failure (#16652)
During initial sync, state replay skips the last block's execution
payload envelope (no next block to verify delivery). When the parent
envelope was already saved by a previous batch, envelopesForBlocks
skipped it as "already processed", leaving getBatchPrestate unable to
apply it. This caused LatestBlockHash to be stale, failing bid
validation on the next block.

Two fixes:
- envelopesForBlocks: always include the parent envelope even if
persisted
- getBatchPrestate: when parent envelope is in DB, load and apply the
blinded form instead of the broken StateByRootInitialSync(env.BlockHash)
call that passed an execution hash where a beacon block root was
expected

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-10 19:02:37 +00:00
terence
527863e9de Set bid KZG commitments on gloas's data columns (#16649)
- Set bid commitments on verified columns during gossip validation
(`validateDataColumnGloas`)
- Add `setBidCommitments` helper for the RPC fetch path
(`FetchDataColumnSidecars`)
- Track `commitmentsByRoot` on `DataColumnSidecarsParams` so peer
fetches can set commitments before verification
2026-04-10 16:53:10 +00:00
terence
de34b4dfae Skip inclusion proof verification for Gloas data columns (#16647)
- Gloas data column sidecars don't carry block headers or inclusion
proofs
  - Skip `VerifyDataColumnSidecarInclusionProof` for Gloas sidecars
- Skip Gloas columns in the batch verifier `SidecarInclusionProven` loop
2026-04-10 15:18:48 +00:00
terence
ccf61fb91b Decode Gloas data columns using correct protobuf type over RPC (#16648)
- `readChunkedDataColumnSidecar` now branches on fork version
- Gloas columns decode as `DataColumnSidecarGloas`, Fulu as
`DataColumnSidecar`
- Previously all RPC columns decoded as Fulu, causing failures when
non-proposer nodes fetched Gloas columns from peers
2026-04-10 14:38:47 +00:00
terence
e9fdeee7bb Add missing fields to Gloas genesis block bid (#16646)
- The Gloas genesis block's `SignedExecutionPayloadBid` was missing
`PrevRandao` (32 bytes) and `FeeRecipient` (20 bytes)
  - This caused SSZ marshaling failures at genesis
2026-04-10 14:25:44 +00:00
satushh
9da54ce816 Fix package-level logger mutation (#16645)
**What type of PR is this?**

Bug Fix

**What does this PR do? Why is it needed?**

- Fix package-level logger mutation in initial-sync Resync and validator
proposer GetBlock.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-09 14:40:44 +00:00
terence
ee9fa34b30 Fix Gloas data column KZG commitments for operation feed (#16643)
- Fix `WARN sync: Failed to get KZG commitments for operation feed
error=data column sidecar is not a fulu type` spam on Gloas devnet
- Gloas data column sidecars don't carry KZG commitments; they live in
the block's execution payload bid. Added `bidCommitmentsGloas` to
`RODataColumn` and populated it in `validateDataColumnGloas` using the
block already fetched from the DB
- We want two things: 1) no extra DB lookups, and 2) no function
signature changes

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-08 22:01:17 +00:00
terence
cf09469ac9 Add Gloas engine API method versions for devnet (#16642)
- Wire up `engine_newPayloadV5` for Gloas, using it based on slot.
Reuses `ExecutionPayloadDeneb` with
  execution requests (same params as V4, just version bump).
- Add Gloas to `engine_forkchoiceUpdatedV3` and `engine_getPayloadV5`
(shared with fulu).
- Add slot parameter to `NewPayload` interface so the engine client can
select V4 vs V5 at the fork boundary. (this will change later!)
- Add GloasEnabled() config helper and gloasEngineEndpoints for
capability exchange.
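The slot-based method selection can be illustrated with a minimal sketch; `slotsPerEpoch`, `gloasForkEpoch`, and the `newPayloadVersion` helper are illustrative values and names, not the real config or engine-client API.

```go
package main

import "fmt"

const slotsPerEpoch = 32

// Hypothetical fork epoch; the real value comes from the beacon chain config.
const gloasForkEpoch = 2

// newPayloadVersion mirrors the idea of selecting engine_newPayloadV4 vs V5
// from the slot: slots at or after the Gloas fork use V5, earlier slots V4.
func newPayloadVersion(slot uint64) int {
	if slot/slotsPerEpoch >= gloasForkEpoch {
		return 5
	}
	return 4
}

func main() {
	// Last pre-fork slot vs first Gloas slot.
	fmt.Println(newPayloadVersion(63), newPayloadVersion(64)) // 4 5
}
```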
2026-04-08 20:02:38 +00:00
Potuz
f3dfcbab2a Use proposer preferences cache for payload attributes after Gloas (#16620)
## Summary
- Adds `ProposerPreferencesCache` to the blockchain service so
`trackedProposer()` can use Gloas gossip preferences (fee recipient, gas
limit) when constructing payload attributes for FCU
- When `PrepareAllPayloads` is enabled, checks the preferences cache
first, falling back to the default burn address
- When a validator is tracked, checks the preferences cache to override
the tracked validator's fee recipient
- Adds `GasLimit` field to `TrackedValidator` struct, populated from
proposer preferences

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:35:24 +00:00
terence
10cd675793 Construct data column sidecars from bid in Gloas blocks (#16638)
- Add `PopulateFromBid` as a new `ConstructionPopulator` that extracts
KZG commitments directly from the execution payload bid in Gloas (ePBS)
blocks
- In Gloas, the execution payload arrives separately via the payload
envelope, but the bid's KZG commitments are available in the block
immediately — this allows data column sidecars to be constructed from
the EL (`engine_getBlobsV2`) as soon as the block arrives, without
waiting for the envelope
- Wire `PopulateFromBid` into `processSidecarsFromExecutionFromBlock`
for Gloas blocks, replacing the previous early return that skipped EL
reconstruction entirely

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-08 13:38:22 +00:00
terence
f01575e44c Fix initial sync envelope validation for genesis blocks (#16637)
- Fixes initial sync failing with envelope does not match block when
syncing a Gloas chain from genesis
- Genesis (slot 0) has no separate execution payload envelope, its
execution block hash is embedded in the genesis state. The validation
loop incorrectly tried to match this hash transition against an
envelope, which always failed
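A rough sketch of the fixed validation loop, with hypothetical `block` and envelope shapes (the real code operates on beacon blocks and execution payload envelopes):

```go
package main

import "fmt"

type block struct {
	slot      uint64
	blockHash string
}

// matchEnvelopes pairs each non-genesis block with an envelope hash.
// Slot 0 is skipped: its execution block hash is embedded in the genesis
// state, and there is no separate envelope to match against.
func matchEnvelopes(blocks []block, envelopes map[uint64]string) error {
	for _, b := range blocks {
		if b.slot == 0 {
			continue // genesis: no envelope exists, nothing to validate
		}
		if envelopes[b.slot] != b.blockHash {
			return fmt.Errorf("envelope does not match block at slot %d", b.slot)
		}
	}
	return nil
}

func main() {
	blocks := []block{{0, "genesis"}, {1, "0xaa"}}
	fmt.Println(matchEnvelopes(blocks, map[uint64]string{1: "0xaa"})) // <nil>
}
```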
2026-04-08 03:08:13 +00:00
satushh
129d6e1088 Fix swapped JSON tags in ChainReorgEvent struct (#16639)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

The JSON tags in the `ChainReorgEvent` struct were swapped.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-07 11:41:15 +00:00
terence
883d78221f Use ValidatorIndex type for proposer_lookahead in beacon state proto (#16634)
- Adds `cast_type` annotation to `proposer_lookahead` field in
`BeaconStateFulu` and `BeaconStateGloas` protobuf definitions to use
`primitives.ValidatorIndex` instead of raw `uint64`
- Matches the spec type `Vector[ValidatorIndex, ...]` and is consistent
with how `ptc_window` already uses `cast_type` for its validator indices
- Updates `InitializeProposerLookahead` to return
`[]primitives.ValidatorIndex` directly, removing all `uint64` conversion
boilerplate
- Adds `proposerLookaheadVal()` copy method for `ToProto` consistency
with other slice fields
2026-04-07 02:42:24 +00:00
Manu NALEPA
6e4d7fd781 ProcessEffectiveBalanceUpdates: Avoid copying a validator when the computed effective balance is unchanged. (#16631)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
In (electra) `ProcessEffectiveBalanceUpdates`, a `0x00...` validator
with a balance > 33.25 ETH enters the
```go
if balance+downwardThreshold < val.EffectiveBalance() || val.EffectiveBalance()+upwardThreshold < balance {
...
}
```

condition.

The validator is copied (`newVal = val.Copy()`) and returned, **even if
the effective balance did not actually change**.
As a consequence, this validator is considered "dirty" in the
validators field trie, and all the corresponding branches are
re-computed when computing the hash tree root of the validators field
trie of the beacon state.

This PR adopts the same behavior as before Electra:

6f437b561a/beacon-chain/core/electra/effective_balance_updates.go (L32-L63)

and copies/considers dirty the validator only if its effective balance
changed.

**Which issues(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16630

**Other notes for review**
The first commit only introduces a new metric showing the issue.
The second commit actually solves the issue.

**Before this PR:**
**~9,400 validators** considered dirty on mainnet every epoch

<img width="942" height="308" alt="image"
src="https://github.com/user-attachments/assets/31da9c92-aa0f-4d71-a402-92ed62738803"
/>


**After this PR:**
**~15 validators** considered dirty on mainnet every epoch (a ~620x
reduction).

<img width="946" height="312" alt="image"
src="https://github.com/user-attachments/assets/afc5e72a-ccda-4636-87d9-dab2fbbf5c1c"
/>


**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-06 16:29:31 +00:00
terence
14f5e6f414 Downgrade genesis forkchoice balance underflow warning to debug (#16633)
- Demotes the "node with invalid balance, setting it to zero" warning to
DEBUG level for genesis nodes in forkchoice
- Non-genesis blocks retain the WARN level, since underflow there
indicates a real bug
- This is required for the Gloas e2e test because it is designed to fail
on any warning or error
2026-04-06 14:51:24 +00:00
Potuz
9d084bceb3 Fix finalized and justified state endpoint to not advance the slot (#16635)
Use StateByRoot with the checkpoint root instead of replaying to the
epoch start slot. The previous approach incorrectly advanced the state
beyond the checkpoint block's post-state.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 20:12:21 +00:00
terence
f79d2efc6e Fix zero head block hash in FCU at gloas genesis (#16629)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-03 04:06:34 +00:00
terence
9dba7c5319 Check pending deposits before applying builder deposits (#16532)
Add `IsPendingValidator` check to `processDepositRequest` so that
deposit requests with builder credentials are routed to the validator
pending queue when a pending deposit with a valid signature already
exists for the same pubkey
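A minimal sketch of the routing decision, with a hypothetical `state` and `routeDepositRequest` standing in for the real `processDepositRequest` logic:

```go
package main

import "fmt"

// state is a hypothetical stand-in for the beacon state's pending-deposit
// view; isPendingValidator mirrors the IsPendingValidator check.
type state struct {
	pendingPubkeys map[string]bool
}

func (s *state) isPendingValidator(pubkey string) bool {
	return s.pendingPubkeys[pubkey]
}

// routeDepositRequest sends a builder-credential deposit to the validator
// pending queue when a pending deposit with a valid signature already
// exists for the same pubkey, instead of applying it as a builder deposit.
func routeDepositRequest(s *state, pubkey string, builderCreds bool) string {
	if builderCreds && s.isPendingValidator(pubkey) {
		return "pending-queue"
	}
	if builderCreds {
		return "builder-deposit"
	}
	return "pending-queue"
}

func main() {
	s := &state{pendingPubkeys: map[string]bool{"0xabc": true}}
	fmt.Println(routeDepositRequest(s, "0xabc", true)) // pending-queue
}
```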

Also updated spec test to alpha3 so we can merge this with green CI/CD

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-02 21:23:21 +00:00
terence
c02c057b7d core: implement cached PTC window in state (#16573)
This adds the PTC cache to the Gloas proto and native beacon state,
updates SSZ/hash-tree-root handling, initializes the cache on Gloas
upgrade, rotates it during epoch processing, and switches payload
committee lookups to read from the cached window instead of recomputing
PTC assignments on demand

Reference: https://github.com/ethereum/consensus-specs/pull/4979
2026-04-02 17:56:02 +00:00
terence
b6ec6a8eec fix: add Gloas genesis block support (#16627)
## Summary

- Add missing `*ethpb.BeaconStateGloas` case to
`NewGenesisBlockForState` type switch
- Create `gloasGenesisBlock()` with the correct Gloas block body
structure (`SignedExecutionPayloadBid` + `PayloadAttestations`)

Fixes the `unknown underlying type for state.BeaconState value` error
when starting a node from a Gloas genesis state.
2026-04-02 17:05:20 +00:00
terence
3ca8c3ba35 Support gloas blob protobuf for readonly (#16618)
- Refactor `RODataColumn` to support both Fulu and Gloas data column
sidecar protobuf types. Fulu-only accessors now return errors instead of
zero values when called on Gloas sidecars
- Wire up Gloas `DataColumnSidecarGloas` across gossip topic mappings,
pubsub decoding, validation, and RPC serving
  - Gloas duplicate check uses `(block_root, index)` per spec
- Precompute and broadcast Gloas data column sidecars during block
proposal, before the execution payload envelope, so receivers pass data
availability checks
- Fix `WriteDataColumnSidecarChunk` to encode the correct SSZ type per
fork
2026-04-02 15:44:41 +00:00
terence
1092c7135f Refactor gloas process_execution_payload into distinct entry points (#16600)
## Summary

- Decompose `process_execution_payload` into four explicit entry points,
one per caller
- Extract shared helpers: `cacheLatestBlockHeaderStateRoot`,
`setLatestBlockHeaderStateRoot`, `validatePayloadConsistency`,
`verifyPostStateRoot`
- Unexport package-internal functions:
`applyExecutionPayloadStateMutations`,
`verifyExecutionPayloadEnvelopeSignature`
- Rename `ApplyBlindedExecutionPayloadEnvelopeForStateGen` →
`ProcessBlindedExecutionPayload`
- Rename `ApplyExecutionPayloadNoVerifySig` →
`ProcessExecutionPayloadWithDeferredSig`

## Motivation

The previous code routed all callers through `ApplyExecutionPayload`,
which tried to serve every path at once. Each caller's assumptions were
implicit rather than visible in the code. Now each entry point reads
top-to-bottom as exactly the steps that path requires.

## Entry points

| Step | `ProcessExecutionPayload` | `ProcessExecutionPayloadWithDeferredSig` | `ApplyExecutionPayload` | `ProcessBlindedExecutionPayload` |
|---|---|---|---|---|
| **Caller** | gossip | init-sync | proposer | stategen / replay |
| **Verify signature** | inline | 🔶 deferred (`SignatureBatch`) | | |
| **Patch header state root** | `cacheLatestBlockHeaderStateRoot` (computes HTR) | `setLatestBlockHeaderStateRoot` (caller-provided) | `cacheLatestBlockHeaderStateRoot` (computes HTR) | `setLatestBlockHeaderStateRoot` (caller-provided) |
| **Validate consistency** | `validatePayloadConsistency` (full) | `validatePayloadConsistency` (full) | `validatePayloadConsistency` (probably can be removed but outside the scope) | minimal bid checks (builder index + block hash) |
| **Verify post-state root** | `verifyPostStateRoot` | `verifyPostStateRoot` | (caller computes it) | (trusted) |
| **Envelope type** | `ROSignedExecutionPayloadEnvelope` | `ROSignedExecutionPayloadEnvelope` | `ROExecutionPayloadEnvelope` | `ROBlindedExecutionPayloadEnvelope` |
2026-04-02 09:32:11 +00:00
james-prysm
73033a9d67 allowing ptc duties for next epoch on grpc endpoint (#16608)
**What type of PR is this?**

 Bug fix


**What does this PR do? Why is it needed?**

Adds the next-epoch fix for the gRPC endpoint (currently unused).

https://github.com/OffchainLabs/prysm/pull/16591

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-02 07:46:38 +00:00
james-prysm
a7b83c358a Pre fork proposer preferences (#16588)
**What type of PR is this?**

 Bug fix


**What does this PR do? Why is it needed?**

context https://github.com/ethereum/consensus-specs/pull/4947

This PR allows the proposer preferences topic to be subscribed to one
epoch before Gloas, and allows proposer preferences to be published
before the Gloas fork. The digest used is still the Gloas digest despite
being in the Fulu fork.

The validator client submits mid-epoch when it is one epoch away from
the fork, to avoid races
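The one-epoch-early behavior can be sketched as below; the fork epoch and digest string are illustrative placeholders for the real config and fork digest computation:

```go
package main

import "fmt"

// Hypothetical fork epoch; the real value comes from the beacon chain config.
const gloasForkEpoch = 2

// subscribeProposerPreferences sketches the pre-fork behavior: the topic is
// joined one epoch before the Gloas fork, and the digest used is always the
// Gloas digest, even while still in the Fulu fork.
func subscribeProposerPreferences(currentEpoch uint64) (subscribed bool, digest string) {
	if currentEpoch+1 >= gloasForkEpoch {
		return true, "gloas-digest" // placeholder for the real Gloas fork digest
	}
	return false, ""
}

func main() {
	sub, digest := subscribeProposerPreferences(1) // one epoch before the fork
	fmt.Println(sub, digest)                       // true gloas-digest
}
```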

tested in kurtosis
```
participants:
  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    supernode: true
    count: 2

network_params:
  fulu_fork_epoch: 0
  gloas_fork_epoch: 2
  seconds_per_slot: 6
  genesis_delay: 40

additional_services:
  - dora

global_log_level: debug

dora_params:
  image: ethpandaops/dora:gloas-support

```

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16587

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-02 02:50:47 +00:00
Potuz
29a0fd6760 Add gRPC endpoint to submit signed execution payload bids (#16614)
## Summary
- Adds `SubmitSignedExecutionPayloadBid` RPC to the
`BeaconNodeValidator` gRPC service
- Broadcasts the signed bid to the P2P gossip network
- Updates all interface implementations (gRPC client, beacon-API
client), and mocks

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-01 20:24:23 +00:00
satushh
209e46bab7 PTC duties no longer computed from a pre-Gloas state at the Fulu to Gloas fork boundary (#16619)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

At the fork boundary, the PTC was computed from a pre-Gloas state. This
PR adds a state version check to avoid doing that.

Added extra tests to cover it. 

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-01 19:14:46 +00:00
Preston Van Loon
108e2806cb Fastssz update to allow generics (#16628)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Incorporates fastssz PR 19:
https://github.com/OffchainLabs/fastssz/pull/19.

**Which issues(s) does this PR fix?**

This allows for use of []primitives.ValidatorIndex in ssz code
generation.

**Other notes for review**

Most of the diff is regenerating ssz.go files. Review go.mod and
deps.bzl carefully.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-01 20:44:08 +00:00
Potuz
68c4c36e65 Add beacon API endpoint to publish signed execution payload bids (#16612)
## Summary
- Adds `POST /eth/v2/beacon/execution_payload/bid` beacon API endpoint
- Accepts `SignedExecutionPayloadBid` as JSON or SSZ (Content-Type
based)
- Broadcasts the bid to the P2P gossip network; the existing gossip
subscriber handles caching

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 01:23:53 +00:00
Potuz
67cc68c3bb Potuz/replay post gloas (#16598)
Fix state replay failing on post-CL ancestor states for Gloas blocks

When replaying blocks from an ancestor state that is post-CL (before
execution payload delivery), the first block's bid validation fails
because state.latestBlockHash hasn't been updated with the ancestor's
delivered payload hash. This causes "bid parent block hash mismatch"
errors during forkchoice setup on node restart.

Before the replay loop, check if the first block's bid parentBlockHash
differs from state.latestBlockHash. If so, load and apply the ancestor's
execution payload envelope from DB to bring the state to post-EL.
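A sketch of the pre-replay check, with hypothetical `replayState` and `bid` shapes standing in for the real beacon state and execution payload bid:

```go
package main

import "fmt"

// replayState is a stand-in for the ancestor beacon state; latestBlockHash
// mirrors state.latestBlockHash.
type replayState struct{ latestBlockHash string }

// bid is a stand-in for the first block's execution payload bid.
type bid struct{ parentBlockHash string }

// prepareReplay applies the ancestor's stored payload envelope before the
// replay loop when the first block's bid doesn't build on the state's
// latest block hash, bringing the state to post-EL.
func prepareReplay(st *replayState, first bid, loadEnvelopeHash func() string) {
	if first.parentBlockHash != st.latestBlockHash {
		// Load and apply the ancestor's execution payload envelope from DB.
		st.latestBlockHash = loadEnvelopeHash()
	}
}

func main() {
	st := &replayState{latestBlockHash: "0xold"}
	prepareReplay(st, bid{parentBlockHash: "0xnew"}, func() string { return "0xnew" })
	fmt.Println(st.latestBlockHash) // 0xnew
}
```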

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-30 21:25:41 +00:00
satushh
c33f0d04b7 Re-add next-epoch lookahead for PTC duties (#16591)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Add current and next epoch lookahead as per
https://github.com/ethereum/beacon-APIs/pull/592

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-30 18:10:27 +00:00
james-prysm
f05972a181 changing log to warn in fallback log (#16606)
**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**
Changes the fallback message's log level from INFO to WARN.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-30 15:55:50 +00:00
Aarsh Shah
7352ae03c6 Fix our flakiest unit tests (#16395)
**What type of PR is this?**
Bug fix


**What does this PR do? Why is it needed?**

This PR fixes the flakiest unit tests in Prysm which are
`TestFilterSubnetPeers` and
`TestService_BroadcastAttestationWithDiscoveryAttempts`.

It also refactors `TestStartDiscV5_DiscoverAllPeers` to use
`require.Eventually`, making it less flaky, but it still remains flaky.
We can't fully unflake this test because the discV5 library does not
expose any deterministic events we can block on.

It also speeds the fixed tests up by replacing `time.Sleep` and
`require.Eventually` with blocking on deterministic events instead.

Details on how each test has been fixed/what was broken have been left
as comments on the corresponding changes.
2026-03-30 14:12:49 +00:00
Potuz
4f34624a54 fix: use attestation slot epoch for fork digest in gossip topic validation (#16604)
The attestation gossip validator was using currentForkDigest() to build
the expected topic prefix. This fails at fork boundaries because the
beacon node subscribes to the next fork's topics one epoch early: a
message arriving on the upcoming fork's topic would be compared against
the current fork digest, causing a spurious reject with the misleading
error "attestation's subnet does not match with pubsub topic".

Use the attestation's own slot epoch to derive the correct fork digest
instead, so the validation matches the topic the attestation was
actually published on.
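The epoch-based digest lookup can be sketched as follows; the digest strings and fork-epoch table are placeholders for the real fork digest computation, which hashes the fork version with the genesis validators root:

```go
package main

import "fmt"

const slotsPerEpoch = 32

// Hypothetical digest table keyed by fork activation epoch.
var digestByForkEpoch = []struct {
	epoch  uint64
	digest string
}{
	{0, "fulu-digest"},
	{2, "gloas-digest"},
}

// digestForSlot derives the fork digest from the attestation's own slot
// epoch, rather than from the node's current fork, so validation matches
// the topic the attestation was actually published on.
func digestForSlot(slot uint64) string {
	epoch := slot / slotsPerEpoch
	digest := digestByForkEpoch[0].digest
	for _, e := range digestByForkEpoch {
		if epoch >= e.epoch {
			digest = e.digest
		}
	}
	return digest
}

func main() {
	// An attestation from the fork epoch validates against the new digest
	// even if the node's "current" fork is still the old one.
	fmt.Println(digestForSlot(63), digestForSlot(64)) // fulu-digest gloas-digest
}
```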

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 13:25:33 +00:00
Potuz
4e44fdf55e Gloas/forkchoice setup (#16599)
Fix forkchoice tree setup missing full payload nodes on restart

During forkchoice tree reconstruction on node restart,
buildForkchoiceChain never set HasPayload on chain entries, so
InsertChain never created full payload nodes. When a child block's bid
references the parent's delivered payload hash,
resolveParentPayloadStatus
looks for a full parent node that doesn't exist, causing "invalid parent
root" errors.

Add resolveChainPayloadStatus to determine which blocks had payloads
delivered by comparing consecutive bids. For the finalized root (tree
root), check the first chain block's bid to determine if a full node is
needed, and create it via the new MarkFullNode forkchoice method before
inserting the chain.
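The consecutive-bid comparison can be sketched as below; `chainBid` and `resolvePayloadStatus` are hypothetical simplifications of the real chain entries and `resolveChainPayloadStatus`:

```go
package main

import "fmt"

// chainBid is a stand-in for a block's execution payload bid: it names the
// block's own hash and the parent payload hash it builds on.
type chainBid struct {
	blockHash       string
	parentBlockHash string
}

// resolvePayloadStatus marks block i as having a delivered payload when
// block i+1's bid builds on block i's payload hash; a withheld payload
// leaves the child bid pointing at an older hash.
func resolvePayloadStatus(bids []chainBid) []bool {
	hasPayload := make([]bool, len(bids))
	for i := 0; i+1 < len(bids); i++ {
		hasPayload[i] = bids[i+1].parentBlockHash == bids[i].blockHash
	}
	return hasPayload
}

func main() {
	bids := []chainBid{
		{blockHash: "0xa", parentBlockHash: "0x0"},
		{blockHash: "0xb", parentBlockHash: "0xa"}, // 0xa's payload delivered
		{blockHash: "0xc", parentBlockHash: "0xa"}, // 0xb's payload withheld
	}
	fmt.Println(resolvePayloadStatus(bids)) // [true false false]
}
```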

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 12:58:41 +00:00
Preston Van Loon
139773aa3a Hdiff: save hot states at tree boundaries (#16589)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

When performing a long initial sync, I found that states were not being
progressively saved. If the client was improperly shut down, then it
would be replaying many states on restart.

**Which issues(s) does this PR fix?**

**Other notes for review**

Testing:
- Start initial sync from genesis with hdiff enabled
- Do an improper shutdown (restart your computer, kill -9, etc)
- Restart with debug logs and see "Starting stategen" service completes
in a timely manner
- A failure would be logs indicating hundreds or thousands of states
being replayed, proportional to however long your node ran during step 1.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-27 18:29:32 +00:00
Bharath Vedartham
6558e947ca fix: set default scoring params for proposer preferences (#16585)
Upon starting up prysm, I get the following error message: 
```
[2026-03-25 08:17:20.00] ERROR sync: Could not subscribe topic error=unrecognized topic provided for parameter registration: /eth2/4d21f163/proposer_preferences/ssz_snappy topic=/eth2/4d21f163/proposer_preferences/ssz_snappy
```
which I believe happens because the default peer scoring params for the
proposer preferences topic had not been set up. This PR sets the topic
params for the proposer preferences topic to be the default block topic
params for now.

I rebuilt an image with the fix and ran a kurtosis devnet with the newly
built image. Upon startup, I see the log:
```
[2026-03-25 09:27:46.01]  INFO sync: Subscribed to topic=/eth2/4d21f163/proposer_preferences/ssz_snappy
```
which I think should indicate that the issue has been fixed.


- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-27 17:41:23 +00:00
Barnabas Busa
d8150ac20c fix: add missing ePBS values to /eth/v1/config/spec (#16597)
## Summary

- Add `PTC_SIZE`, `MAX_PAYLOAD_ATTESTATIONS`, `BUILDER_REGISTRY_LIMIT`,
and `BUILDER_PENDING_WITHDRAWALS_LIMIT` as fieldparams-derived constants
exposed via the config spec endpoint.
- Add `MaxPayloadAttestations` constant to fieldparams (mainnet and
minimal configs).
- Update test expectations to include the new fields.

## Test plan

- [x] `TestGetSpec` passes with all 4 new values verified
- [ ] Verify `/eth/v1/config/spec` returns the new values on a running
node

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 16:33:49 +00:00
342 changed files with 12994 additions and 3784 deletions


@@ -1,4 +1,4 @@
version: v1.7.0-alpha.2
version: v1.7.0-alpha.4
style: full
specrefs:
@@ -23,6 +23,8 @@ exceptions:
- PTC_SIZE#gloas
constants:
# heze
- DOMAIN_INCLUSION_LIST_COMMITTEE#heze
# phase0
- BASIS_POINTS#phase0
- ENDIANNESS#phase0
@@ -72,10 +74,30 @@ exceptions:
- GLOAS_FORK_EPOCH#gloas
- GLOAS_FORK_VERSION#gloas
- SYNC_MESSAGE_DUE_BPS_GLOAS#gloas
# heze
- HEZE_FORK_EPOCH#heze
- HEZE_FORK_VERSION#heze
- INCLUSION_LIST_SUBMISSION_DUE_BPS#heze
- MAX_BYTES_PER_INCLUSION_LIST#heze
- MAX_REQUEST_INCLUSION_LIST#heze
- PROPOSER_INCLUSION_LIST_CUTOFF_BPS#heze
- VIEW_FREEZE_CUTOFF_BPS#heze
ssz_objects:
# phase0
- Eth1Block#phase0
# fulu
- PartialDataColumnHeader#fulu
- PartialDataColumnPartsMetadata#fulu
- PartialDataColumnSidecar#fulu
# gloas
- PartialDataColumnHeader#gloas
# heze
- BeaconState#heze
- ExecutionPayloadBid#heze
- InclusionList#heze
- SignedExecutionPayloadBid#heze
- SignedInclusionList#heze
# capella
- LightClientBootstrap#capella
- LightClientFinalityUpdate#capella
@@ -105,6 +127,7 @@ exceptions:
dataclasses:
# phase0
- LatestMessage#phase0
- Seen#phase0
- Store#phase0
# altair
- LightClientStore#altair
@@ -121,6 +144,11 @@ exceptions:
- ExpectedWithdrawals#gloas
- LatestMessage#gloas
- Store#gloas
# heze
- GetInclusionListResponse#heze
- InclusionListStore#heze
- PayloadAttributes#heze
- Store#heze
functions:
# Functions implemented by KZG library for EIP-4844
@@ -177,11 +205,22 @@ exceptions:
- verify_cell_kzg_proof_batch_impl#fulu
# phase0
- compute_attestation_subnet_prefix_bits#phase0
- compute_min_epochs_for_block_requests#phase0
- compute_time_at_slot_ms#phase0
- is_not_from_future_slot#phase0
- is_within_slot_range#phase0
- update_proposer_boost_root#phase0
- is_proposer_equivocation#phase0
- record_block_timeliness#phase0
- compute_proposer_score#phase0
- get_attestation_score#phase0
- validate_attester_slashing_gossip#phase0
- validate_beacon_aggregate_and_proof_gossip#phase0
- validate_beacon_attestation_gossip#phase0
- validate_beacon_block_gossip#phase0
- validate_proposer_slashing_gossip#phase0
- validate_voluntary_exit_gossip#phase0
- calculate_committee_fraction#phase0
- compute_fork_version#phase0
- compute_pulled_up_tip#phase0
@@ -272,6 +311,7 @@ exceptions:
- upgrade_lc_store_to_capella#capella
- upgrade_lc_update_to_capella#capella
# deneb
- compute_max_request_blob_sidecars#deneb
- get_lc_execution_root#deneb
- is_valid_light_client_header#deneb
- prepare_execution_payload#deneb
@@ -282,6 +322,7 @@ exceptions:
- upgrade_lc_store_to_deneb#deneb
- upgrade_lc_update_to_deneb#deneb
# electra
- compute_max_request_blob_sidecars#electra
- compute_weak_subjectivity_period#electra
- current_sync_committee_gindex_at_slot#electra
- finalized_root_gindex_at_slot#electra
@@ -303,12 +344,20 @@ exceptions:
- upgrade_lc_store_to_electra#electra
- upgrade_lc_update_to_electra#electra
# fulu
- compute_max_request_data_column_sidecars#fulu
- compute_matrix#fulu
- verify_partial_data_column_header_inclusion_proof#fulu
- verify_partial_data_column_sidecar_kzg_proofs#fulu
- get_blob_parameters#fulu
- get_data_column_sidecars_from_block#fulu
- get_data_column_sidecars_from_column_sidecar#fulu
- recover_matrix#fulu
# gloas
- compute_ptc#gloas
- initialize_ptc_window#gloas
- is_payload_data_available#gloas
- is_pending_validator#gloas
- process_ptc_window#gloas
- compute_balance_weighted_acceptance#gloas
- compute_balance_weighted_selection#gloas
- compute_fork_version#gloas
@@ -406,6 +455,28 @@ exceptions:
- update_next_withdrawal_builder_index#gloas
- update_payload_expected_withdrawals#gloas
- update_proposer_boost_root#gloas
# heze
- compute_fork_version#heze
- get_forkchoice_store#heze
- get_inclusion_list_bits#heze
- get_inclusion_list_committee_assignment#heze
- get_inclusion_list_committee#heze
- get_inclusion_list_signature#heze
- get_inclusion_list_store#heze
- get_inclusion_list_submission_due_ms#heze
- get_inclusion_list_transactions#heze
- get_proposer_inclusion_list_cutoff_ms#heze
- get_view_freeze_cutoff_ms#heze
- is_inclusion_list_bits_inclusive#heze
- is_payload_inclusion_list_satisfied#heze
- is_valid_inclusion_list_signature#heze
- on_execution_payload#heze
- on_inclusion_list#heze
- prepare_execution_payload#heze
- process_inclusion_list#heze
- record_payload_inclusion_list_satisfaction#heze
- should_extend_payload#heze
- upgrade_to_heze#heze
presets:
# gloas
@@ -414,3 +485,5 @@ exceptions:
- MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas
- MAX_PAYLOAD_ATTESTATIONS#gloas
- PTC_SIZE#gloas
# heze
- INCLUSION_LIST_COMMITTEE_SIZE#heze


@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.7.0-alpha.2"
consensus_spec_version = "v1.7.0-alpha.4"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-iGQsGZ1cHah+2CSod9jC3kN8Ku4n6KO0hIwfINrn/po=",
"minimal": "sha256-TgcYt8N8sXSttdHTGvOa+exUZ1zn1UzlAMz0V7i37xc=",
"mainnet": "sha256-LnXyiLoJtrvEvbqLDSAAqpLMdN/lXv92SAgYG8fNjCs=",
"general": "sha256-kNJxuhCtW4RbuS9nb4U6JXHlPgTSg6G3hWeHFVB9gZ4=",
"minimal": "sha256-U1tCkXxtdI6mkEdk80i8z9LU2hAyf7Ztz5SBYo5oMzo=",
"mainnet": "sha256-Ga8VDOcNhTTdXDj8tSyBVYrwya9f1HO94ehJ5vv91r4=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Y/67Dg393PksZj5rTFNLntiJ6hNdB7Rxbu5gZE2gebY=",
integrity = "sha256-XHu5K/65mue+5po63L9yGTFjGfU1RGj4S56dmcHc2Rs=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)


@@ -46,7 +46,7 @@ func EnsureReady(ctx context.Context, provider HostProvider, checker ReadyChecke
"previous": startingHost,
"current": provider.CurrentHost(),
"tried": attemptedHosts,
}).Info("Switched to responsive beacon node")
}).Warn("Switched to responsive beacon node")
}
return true
}


@@ -3,14 +3,15 @@ package api
import "net/http"
const (
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
ExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
ExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
ExecutionPayloadIncludedHeader = "Eth-Execution-Payload-Included"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)
// SetSSEHeaders sets the headers needed for a server-sent event response.

View File

@@ -29,6 +29,8 @@ type Server struct {
startFailure error
}
const eventStreamPath = "/eth/v1/events"
// New returns a new instance of the Server.
func New(ctx context.Context, opts ...Option) (*Server, error) {
g := &Server{
@@ -48,7 +50,17 @@ func New(ctx context.Context, opts ...Option) (*Server, error) {
handler = middleware.MiddlewareChain(g.cfg.router, g.cfg.middlewares)
if g.cfg.timeout > 0*time.Second {
defaultReadHeaderTimeout = g.cfg.timeout
handler = http.TimeoutHandler(handler, g.cfg.timeout, "request timed out")
baseHandler := handler
timeoutHandler := http.TimeoutHandler(baseHandler, g.cfg.timeout, "request timed out")
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// SSE streams stay open indefinitely, so the global timeout wrapper must not
// cancel `/eth/v1/events` before the handler starts streaming responses.
if r.URL != nil && r.URL.Path == eventStreamPath {
baseHandler.ServeHTTP(w, r)
return
}
timeoutHandler.ServeHTTP(w, r)
})
}
g.server = &http.Server{
Addr: g.cfg.httpAddr,

View File

@@ -7,7 +7,9 @@ import (
"net/http"
"net/http/httptest"
"net/url"
"strings"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -37,10 +39,18 @@ func TestServer_StartStop(t *testing.T) {
require.NoError(t, err)
g.Start()
go func() {
require.LogsContain(t, hook, "Starting HTTP server")
require.LogsDoNotContain(t, hook, "Starting API middleware")
}()
require.Eventually(t, func() bool {
foundStart := false
for _, entry := range hook.AllEntries() {
if strings.Contains(entry.Message, "Starting HTTP server") {
foundStart = true
}
if strings.Contains(entry.Message, "Starting API middleware") {
return false
}
}
return foundStart
}, time.Second, 10*time.Millisecond)
err = g.Stop()
require.NoError(t, err)
}
@@ -68,3 +78,51 @@ func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
g.cfg.router.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
assert.Equal(t, http.StatusNotFound, writer.Code)
}
func TestServer_TimeoutHandlerBypassesSSE(t *testing.T) {
handler := http.NewServeMux()
handler.HandleFunc(eventStreamPath, func(w http.ResponseWriter, _ *http.Request) {
time.Sleep(20 * time.Millisecond)
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte("stream-open"))
require.NoError(t, err)
})
g, err := New(t.Context(),
WithHTTPAddr("127.0.0.1:0"),
WithRouter(handler),
WithTimeout(5*time.Millisecond),
)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodGet, eventStreamPath, nil)
writer := httptest.NewRecorder()
g.server.Handler.ServeHTTP(writer, req)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, "stream-open", writer.Body.String())
}
func TestServer_TimeoutHandlerStillAppliesToNonSSE(t *testing.T) {
handler := http.NewServeMux()
handler.HandleFunc("/foo", func(w http.ResponseWriter, _ *http.Request) {
time.Sleep(20 * time.Millisecond)
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte("ok"))
require.NoError(t, err)
})
g, err := New(t.Context(),
WithHTTPAddr("127.0.0.1:0"),
WithRouter(handler),
WithTimeout(5*time.Millisecond),
)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodGet, "/foo", nil)
writer := httptest.NewRecorder()
g.server.Handler.ServeHTTP(writer, req)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "request timed out"))
}

View File

@@ -54,11 +54,13 @@ go_test(
name = "go_default_test",
srcs = [
"conversions_block_execution_test.go",
"conversions_block_gloas_test.go",
"conversions_test.go",
],
embed = [":go_default_library"],
deps = [
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -584,6 +584,13 @@ func (s *SignedBeaconBlockGloas) SigString() string {
return s.Signature
}
type BlockContentsGloas struct {
Block *BeaconBlockGloas `json:"block"`
ExecutionPayloadEnvelope *ExecutionPayloadEnvelope `json:"execution_payload_envelope"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type ExecutionPayloadEnvelope struct {
Payload *ExecutionPayloadDeneb `json:"payload"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`

View File

@@ -2983,6 +2983,19 @@ func PayloadAttestationDataFromConsensus(d *eth.PayloadAttestationData) *Payload
}
}
func (b *SignedBeaconBlockGloas) ToGeneric() (*eth.GenericSignedBeaconBlock, error) {
if b == nil {
return nil, errNilValue
}
signed, err := b.ToConsensus()
if err != nil {
return nil, err
}
return &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Gloas{Gloas: signed},
}, nil
}
func (b *SignedBeaconBlockGloas) ToConsensus() (*eth.SignedBeaconBlockGloas, error) {
if b == nil {
return nil, errNilValue
@@ -3137,6 +3150,14 @@ func (b *BeaconBlockBodyGloas) ToConsensus() (*eth.BeaconBlockBodyGloas, error)
}, nil
}
func (b *BeaconBlockGloas) ToGeneric() (*eth.GenericBeaconBlock, error) {
block, err := b.ToConsensus()
if err != nil {
return nil, errors.Wrap(err, "could not convert gloas block to consensus")
}
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_Gloas{Gloas: block}}, nil
}
func (b *SignedExecutionPayloadBid) ToConsensus() (*eth.SignedExecutionPayloadBid, error) {
if b == nil {
return nil, errNilValue
@@ -3284,25 +3305,113 @@ func (d *PayloadAttestationData) ToConsensus() (*eth.PayloadAttestationData, err
}, nil
}
// SignedExecutionPayloadEnvelopeFromConsensus converts a proto envelope to the API struct.
func SignedExecutionPayloadEnvelopeFromConsensus(e *eth.SignedExecutionPayloadEnvelope) (*SignedExecutionPayloadEnvelope, error) {
payload, err := ExecutionPayloadDenebFromConsensus(e.Message.Payload)
// ExecutionPayloadEnvelopeFromConsensus converts a proto envelope to the API struct.
func ExecutionPayloadEnvelopeFromConsensus(e *eth.ExecutionPayloadEnvelope) (*ExecutionPayloadEnvelope, error) {
payload, err := ExecutionPayloadDenebFromConsensus(e.Payload)
if err != nil {
return nil, err
}
var requests *ExecutionRequests
if e.Message.ExecutionRequests != nil {
requests = ExecutionRequestsFromConsensus(e.Message.ExecutionRequests)
if e.ExecutionRequests != nil {
requests = ExecutionRequestsFromConsensus(e.ExecutionRequests)
}
return &ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: requests,
BuilderIndex: fmt.Sprintf("%d", e.BuilderIndex),
BeaconBlockRoot: hexutil.Encode(e.BeaconBlockRoot),
Slot: fmt.Sprintf("%d", e.Slot),
StateRoot: hexutil.Encode(e.StateRoot),
}, nil
}
// SignedExecutionPayloadEnvelopeFromConsensus converts a signed proto envelope to the API struct.
func SignedExecutionPayloadEnvelopeFromConsensus(e *eth.SignedExecutionPayloadEnvelope) (*SignedExecutionPayloadEnvelope, error) {
envelope, err := ExecutionPayloadEnvelopeFromConsensus(e.Message)
if err != nil {
return nil, err
}
return &SignedExecutionPayloadEnvelope{
Message: &ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: requests,
BuilderIndex: fmt.Sprintf("%d", e.Message.BuilderIndex),
BeaconBlockRoot: hexutil.Encode(e.Message.BeaconBlockRoot),
Slot: fmt.Sprintf("%d", e.Message.Slot),
StateRoot: hexutil.Encode(e.Message.StateRoot),
},
Message: envelope,
Signature: hexutil.Encode(e.Signature),
}, nil
}
// BlockContentsGloasFromConsensus converts a proto Gloas block and envelope to the API struct.
func BlockContentsGloasFromConsensus(block *eth.BeaconBlockGloas, envelope *eth.ExecutionPayloadEnvelope) (*BlockContentsGloas, error) {
b, err := BeaconBlockGloasFromConsensus(block)
if err != nil {
return nil, err
}
env, err := ExecutionPayloadEnvelopeFromConsensus(envelope)
if err != nil {
return nil, err
}
return &BlockContentsGloas{
Block: b,
ExecutionPayloadEnvelope: env,
KzgProofs: []string{}, // TODO: populate from blobs bundle
Blobs: []string{}, // TODO: populate from blobs bundle
}, nil
}
// ToConsensus converts the API struct to a proto ExecutionPayloadEnvelope.
func (e *ExecutionPayloadEnvelope) ToConsensus() (*eth.ExecutionPayloadEnvelope, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadEnvelope")
}
payload, err := e.Payload.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Payload")
}
var requests *enginev1.ExecutionRequests
if e.ExecutionRequests != nil {
requests, err = e.ExecutionRequests.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionRequests")
}
}
builderIndex, err := strconv.ParseUint(e.BuilderIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "BuilderIndex")
}
beaconBlockRoot, err := bytesutil.DecodeHexWithLength(e.BeaconBlockRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "BeaconBlockRoot")
}
slot, err := strconv.ParseUint(e.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
}
stateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "StateRoot")
}
return &eth.ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: requests,
BuilderIndex: primitives.BuilderIndex(builderIndex),
BeaconBlockRoot: beaconBlockRoot,
Slot: primitives.Slot(slot),
StateRoot: stateRoot,
}, nil
}
// ToConsensus converts the API struct to a proto SignedExecutionPayloadEnvelope.
func (e *SignedExecutionPayloadEnvelope) ToConsensus() (*eth.SignedExecutionPayloadEnvelope, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "SignedExecutionPayloadEnvelope")
}
msg, err := e.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
sig, err := bytesutil.DecodeHexWithLength(e.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SignedExecutionPayloadEnvelope{
Message: msg,
Signature: sig,
}, nil
}

View File

@@ -0,0 +1,67 @@
package structs
import (
"testing"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
)
func testEnvelopeProto() *eth.ExecutionPayloadEnvelope {
return &eth.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BaseFeePerGas: fillByteSlice(32, 0x11),
BlockHash: fillByteSlice(common.HashLength, 0x22),
},
ExecutionRequests: &enginev1.ExecutionRequests{},
BuilderIndex: 7,
BeaconBlockRoot: fillByteSlice(32, 0x33),
Slot: 42,
StateRoot: fillByteSlice(32, 0x44),
}
}
func TestExecutionPayloadEnvelopeFromConsensus(t *testing.T) {
env := testEnvelopeProto()
result, err := ExecutionPayloadEnvelopeFromConsensus(env)
require.NoError(t, err)
require.NotNil(t, result.Payload)
require.Equal(t, hexutil.Encode(env.Payload.ParentHash), result.Payload.ParentHash)
require.Equal(t, "7", result.BuilderIndex)
require.Equal(t, hexutil.Encode(env.BeaconBlockRoot), result.BeaconBlockRoot)
require.Equal(t, "42", result.Slot)
require.Equal(t, hexutil.Encode(env.StateRoot), result.StateRoot)
require.NotNil(t, result.ExecutionRequests)
}
func TestExecutionPayloadEnvelopeFromConsensus_NilRequests(t *testing.T) {
env := testEnvelopeProto()
env.ExecutionRequests = nil
result, err := ExecutionPayloadEnvelopeFromConsensus(env)
require.NoError(t, err)
require.Equal(t, (*ExecutionRequests)(nil), result.ExecutionRequests)
}
func TestBlockContentsGloasFromConsensus(t *testing.T) {
block := util.NewBeaconBlockGloas().Block
env := testEnvelopeProto()
result, err := BlockContentsGloasFromConsensus(block, env)
require.NoError(t, err)
require.NotNil(t, result.Block)
require.NotNil(t, result.Block.Body)
require.NotNil(t, result.ExecutionPayloadEnvelope)
require.Equal(t, hexutil.Encode(env.BeaconBlockRoot), result.ExecutionPayloadEnvelope.BeaconBlockRoot)
require.Equal(t, 0, len(result.KzgProofs))
require.Equal(t, 0, len(result.Blobs))
}

View File

@@ -5,6 +5,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -488,7 +489,7 @@ func TestBeaconStateGloasFromConsensus(t *testing.T) {
state.GenesisTime = 123
state.GenesisValidatorsRoot = bytes.Repeat([]byte{0x10}, 32)
state.Slot = 5
state.ProposerLookahead = []uint64{1, 2}
state.ProposerLookahead = []primitives.ValidatorIndex{1, 2}
state.LatestExecutionPayloadBid = &eth.ExecutionPayloadBid{
ParentBlockHash: bytes.Repeat([]byte{0x11}, 32),
ParentBlockRoot: bytes.Repeat([]byte{0x12}, 32),

View File

@@ -53,8 +53,8 @@ type ChainReorgEvent struct {
Slot string `json:"slot"`
Depth string `json:"depth"`
OldHeadBlock string `json:"old_head_block"`
NewHeadBlock string `json:"old_head_state"`
OldHeadState string `json:"new_head_block"`
NewHeadBlock string `json:"new_head_block"`
OldHeadState string `json:"old_head_state"`
NewHeadState string `json:"new_head_state"`
Epoch string `json:"epoch"`
ExecutionOptimistic bool `json:"execution_optimistic"`

View File

@@ -95,6 +95,14 @@ type ProduceBlockV3Response struct {
Data json.RawMessage `json:"data"` // represents the block values based on the version
}
// ProduceBlockV4Response is a wrapper JSON object for the block returned from the ProduceBlockV4 endpoint
type ProduceBlockV4Response struct {
Version string `json:"version"`
ConsensusBlockValue string `json:"consensus_block_value"`
ExecutionPayloadIncluded bool `json:"execution_payload_included"`
Data json.RawMessage `json:"data"`
}
type GetLivenessResponse struct {
Data []*Liveness `json:"data"`
}
@@ -151,6 +159,11 @@ type ValidatorParticipation struct {
PreviousEpochHeadAttestingGwei string `json:"previous_epoch_head_attesting_gwei"`
}
type GetValidatorExecutionPayloadEnvelopeResponse struct {
Version string `json:"version"`
Data *ExecutionPayloadEnvelope `json:"data"`
}
type ActiveSetChanges struct {
Epoch string `json:"epoch"`
ActivatedPublicKeys []string `json:"activated_public_keys"`

View File

@@ -141,6 +141,7 @@ go_test(
"service_test.go",
"setup_forkchoice_test.go",
"setup_test.go",
"tracked_proposer_test.go",
"weak_subjectivity_checks_test.go",
],
embed = [":go_default_library"],
@@ -154,7 +155,6 @@ go_test(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",

View File

@@ -157,6 +157,13 @@ func (s *Service) hashForGenesisBlock(ctx context.Context, root [32]byte) ([]byt
if st.Version() < version.Bellatrix {
return nil, nil
}
if st.Version() >= version.Gloas {
h, err := st.LatestBlockHash()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block hash")
}
return bytesutil.SafeCopyBytes(h[:]), nil
}
header, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, errors.Wrap(err, "could not get latest execution payload header")

View File

@@ -17,10 +17,10 @@ import (
"github.com/OffchainLabs/prysm/v7/genesis"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -820,3 +820,23 @@ func Test_hashForGenesisRoot(t *testing.T) {
require.NoError(t, err)
require.Equal(t, [32]byte{}, [32]byte(genRoot))
}
func Test_hashForGenesisRoot_Gloas(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
c := setupBeaconChain(t, beaconDB)
expectedHash := [32]byte{1, 2, 3, 4, 5}
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: expectedHash[:],
})
require.NoError(t, err)
genesis.StoreDuringTest(t, genesis.GenesisData{State: st})
genesisRoot := [32]byte{0xaa}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
genHash, err := c.hashForGenesisBlock(ctx, genesisRoot)
require.NoError(t, err)
require.Equal(t, expectedHash, [32]byte(genHash))
}

View File

@@ -271,7 +271,7 @@ func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header in
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests, blk.Block().Slot())
if err == nil {
newPayloadValidNodeCount.Inc()
return true, nil

View File

@@ -75,7 +75,7 @@ func prepareGloasForkchoiceState(
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
}
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
@@ -146,7 +146,7 @@ func testGloasState(t *testing.T, slot primitives.Slot, parentRoot [32]byte, blo
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
@@ -797,7 +797,7 @@ func TestSaveHead_GloasForkBoundary_PreforkBidForcesEmptyHead(t *testing.T) {
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
})
require.NoError(t, err2)
oldRoot := bytesutil.ToBytes32([]byte("oldroot1"))
@@ -874,7 +874,7 @@ func TestSaveHead_GloasForkBoundary_PostforkBidSetsFullHead(t *testing.T) {
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
})
require.NoError(t, err2)
oldRoot2 := bytesutil.ToBytes32([]byte("oldroot2"))

View File

@@ -96,6 +96,15 @@ func WithTrackedValidatorsCache(c *cache.TrackedValidatorsCache) Option {
}
}
// WithProposerPreferencesCache sets the proposer preferences cache used to
// look up fee recipient and gas limit from Gloas gossip preferences.
func WithProposerPreferencesCache(c *cache.ProposerPreferencesCache) Option {
return func(s *Service) error {
s.cfg.ProposerPreferencesCache = c
return nil
}
}
// WithAttestationCache for attestation lifecycle after chain inclusion.
func WithAttestationCache(c *cache.AttestationCache) Option {
return func(s *Service) error {

View File

@@ -186,7 +186,7 @@ func (s *Service) applyPayloadIfNeeded(ctx context.Context, b interfaces.ReadOnl
if err != nil {
return errors.Wrapf(err, "could not wrap blinded execution payload envelope for parent block with root %#x", parentRoot)
}
return gloas.ApplyBlindedExecutionPayloadEnvelopeForStateGen(ctx, preState, parentBlock.Block().StateRoot(), envelope)
return gloas.ProcessBlindedExecutionPayload(ctx, preState, parentBlock.Block().StateRoot(), envelope)
}
// getBatchPrestate returns the pre-state to apply to the first beacon block in the batch and returns true if it applied the first envelope before
@@ -198,48 +198,48 @@ func (s *Service) getBatchPrestate(ctx context.Context, b consensusblocks.ROBloc
}
return blockPreState, false, nil
}
parentRoot := b.Block().ParentRoot()
full, err := consensusblocks.BlockBuiltOnEnvelope(envelopes[0], b)
if err != nil {
return nil, false, errors.Wrap(err, "could not check if block builds on envelope")
}
blockPreState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get block pre state")
}
if !full {
blockPreState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, b.Block().ParentRoot())
if err != nil {
return nil, false, errors.Wrap(err, "could not get block pre state")
}
return blockPreState, false, nil
}
parentRoot := b.Block().ParentRoot()
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent block")
}
if s.cfg.BeaconDB.HasExecutionPayloadEnvelope(ctx, parentRoot) {
// This path should have been filtered already in init sync.
log.Debugf("Ignoring already processed envelope for blockroot %#x", parentRoot)
env, err := envelopes[0].Envelope()
// The parent envelope was already saved by a previous batch but the
// replayed state may not include it (replay skips the last block's
// envelope). Load the blinded form from DB and apply it.
blindedEnv, err := s.cfg.BeaconDB.ExecutionPayloadEnvelope(ctx, parentRoot)
if err != nil {
return nil, false, err
return nil, false, errors.Wrap(err, "could not load parent blinded envelope from DB")
}
blockPreState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, env.BlockHash())
wrappedEnv, err := consensusblocks.WrappedROBlindedExecutionPayloadEnvelope(blindedEnv.Message)
if err != nil {
return nil, false, err
return nil, false, errors.Wrap(err, "could not wrap blinded envelope")
}
return blockPreState, false, nil
if err := gloas.ProcessBlindedExecutionPayload(ctx, blockPreState, parentBlock.Block().StateRoot(), wrappedEnv); err != nil {
return nil, false, errors.Wrap(err, "could not apply parent blinded envelope from DB")
}
return blockPreState, true, nil
}
env, err := envelopes[0].Envelope()
if err != nil {
return nil, false, err
}
// notify the engine of the new envelope
blockPreState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, b.Block().ParentRoot())
if err != nil {
return nil, false, errors.Wrap(err, "could not get block pre state")
}
if _, err := s.notifyNewEnvelope(ctx, blockPreState, env); err != nil {
return nil, false, err
}
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent block")
}
if err := gloas.ApplyBlindedExecutionPayloadEnvelopeForStateGen(ctx, blockPreState, parentBlock.Block().StateRoot(), env); err != nil {
if err := gloas.ProcessBlindedExecutionPayload(ctx, blockPreState, parentBlock.Block().StateRoot(), env); err != nil {
return nil, false, err
}
return blockPreState, true, nil
@@ -323,7 +323,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return invalidBlock{error: err}
}
if b.Root() == br && eidx < len(envelopes) {
envSigSet, err := gloas.ApplyExecutionPayloadNoVerifySig(ctx, preState, b.Block().StateRoot(), envelopes[eidx])
envSigSet, err := gloas.ProcessExecutionPayloadWithDeferredSig(ctx, preState, b.Block().StateRoot(), envelopes[eidx])
if err != nil {
return err
}
@@ -686,7 +686,7 @@ func (s *Service) handleBlockPayloadAttestations(ctx context.Context, blk interf
if len(atts) == 0 {
return nil
}
committee, err := gloas.PayloadCommittee(ctx, st, blk.Slot()-1)
committee, err := st.PayloadCommitteeReadOnly(blk.Slot() - 1)
if err != nil {
return err
}

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
@@ -3556,7 +3555,7 @@ func TestHandleBlockPayloadAttestations(t *testing.T) {
base, insertBlk := testGloasState(t, 1, parentRoot, blockHash)
insertGloasBlock(t, s, base, insertBlk, blockRoot)
ptc, err := gloas.PayloadCommittee(ctx, headState, 1)
ptc, err := headState.PayloadCommitteeReadOnly(1)
require.NoError(t, err)
require.NotEqual(t, 0, len(ptc))

View File

@@ -211,7 +211,7 @@ func (s *Service) callNewPayload(
requests *enginev1.ExecutionRequests,
slot primitives.Slot,
) (bool, error) {
_, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests)
_, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests, slot)
if err == nil {
return true, nil
}

View File

@@ -3,7 +3,6 @@ package blockchain
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
mockExecution "github.com/OffchainLabs/prysm/v7/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -51,7 +50,7 @@ func TestReceivePayloadAttestationMessage_ValidatorNotInPTC(t *testing.T) {
require.NoError(t, err)
s.head = &head{root: blockRoot, block: wsb, state: headState, slot: 1}
ptc, err := gloas.PayloadCommittee(ctx, headState, 1)
ptc, err := headState.PayloadCommitteeReadOnly(1)
require.NoError(t, err)
// Pick a validator index not in the PTC.
@@ -100,7 +99,7 @@ func TestReceivePayloadAttestationMessage_OK(t *testing.T) {
require.NoError(t, err)
s.head = &head{root: blockRoot, block: wsb, state: headState, slot: 1}
ptc, err := gloas.PayloadCommittee(ctx, headState, 1)
ptc, err := headState.PayloadCommitteeReadOnly(1)
require.NoError(t, err)
require.NotEqual(t, 0, len(ptc))

View File

@@ -73,29 +73,30 @@ type Service struct {
// config options for the service.
type config struct {
BeaconBlockBuf int
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2P p2p.Accessor
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
AttService *attestations.Service
StateGen *stategen.State
SlasherAttestationsFeed *event.Feed
WeakSubjectivityCheckpt *ethpb.Checkpoint
BlockFetcher execution.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
BeaconBlockBuf int
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
ProposerPreferencesCache *cache.ProposerPreferencesCache
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2P p2p.Accessor
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
AttService *attestations.Service
StateGen *stategen.State
SlasherAttestationsFeed *event.Feed
WeakSubjectivityCheckpt *ethpb.Checkpoint
BlockFetcher execution.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
}
// Checker is an interface used to determine if a node is in initial sync

View File

@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
@@ -77,8 +78,12 @@ func (s *Service) setupForkchoiceTree(st state.BeaconState) error {
log.WithError(err).Error("Could not build forkchoice chain, starting with finalized block as head")
return nil
}
resolveChainPayloadStatus(chain)
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.markFinalizedRootFull(chain, fRoot); err != nil {
log.WithError(err).Error("Could not mark finalized root as full in forkchoice")
}
return s.cfg.ForkChoiceStore.InsertChain(s.ctx, chain)
}
@@ -145,6 +150,68 @@ func (s *Service) setupForkchoiceRoot(st state.BeaconState) error {
return nil
}
// resolveChainPayloadStatus determines which blocks in the chain had their
// execution payloads delivered by checking if consecutive blocks' bids indicate
// payload delivery. For each pair of blocks (chain[i], chain[i+1]), if the next
// block's bid parentBlockHash equals the current block's bid blockHash, the
// current block's payload was delivered.
func resolveChainPayloadStatus(chain []*forkchoicetypes.BlockAndCheckpoints) {
for i := 0; i < len(chain)-1; i++ {
curr := chain[i].Block.Block()
next := chain[i+1].Block.Block()
if curr.Version() < version.Gloas || next.Version() < version.Gloas {
continue
}
currBid, err := curr.Body().SignedExecutionPayloadBid()
if err != nil || currBid == nil || currBid.Message == nil {
continue
}
nextBid, err := next.Body().SignedExecutionPayloadBid()
if err != nil || nextBid == nil || nextBid.Message == nil {
continue
}
if bytes.Equal(nextBid.Message.ParentBlockHash, currBid.Message.BlockHash) {
chain[i].HasPayload = true
}
}
}
// markFinalizedRootFull checks whether the finalized root block's execution
// payload was delivered by inspecting the first block in the chain. If the first
// block's bid parentBlockHash equals the finalized block's bid blockHash, the
// finalized block's payload was delivered and a full node must be created in
// forkchoice. The caller must hold the forkchoice lock.
func (s *Service) markFinalizedRootFull(chain []*forkchoicetypes.BlockAndCheckpoints, fRoot [32]byte) error {
if len(chain) == 0 {
return nil
}
firstBlock := chain[0].Block.Block()
if firstBlock.Version() < version.Gloas {
return nil
}
firstBid, err := firstBlock.Body().SignedExecutionPayloadBid()
if err != nil || firstBid == nil || firstBid.Message == nil {
return nil
}
fBlock, err := s.cfg.BeaconDB.Block(s.ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block")
}
if fBlock.Block().Version() < version.Gloas {
return nil
}
fBid, err := fBlock.Block().Body().SignedExecutionPayloadBid()
if err != nil || fBid == nil || fBid.Message == nil {
return nil
}
if !bytes.Equal(firstBid.Message.ParentBlockHash, fBid.Message.BlockHash) {
return nil
}
// The finalized block's payload was delivered. Create the full node.
s.cfg.ForkChoiceStore.MarkFullNode(fRoot)
return nil
}
func (s *Service) setupForkchoiceCheckpoints() error {
justified, err := s.cfg.BeaconDB.JustifiedCheckpoint(s.ctx)
if err != nil {

View File

@@ -94,6 +94,11 @@ func (mb *mockBroadcaster) BroadcastDataColumnSidecars(_ context.Context, _ []bl
return nil
}
func (mb *mockBroadcaster) BroadcastForEpoch(_ context.Context, _ proto.Message, _ primitives.Epoch) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}


@@ -8,11 +8,35 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// proposerPreference returns a TrackedValidator from the ProposerPreferencesCache
// if a preference exists for the given slot.
func (s *Service) proposerPreference(slot primitives.Slot) (cache.TrackedValidator, bool) {
if s.cfg.ProposerPreferencesCache == nil {
return cache.TrackedValidator{}, false
}
pref, ok := s.cfg.ProposerPreferencesCache.Get(slot)
if !ok {
return cache.TrackedValidator{}, false
}
var feeRecipient primitives.ExecutionAddress
copy(feeRecipient[:], pref.FeeRecipient)
return cache.TrackedValidator{Active: true, FeeRecipient: feeRecipient, GasLimit: pref.GasLimit}, true
}
// trackedProposer returns whether the beacon node was informed, via the
// validators/prepare_proposer endpoint, of the proposer at the given slot.
// It only returns true if the tracked proposer is present and active.
//
// When PrepareAllPayloads is enabled, the node prepares payloads for every
// slot. After the Gloas fork, proposers broadcast their preferences (fee
// recipient, gas limit) via gossip into the ProposerPreferencesCache. When
// available, these preferences supply the fee recipient; otherwise the
// default (burn address) is used.
func (s *Service) trackedProposer(st state.ReadOnlyBeaconState, slot primitives.Slot) (cache.TrackedValidator, bool) {
if features.Get().PrepareAllPayloads {
if val, ok := s.proposerPreference(slot); ok {
return val, true
}
return cache.TrackedValidator{Active: true}, true
}
id, err := helpers.BeaconProposerIndexAtSlot(s.ctx, st, slot)
@@ -23,5 +47,8 @@ func (s *Service) trackedProposer(st state.ReadOnlyBeaconState, slot primitives.
if !ok {
return cache.TrackedValidator{}, false
}
if pref, ok := s.proposerPreference(slot); ok {
return pref, true
}
return val, val.Active
}
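The precedence implemented by `trackedProposer` can be sketched in isolation: a gossiped proposer preference wins over a locally tracked validator, and with `PrepareAllPayloads` the node always prepares, falling back to a default fee recipient (the burn address in the real code). The `validator` type and `resolveProposer` helper below are simplified stand-ins, not the real `cache.TrackedValidator` API:

```go
package main

import "fmt"

// validator is a simplified stand-in for cache.TrackedValidator.
type validator struct {
	active       bool
	feeRecipient string
}

// resolveProposer sketches trackedProposer's precedence: preference cache
// first, then the tracked-validators cache; prepareAll always resolves,
// defaulting the fee recipient when no preference exists.
func resolveProposer(pref, tracked *validator, prepareAll bool) (validator, bool) {
	if prepareAll {
		if pref != nil {
			return *pref, true
		}
		return validator{active: true, feeRecipient: "default"}, true
	}
	if tracked == nil {
		return validator{}, false
	}
	if pref != nil {
		return *pref, true
	}
	return *tracked, tracked.active
}

func main() {
	pref := &validator{active: true, feeRecipient: "0x2222"}
	tracked := &validator{active: true, feeRecipient: "0x1111"}
	v, _ := resolveProposer(pref, tracked, false)
	fmt.Println(v.feeRecipient) // preference overrides tracked
}
```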


@@ -0,0 +1,83 @@
package blockchain
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/ethereum/go-ethereum/common"
)
func TestTrackedProposer_NotTracked(t *testing.T) {
service, _ := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
_, ok := service.trackedProposer(st, 0)
require.Equal(t, false, ok)
}
func TestTrackedProposer_Tracked(t *testing.T) {
service, _ := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
addr := common.HexToAddress("0x1234")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(addr), Index: 0})
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
require.Equal(t, primitives.ExecutionAddress(addr), val.FeeRecipient)
}
func TestTrackedProposer_PrepareAllPayloads_Default(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{PrepareAllPayloads: true})
defer resetCfg()
service, _ := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
require.Equal(t, true, val.Active)
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(val.FeeRecipient[:]).String())
}
func TestTrackedProposer_PrepareAllPayloads_WithProposerPreference(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{PrepareAllPayloads: true})
defer resetCfg()
prefCache := cache.NewProposerPreferencesCache()
service, _ := minimalTestService(t,
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithProposerPreferencesCache(prefCache),
)
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
addr := common.HexToAddress("0xabcd")
prefCache.Add(0, addr.Bytes(), 42_000_000)
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
require.Equal(t, true, val.Active)
require.Equal(t, primitives.ExecutionAddress(addr), val.FeeRecipient)
require.Equal(t, uint64(42_000_000), val.GasLimit)
}
func TestTrackedProposer_TrackedWithProposerPreferenceOverride(t *testing.T) {
prefCache := cache.NewProposerPreferencesCache()
service, _ := minimalTestService(t,
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithProposerPreferencesCache(prefCache),
)
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
trackedAddr := common.HexToAddress("0x1111")
prefAddr := common.HexToAddress("0x2222")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(trackedAddr), Index: 0})
prefCache.Add(0, prefAddr.Bytes(), 50_000_000)
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
// Proposer preference overrides tracked validator.
require.Equal(t, primitives.ExecutionAddress(prefAddr), val.FeeRecipient)
require.Equal(t, uint64(50_000_000), val.GasLimit)
}


@@ -21,6 +21,7 @@ type (
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
GasLimit uint64
}
TrackedValidatorsCache struct {


@@ -192,11 +192,47 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateGloas:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{
Block: gloasGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}
}
func gloasGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockGloas {
return &ethpb.BeaconBlockGloas{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyGloas{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
SignedExecutionPayloadBid: &ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: make([][]byte, 0),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
PayloadAttestations: make([]*ethpb.PayloadAttestation, 0),
},
}
}
func electraGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockElectra {
return &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],


@@ -51,8 +51,10 @@ func ProcessEffectiveBalanceUpdates(st state.BeaconState) error {
if balance+downwardThreshold < val.EffectiveBalance() || val.EffectiveBalance()+upwardThreshold < balance {
effectiveBal := min(balance-balance%effBalanceInc, effectiveBalanceLimit)
if effectiveBal != val.EffectiveBalance() {
newVal = val.Copy()
newVal.EffectiveBalance = effectiveBal
}
}
return newVal, nil
}


@@ -35,6 +35,7 @@ go_library(
"//crypto/bls/common:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",


@@ -29,7 +29,7 @@ func processDepositRequests(ctx context.Context, beaconState state.BeaconState,
// processDepositRequest processes the specific deposit request
//
// <spec fn="process_deposit_request" fork="gloas" hash="0e8b94ab">
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
// # [New in Gloas:EIP7732]
// builder_pubkeys = [b.pubkey for b in state.builders]
@@ -40,8 +40,11 @@ func processDepositRequests(ctx context.Context, beaconState state.BeaconState,
// # already exists with this pubkey, apply the deposit to their balance
// is_builder = deposit_request.pubkey in builder_pubkeys
// is_validator = deposit_request.pubkey in validator_pubkeys
// if is_builder or (
// is_builder_withdrawal_credential(deposit_request.withdrawal_credentials)
// and not is_validator
// and not is_pending_validator(state, deposit_request.pubkey)
// ):
// # Apply builder deposits immediately
// apply_deposit_for_builder(
// state,
@@ -65,37 +68,27 @@ func processDepositRequests(ctx context.Context, beaconState state.BeaconState,
// )
// </spec>
func processDepositRequest(beaconState state.BeaconState, request *enginev1.DepositRequest) error {
if request == nil {
return errors.New("nil deposit request")
}
applied, err := applyBuilderDepositRequest(beaconState, request)
if err != nil {
return errors.Wrap(err, "could not apply builder deposit")
}
if applied {
builderDepositsProcessedTotal.Inc()
return nil
}
if err := beaconState.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: request.Pubkey,
WithdrawalCredentials: request.WithdrawalCredentials,
Amount: request.Amount,
Signature: request.Signature,
Slot: beaconState.Slot(),
}); err != nil {
return errors.Wrap(err, "could not append deposit request")
}
return nil
}
@@ -129,13 +122,7 @@ func applyBuilderDepositRequest(beaconState state.BeaconState, request *enginev1
}
pubkey := bytesutil.ToBytes48(request.Pubkey)
idx, isBuilder := beaconState.BuilderIndexByPubkey(pubkey)
if isBuilder {
if err := beaconState.IncreaseBuilderBalance(idx, request.Amount); err != nil {
return false, err
@@ -143,6 +130,20 @@ func applyBuilderDepositRequest(beaconState state.BeaconState, request *enginev1
return true, nil
}
isBuilderPrefix := helpers.IsBuilderWithdrawalCredential(request.WithdrawalCredentials)
_, isValidator := beaconState.ValidatorIndexByPubkey(pubkey)
if !isBuilderPrefix || isValidator {
return false, nil
}
isPending, err := beaconState.IsPendingValidator(request.Pubkey)
if err != nil {
return false, err
}
if isPending {
return false, nil
}
if err := applyDepositForNewBuilder(
beaconState,
request.Pubkey,


@@ -91,6 +91,33 @@ func TestProcessDepositRequest_ExistingBuilderIncreasesBalance(t *testing.T) {
require.Equal(t, 0, len(pending))
}
func TestProcessDepositRequest_BuilderDepositWithExistingPendingDepositStaysPending(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
validatorCred := validatorWithdrawalCredentials()
builderCred := builderWithdrawalCredentials()
existingPending := stateTesting.GeneratePendingDeposit(t, sk, 1234, validatorCred, 0)
req := depositRequestFromPending(stateTesting.GeneratePendingDeposit(t, sk, 200, builderCred, 1), 9)
st := newGloasState(t, nil, nil)
require.NoError(t, st.SetPendingDeposits([]*ethpb.PendingDeposit{existingPending}))
err = processDepositRequest(st, req)
require.NoError(t, err)
_, ok := st.BuilderIndexByPubkey(toBytes48(req.Pubkey))
require.Equal(t, false, ok)
pending, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 2, len(pending))
require.DeepEqual(t, existingPending.PublicKey, pending[0].PublicKey)
require.DeepEqual(t, req.Pubkey, pending[1].PublicKey)
require.DeepEqual(t, req.WithdrawalCredentials, pending[1].WithdrawalCredentials)
require.Equal(t, req.Amount, pending[1].Amount)
}
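The routing decision exercised by the tests above (apply the deposit immediately to a builder vs. queue it as a pending deposit) can be sketched as a standalone predicate. The boolean parameters below stand in for the real state lookups (`BuilderIndexByPubkey`, `IsBuilderWithdrawalCredential`, `ValidatorIndexByPubkey`, `IsPendingValidator`):

```go
package main

import "fmt"

// applyAsBuilderDeposit mirrors the updated spec condition: a deposit is
// applied immediately to a builder when the pubkey is already a builder, or
// when it carries the builder withdrawal prefix and the pubkey is neither an
// existing validator nor already queued as a pending validator deposit.
func applyAsBuilderDeposit(isBuilder, hasBuilderPrefix, isValidator, isPendingValidator bool) bool {
	if isBuilder {
		return true
	}
	return hasBuilderPrefix && !isValidator && !isPendingValidator
}

func main() {
	// Builder-prefixed deposit, but a pending validator deposit already
	// exists for this pubkey: it must stay in the pending queue.
	fmt.Println(applyAsBuilderDeposit(false, true, false, true))
}
```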
func TestApplyDepositForBuilder_InvalidSignatureIgnoresDeposit(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)


@@ -17,7 +17,8 @@ import (
"github.com/pkg/errors"
)
// ProcessExecutionPayload is the gossip entry point: verify signature, validate
// consistency, apply state mutations, and verify the post-payload state root.
//
// <spec fn="process_execution_payload" fork="gloas" hash="36bd3af3">
// def process_execution_payload(
@@ -108,7 +109,7 @@ func ProcessExecutionPayload(
st state.BeaconState,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) error {
if err := verifyExecutionPayloadEnvelopeSignature(st, signedEnvelope); err != nil {
return errors.Wrap(err, "signature verification failed")
}
@@ -117,29 +118,132 @@ func ProcessExecutionPayload(
return errors.Wrap(err, "could not get envelope from signed envelope")
}
if err := cacheLatestBlockHeaderStateRoot(ctx, st); err != nil {
return err
}
if err := validatePayloadConsistency(st, envelope); err != nil {
return err
}
if err := applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelope.BlockHash()); err != nil {
return err
}
return verifyPostStateRoot(ctx, st, envelope)
}
// ProcessExecutionPayloadWithDeferredSig is the init-sync entry point: extract the
// signature for deferred verification, validate consistency, apply state
// mutations, and verify the post-payload state root. The caller provides the
// previousStateRoot to avoid recomputing it.
func ProcessExecutionPayloadWithDeferredSig(
ctx context.Context,
st state.BeaconState,
previousStateRoot [32]byte,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) (*bls.SignatureBatch, error) {
sigBatch, err := ExecutionPayloadEnvelopeSignatureBatch(st, signedEnvelope)
if err != nil {
return nil, errors.Wrap(err, "could not extract envelope signature batch")
}
envelope, err := signedEnvelope.Envelope()
if err != nil {
return nil, errors.Wrap(err, "could not get envelope from signed envelope")
}
if err := setLatestBlockHeaderStateRoot(st, previousStateRoot); err != nil {
return nil, errors.Wrap(err, "could not set latest block header state root")
}
if err := validatePayloadConsistency(st, envelope); err != nil {
return nil, err
}
if err := applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelope.BlockHash()); err != nil {
return nil, err
}
if err := verifyPostStateRoot(ctx, st, envelope); err != nil {
return nil, err
}
return sigBatch, nil
}
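The deferred-signature pattern used here can be sketched abstractly: during init-sync each envelope contributes its signature check to a batch, and the batch is verified once at the end instead of per envelope. The `batch` type below is a hypothetical stand-in for the real `bls.SignatureBatch`, which aggregates BLS verifications rather than looping:

```go
package main

import "fmt"

// batch collects deferred signature checks so one combined verification can
// replace many individual ones during initial sync.
type batch struct{ checks []func() bool }

func (b *batch) add(check func() bool) { b.checks = append(b.checks, check) }

// verify runs every deferred check; a real implementation would aggregate
// the BLS pairings instead of verifying one by one.
func (b *batch) verify() bool {
	for _, c := range b.checks {
		if !c() {
			return false
		}
	}
	return true
}

func main() {
	var b batch
	for i := 0; i < 3; i++ {
		b.add(func() bool { return true }) // each envelope's signature check
	}
	fmt.Println(b.verify())
}
```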
// ProcessBlindedExecutionPayload is the replay/stategen entry
// point: patch the block header, do minimal bid consistency checks, and apply
// state mutations. No payload data is available — only the blinded envelope.
// A nil envelope is a no-op (the payload was not delivered for that slot).
func ProcessBlindedExecutionPayload(
ctx context.Context,
st state.BeaconState,
previousStateRoot [32]byte,
envelope interfaces.ROBlindedExecutionPayloadEnvelope,
) error {
if envelope == nil {
return nil
}
if err := setLatestBlockHeaderStateRoot(st, previousStateRoot); err != nil {
return errors.Wrap(err, "could not set latest block header state root")
}
if envelope.Slot() != st.Slot() {
return errors.Errorf("blinded envelope slot does not match state slot: envelope=%d, state=%d", envelope.Slot(), st.Slot())
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get latest execution payload bid")
}
if latestBid == nil {
return errors.New("latest execution payload bid is nil")
}
if envelope.BuilderIndex() != latestBid.BuilderIndex() {
return errors.Errorf(
"blinded envelope builder index does not match committed bid builder index: envelope=%d, bid=%d",
envelope.BuilderIndex(),
latestBid.BuilderIndex(),
)
}
bidBlockHash := latestBid.BlockHash()
envelopeBlockHash := envelope.BlockHash()
if bidBlockHash != envelopeBlockHash {
return errors.Errorf(
"blinded envelope block hash does not match committed bid block hash: envelope=%#x, bid=%#x",
envelopeBlockHash,
bidBlockHash,
)
}
return applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelopeBlockHash)
}
// ApplyExecutionPayload patches the block header state root, validates
// consistency, and applies state mutations. No signature or post-state-root
// verification is performed. Used by the proposer path to compute the
// post-payload state root for the envelope.
func ApplyExecutionPayload(
ctx context.Context,
st state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
if err := cacheLatestBlockHeaderStateRoot(ctx, st); err != nil {
return err
}
if err := validatePayloadConsistency(st, envelope); err != nil {
return err
}
return applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelope.BlockHash())
}
func setLatestBlockHeaderStateRoot(st state.BeaconState, root [32]byte) error {
latestHeader := st.LatestBlockHeader()
latestHeader.StateRoot = root[:]
return st.SetLatestBlockHeader(latestHeader)
}
// cacheLatestBlockHeaderStateRoot fills in the state root on the latest block
// header if it hasn't been set yet (the spec's "cache latest block header
// state root" step).
func cacheLatestBlockHeaderStateRoot(ctx context.Context, st state.BeaconState) error {
latestHeader := st.LatestBlockHeader()
if len(latestHeader.StateRoot) == 0 || bytes.Equal(latestHeader.StateRoot, make([]byte, 32)) {
previousStateRoot, err := st.HashTreeRoot(ctx)
@@ -151,7 +255,13 @@ func ApplyExecutionPayload(
return errors.Wrap(err, "could not set latest block header")
}
}
return nil
}
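The zero-root guard above can be sketched in isolation: the header's state root is filled only when it is still empty or all zeros, so a root patched in earlier (e.g. by `setLatestBlockHeaderStateRoot`) is never overwritten. The `header` type and `fillStateRootIfUnset` name below are simplified stand-ins:

```go
package main

import (
	"bytes"
	"fmt"
)

type header struct{ stateRoot []byte }

// fillStateRootIfUnset mirrors the spec's "cache latest block header state
// root" step: only an unset (empty or all-zero) root is replaced with the
// pre-payload state root.
func fillStateRootIfUnset(h *header, prevRoot [32]byte) {
	if len(h.stateRoot) == 0 || bytes.Equal(h.stateRoot, make([]byte, 32)) {
		h.stateRoot = prevRoot[:]
	}
}

func main() {
	var prev [32]byte
	prev[0] = 0x42
	unset := &header{stateRoot: make([]byte, 32)}
	fillStateRootIfUnset(unset, prev) // filled in
	already := &header{stateRoot: []byte{0x01}}
	fillStateRootIfUnset(already, prev) // left untouched
	fmt.Printf("%x %x\n", unset.stateRoot[0], already.stateRoot[0])
}
```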
// validatePayloadConsistency checks that the envelope and payload are consistent
// with the beacon block header, the committed bid, and the current state.
func validatePayloadConsistency(st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) error {
latestHeader := st.LatestBlockHeader()
blockHeaderRoot, err := latestHeader.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute block header root")
@@ -190,7 +300,6 @@ func ApplyExecutionPayload(
if err != nil {
return errors.Wrap(err, "could not get withdrawals from payload")
}
ok, err := st.WithdrawalsMatchPayloadExpected(withdrawals)
if err != nil {
return errors.Wrap(err, "could not validate payload withdrawals")
@@ -225,14 +334,26 @@ func ApplyExecutionPayload(
return errors.Errorf("payload timestamp does not match expected timestamp: payload=%d, expected=%d", payload.Timestamp(), uint64(t.Unix()))
}
return nil
}
// verifyPostStateRoot checks that the post-payload state root matches the
// envelope's declared state root.
func verifyPostStateRoot(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) error {
r, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute post-envelope state root")
}
if r != envelope.StateRoot() {
return fmt.Errorf("state root mismatch: expected %#x, got %#x", envelope.StateRoot(), r)
}
return nil
}
// applyExecutionPayloadStateMutations applies the state-changing operations
// from an execution payload: process execution requests, queue builder payment,
// set execution payload availability, and update the latest block hash.
func applyExecutionPayloadStateMutations(
ctx context.Context,
st state.BeaconState,
executionRequests *enginev1.ExecutionRequests,
@@ -257,116 +378,6 @@ func ApplyExecutionPayloadStateMutations(
return nil
}
// ApplyBlindedExecutionPayloadEnvelopeForStateGen applies the post-bid state mutations from a
// blinded execution payload envelope for replay/state-generation paths.
// It patches the latest block header with the previous state root, validates minimal consistency
// with the committed bid, and then applies the state mutations.
// A nil envelope is a no-op (the payload was not delivered for that slot).
func ApplyBlindedExecutionPayloadEnvelopeForStateGen(
ctx context.Context,
st state.BeaconState,
previousStateRoot [32]byte,
envelope interfaces.ROBlindedExecutionPayloadEnvelope,
) error {
if envelope == nil {
return nil
}
latestHeader := st.LatestBlockHeader()
latestHeader.StateRoot = previousStateRoot[:]
if err := st.SetLatestBlockHeader(latestHeader); err != nil {
return errors.Wrap(err, "could not set latest block header")
}
if envelope.Slot() != st.Slot() {
return errors.Errorf("blinded envelope slot does not match state slot: envelope=%d, state=%d", envelope.Slot(), st.Slot())
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get latest execution payload bid")
}
if latestBid == nil {
return errors.New("latest execution payload bid is nil")
}
if envelope.BuilderIndex() != latestBid.BuilderIndex() {
return errors.Errorf(
"blinded envelope builder index does not match committed bid builder index: envelope=%d, bid=%d",
envelope.BuilderIndex(),
latestBid.BuilderIndex(),
)
}
bidBlockHash := latestBid.BlockHash()
envelopeBlockHash := envelope.BlockHash()
if bidBlockHash != envelopeBlockHash {
return errors.Errorf(
"blinded envelope block hash does not match committed bid block hash: envelope=%#x, bid=%#x",
envelopeBlockHash,
bidBlockHash,
)
}
return ApplyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelopeBlockHash)
}
func envelopePublicKey(st state.BeaconState, builderIdx primitives.BuilderIndex) (bls.PublicKey, error) {
if builderIdx == params.BeaconConfig().BuilderIndexSelfBuild {
return proposerPublicKey(st)
}
return builderPublicKey(st, builderIdx)
}
func proposerPublicKey(st state.BeaconState) (bls.PublicKey, error) {
header := st.LatestBlockHeader()
if header == nil {
return nil, fmt.Errorf("latest block header is nil")
}
proposerPubkey := st.PubkeyAtIndex(header.ProposerIndex)
publicKey, err := bls.PublicKeyFromBytes(proposerPubkey[:])
if err != nil {
return nil, fmt.Errorf("invalid proposer public key: %w", err)
}
return publicKey, nil
}
func builderPublicKey(st state.BeaconState, builderIdx primitives.BuilderIndex) (bls.PublicKey, error) {
builder, err := st.Builder(builderIdx)
if err != nil {
return nil, fmt.Errorf("failed to get builder: %w", err)
}
if builder == nil {
return nil, fmt.Errorf("builder at index %d not found", builderIdx)
}
publicKey, err := bls.PublicKeyFromBytes(builder.Pubkey)
if err != nil {
return nil, fmt.Errorf("invalid builder public key: %w", err)
}
return publicKey, nil
}
// processExecutionRequests processes deposits, withdrawals, and consolidations from execution requests.
// Spec v1.7.0-alpha.0 (pseudocode):
// for op in requests.deposits: process_deposit_request(state, op)
// for op in requests.withdrawals: process_withdrawal_request(state, op)
// for op in requests.consolidations: process_consolidation_request(state, op)
func processExecutionRequests(ctx context.Context, st state.BeaconState, rqs *enginev1.ExecutionRequests) error {
if err := processDepositRequests(ctx, st, rqs.Deposits); err != nil {
return errors.Wrap(err, "could not process deposit requests")
}
var err error
st, err = requests.ProcessWithdrawalRequests(ctx, st, rqs.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process withdrawal requests")
}
err = requests.ProcessConsolidationRequests(ctx, st, rqs.Consolidations)
if err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
return nil
}
// ExecutionPayloadEnvelopeSignatureBatch extracts the BLS signature from a signed execution payload
// envelope as a SignatureBatch for deferred batch verification.
func ExecutionPayloadEnvelopeSignatureBatch(
@@ -409,69 +420,25 @@ func ExecutionPayloadEnvelopeSignatureBatch(
}, nil
}
// ApplyExecutionPayloadNoVerifySig applies the execution payload envelope to the state without
// verifying the envelope signature (returned as a SignatureBatch for deferred batch verification).
// The caller provides previousStateRoot instead of recomputing it. After applying the payload,
// it verifies the post-envelope state root matches the envelope's declared state root.
func ApplyExecutionPayloadNoVerifySig(
ctx context.Context,
st state.BeaconState,
previousStateRoot [32]byte,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) (*bls.SignatureBatch, error) {
sigBatch, err := ExecutionPayloadEnvelopeSignatureBatch(st, signedEnvelope)
if err != nil {
return nil, errors.Wrap(err, "could not extract envelope signature batch")
}
envelope, err := signedEnvelope.Envelope()
if err != nil {
return nil, errors.Wrap(err, "could not get envelope from signed envelope")
}
latestHeader := st.LatestBlockHeader()
latestHeader.StateRoot = previousStateRoot[:]
if err := st.SetLatestBlockHeader(latestHeader); err != nil {
return nil, errors.Wrap(err, "could not set latest block header")
}
if err := ApplyExecutionPayload(ctx, st, envelope); err != nil {
return nil, err
}
r, err := st.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not compute post-envelope state root")
}
if r != envelope.StateRoot() {
return nil, fmt.Errorf("envelope state root mismatch: expected %#x, got %#x", envelope.StateRoot(), r)
}
return sigBatch, nil
}
// verifyExecutionPayloadEnvelopeSignature verifies the BLS signature on a signed execution payload envelope.
//
// <spec fn="verify_execution_payload_envelope_signature" fork="gloas" style="full" hash="49483ae2">
// def verify_execution_payload_envelope_signature(
// state: BeaconState, signed_envelope: SignedExecutionPayloadEnvelope
// ) -> bool:
// builder_index = signed_envelope.message.builder_index
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// validator_index = state.latest_block_header.proposer_index
// pubkey = state.validators[validator_index].pubkey
// else:
// pubkey = state.builders[builder_index].pubkey
//
// signing_root = compute_signing_root(
// signed_envelope.message, get_domain(state, DOMAIN_BEACON_BUILDER)
// )
// return bls.Verify(pubkey, signing_root, signed_envelope.signature)
// </spec>
func verifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope) error {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return fmt.Errorf("failed to get envelope: %w", err)
@@ -511,3 +478,56 @@ func VerifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelop
return nil
}
func envelopePublicKey(st state.BeaconState, builderIdx primitives.BuilderIndex) (bls.PublicKey, error) {
if builderIdx == params.BeaconConfig().BuilderIndexSelfBuild {
return proposerPublicKey(st)
}
return builderPublicKey(st, builderIdx)
}
func proposerPublicKey(st state.BeaconState) (bls.PublicKey, error) {
header := st.LatestBlockHeader()
if header == nil {
return nil, fmt.Errorf("latest block header is nil")
}
proposerPubkey := st.PubkeyAtIndex(header.ProposerIndex)
publicKey, err := bls.PublicKeyFromBytes(proposerPubkey[:])
if err != nil {
return nil, fmt.Errorf("invalid proposer public key: %w", err)
}
return publicKey, nil
}
func builderPublicKey(st state.BeaconState, builderIdx primitives.BuilderIndex) (bls.PublicKey, error) {
builder, err := st.Builder(builderIdx)
if err != nil {
return nil, fmt.Errorf("failed to get builder: %w", err)
}
if builder == nil {
return nil, fmt.Errorf("builder at index %d not found", builderIdx)
}
publicKey, err := bls.PublicKeyFromBytes(builder.Pubkey)
if err != nil {
return nil, fmt.Errorf("invalid builder public key: %w", err)
}
return publicKey, nil
}
// processExecutionRequests processes deposits, withdrawals, and consolidations from execution requests.
func processExecutionRequests(ctx context.Context, st state.BeaconState, rqs *enginev1.ExecutionRequests) error {
if err := processDepositRequests(ctx, st, rqs.Deposits); err != nil {
return errors.Wrap(err, "could not process deposit requests")
}
var err error
st, err = requests.ProcessWithdrawalRequests(ctx, st, rqs.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process withdrawal requests")
}
err = requests.ProcessConsolidationRequests(ctx, st, rqs.Consolidations)
if err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
return nil
}


@@ -19,6 +19,7 @@ import (
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
@@ -80,7 +81,7 @@ func ProcessPayloadAttestations(ctx context.Context, st state.BeaconState, body
// indexedPayloadAttestation converts a payload attestation into its indexed form.
func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState, att *eth.PayloadAttestation) (*consensus_types.IndexedPayloadAttestation, error) {
committee, err := st.PayloadCommitteeReadOnly(att.Data.Slot)
if err != nil {
return nil, err
}
@@ -99,10 +100,10 @@ func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState
}, nil
}
// computePTC computes the payload timeliness committee for a given slot.
//
// <spec fn="compute_ptc" fork="gloas" hash="0f323552">
// def compute_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// """
// Get the payload timeliness committee for the given ``slot``.
// """
@@ -118,7 +119,7 @@ func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState
// state, indices, seed, size=PTC_SIZE, shuffle_indices=False
// )
// </spec>
func PayloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
func computePTC(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
epoch := slots.ToEpoch(slot)
seed, err := ptcSeed(st, epoch, slot)
if err != nil {
@@ -166,7 +167,7 @@ func PayloadCommitteeIndex(
slot primitives.Slot,
validatorIndex primitives.ValidatorIndex,
) (uint64, error) {
ptc, err := PayloadCommittee(ctx, st, slot)
ptc, err := st.PayloadCommitteeReadOnly(slot)
if err != nil {
return 0, err
}
@@ -342,3 +343,43 @@ func validIndexedPayloadAttestation(st state.ReadOnlyBeaconState, att *consensus
}
return nil
}
// ProcessPTCWindow rotates the cached PTC window at epoch boundaries by computing
// PTC assignments for the new lookahead epoch and shifting the window.
//
// <spec fn="process_ptc_window" fork="gloas" hash="7be3d509">
// def process_ptc_window(state: BeaconState) -> None:
// """
// Update the cached PTC window.
// """
// # Shift all epochs forward by one
// state.ptc_window[: len(state.ptc_window) - SLOTS_PER_EPOCH] = state.ptc_window[SLOTS_PER_EPOCH:]
// # Fill in the last epoch
// next_epoch = Epoch(get_current_epoch(state) + MIN_SEED_LOOKAHEAD + 1)
// start_slot = compute_start_slot_at_epoch(next_epoch)
// state.ptc_window[len(state.ptc_window) - SLOTS_PER_EPOCH :] = [
// compute_ptc(state, Slot(slot)) for slot in range(start_slot, start_slot + SLOTS_PER_EPOCH)
// ]
// </spec>
func ProcessPTCWindow(ctx context.Context, st state.BeaconState) error {
_, span := trace.StartSpan(ctx, "gloas.ProcessPTCWindow")
defer span.End()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
lastEpoch := slots.ToEpoch(st.Slot()) + params.BeaconConfig().MinSeedLookahead + 1
startSlot, err := slots.EpochStart(lastEpoch)
if err != nil {
return err
}
newSlots := make([]*eth.PTCs, slotsPerEpoch)
for i := range slotsPerEpoch {
ptc, err := computePTC(ctx, st, startSlot+primitives.Slot(i))
if err != nil {
return err
}
newSlots[i] = &eth.PTCs{ValidatorIndices: ptc}
}
return st.RotatePTCWindow(newSlots)
}

View File

@@ -2,13 +2,14 @@ package gloas_test
import (
"bytes"
"slices"
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -119,7 +120,6 @@ func TestProcessPayloadAttestations_EmptyAggregationBits(t *testing.T) {
}
func TestProcessPayloadAttestations_HappyPath(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
sk1, pk1 := newKey(t)
@@ -150,7 +150,6 @@ func TestProcessPayloadAttestations_HappyPath(t *testing.T) {
}
func TestProcessPayloadAttestations_MultipleAttestations(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
sk1, pk1 := newKey(t)
@@ -211,12 +210,30 @@ func TestProcessPayloadAttestations_IndexedVerificationError(t *testing.T) {
errIndex: 0,
}
err := gloas.ProcessPayloadAttestations(t.Context(), errState, body)
require.ErrorContains(t, "failed to convert to indexed form", err)
require.ErrorContains(t, "failed to sample beacon committee 0", err)
require.ErrorContains(t, "failed to verify indexed form", err)
require.ErrorContains(t, "validator 0", err)
}
func newTestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) state.BeaconState {
t.Helper()
st, err := testutil.NewBeaconStateGloas(func(seed *eth.BeaconStateGloas) error {
seed.Slot = slot
seed.Validators = vals
seed.Balances = make([]uint64, len(vals))
for i, v := range vals {
seed.Balances[i] = v.EffectiveBalance
}
seed.PtcWindow = deterministicPTCWindow(len(vals))
return nil
})
require.NoError(t, err)
return st
}
func newPhase0TestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) state.BeaconState {
t.Helper()
st, err := testutil.NewBeaconState()
require.NoError(t, err)
for _, v := range vals {
@@ -224,10 +241,25 @@ func newTestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) sta
require.NoError(t, st.AppendBalance(v.EffectiveBalance))
}
require.NoError(t, st.SetSlot(slot))
require.NoError(t, helpers.UpdateCommitteeCache(t.Context(), st, slots.ToEpoch(slot)))
return st
}
func deterministicPTCWindow(validatorCount int) []*eth.PTCs {
window := make([]*eth.PTCs, 3*params.BeaconConfig().SlotsPerEpoch)
indices := make([]primitives.ValidatorIndex, fieldparams.PTCSize)
if validatorCount > 0 {
for i := range indices {
indices[i] = primitives.ValidatorIndex(i % validatorCount)
}
}
for i := range window {
window[i] = &eth.PTCs{
ValidatorIndices: slices.Clone(indices),
}
}
return window
}
func setupTestConfig(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
@@ -292,6 +324,50 @@ func signAttestation(t *testing.T, st state.ReadOnlyBeaconState, data *eth.Paylo
return agg.Marshal()
}
func TestProcessPTCWindow(t *testing.T) {
fuluSt, _ := testutil.DeterministicGenesisStateFulu(t, 256)
st, err := gloas.UpgradeToGloas(fuluSt)
require.NoError(t, err)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Get original window.
origWindow, err := st.PTCWindow()
require.NoError(t, err)
windowSize := int(slotsPerEpoch.Mul(uint64(2 + params.BeaconConfig().MinSeedLookahead)))
require.Equal(t, windowSize, len(origWindow))
// Advance state to next epoch boundary so process_ptc_window sees a new epoch.
require.NoError(t, st.SetSlot(slotsPerEpoch))
// Process PTC window — should rotate.
require.NoError(t, gloas.ProcessPTCWindow(t.Context(), st))
newWindow, err := st.PTCWindow()
require.NoError(t, err)
require.Equal(t, windowSize, len(newWindow))
// The first two epochs should be the old epochs 1 and 2 (shifted left by one epoch).
for i := range 2 * slotsPerEpoch {
require.DeepEqual(t, origWindow[slotsPerEpoch+i], newWindow[i])
}
// The last epoch should be freshly computed — not all zeros.
lastStart := 2 * slotsPerEpoch
for i := range slotsPerEpoch {
ptcSlot := newWindow[lastStart+i]
require.NotNil(t, ptcSlot)
nonZero := false
for _, idx := range ptcSlot.ValidatorIndices {
if idx != 0 {
nonZero = true
break
}
}
require.Equal(t, true, nonZero, "last epoch slot %d should have non-zero validator indices", i)
}
}
type validatorLookupErrState struct {
state.BeaconState
errIndex primitives.ValidatorIndex

View File

@@ -242,13 +242,74 @@ func TestProcessExecutionPayload_Success(t *testing.T) {
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestProcessExecutionPayloadWithDeferredSig_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
header := fixture.state.LatestBlockHeader()
var previousStateRoot [32]byte
copy(previousStateRoot[:], header.StateRoot)
sigBatch, err := ProcessExecutionPayloadWithDeferredSig(t.Context(), fixture.state, previousStateRoot, fixture.signed)
require.NoError(t, err)
require.NotNil(t, sigBatch)
require.Equal(t, 1, len(sigBatch.Signatures))
require.Equal(t, 1, len(sigBatch.PublicKeys))
require.Equal(t, 1, len(sigBatch.Messages))
require.Equal(t, 1, len(sigBatch.Descriptions))
require.Equal(t, "execution payload envelope signature", sigBatch.Descriptions[0])
valid, err := sigBatch.Verify()
require.NoError(t, err)
require.Equal(t, true, valid)
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
var expectedHash [32]byte
copy(expectedHash[:], fixture.payload.BlockHash)
require.Equal(t, expectedHash, latestHash)
available, err := fixture.state.ExecutionPayloadAvailability(fixture.slot)
require.NoError(t, err)
require.Equal(t, uint64(1), available)
updatedHeader := fixture.state.LatestBlockHeader()
require.DeepEqual(t, previousStateRoot[:], updatedHeader.StateRoot)
}
func TestProcessExecutionPayloadWithDeferredSig_PreviousStateRootMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
previousStateRoot := [32]byte{0x42}
_, err := ProcessExecutionPayloadWithDeferredSig(t.Context(), fixture.state, previousStateRoot, fixture.signed)
require.ErrorContains(t, "envelope beacon block root does not match state latest block header root", err)
}
func TestApplyExecutionPayload_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
envelope, err := fixture.signed.Envelope()
require.NoError(t, err)
require.NoError(t, ApplyExecutionPayload(t.Context(), fixture.state, envelope))
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
var expectedHash [32]byte
copy(expectedHash[:], fixture.payload.BlockHash)
require.Equal(t, expectedHash, latestHash)
available, err := fixture.state.ExecutionPayloadAvailability(fixture.slot)
require.NoError(t, err)
require.Equal(t, uint64(1), available)
}
func TestApplyExecutionPayloadStateMutations_UpdatesAvailabilityAndLatestHash(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
newHash := [32]byte{}
newHash[0] = 0x99
require.NoError(t, ApplyExecutionPayloadStateMutations(t.Context(), fixture.state, fixture.envelope.ExecutionRequests, newHash))
require.NoError(t, applyExecutionPayloadStateMutations(t.Context(), fixture.state, fixture.envelope.ExecutionRequests, newHash))
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
@@ -282,12 +343,12 @@ func TestQueueBuilderPayment_ZeroAmountClearsSlot(t *testing.T) {
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_NilEnvelope(t *testing.T) {
func TestProcessBlindedExecutionPayload_NilEnvelope(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
require.NoError(t, ApplyBlindedExecutionPayloadEnvelopeForStateGen(t.Context(), fixture.state, [32]byte{}, nil))
require.NoError(t, ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, nil))
}
func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_Success(t *testing.T) {
func TestProcessBlindedExecutionPayload_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
st := fixture.state
@@ -305,7 +366,7 @@ func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_Success(t *testing.T) {
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
require.NoError(t, ApplyBlindedExecutionPayloadEnvelopeForStateGen(t.Context(), st, stateRoot, wrappedEnv))
require.NoError(t, ProcessBlindedExecutionPayload(t.Context(), st, stateRoot, wrappedEnv))
latestHash, err := st.LatestBlockHash()
require.NoError(t, err)
@@ -319,7 +380,7 @@ func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_Success(t *testing.T) {
require.DeepEqual(t, stateRoot[:], header.StateRoot)
}
func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_SlotMismatch(t *testing.T) {
func TestProcessBlindedExecutionPayload_SlotMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
envelope := &ethpb.SignedBlindedExecutionPayloadEnvelope{
@@ -331,11 +392,11 @@ func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_SlotMismatch(t *testing
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
err = ApplyBlindedExecutionPayloadEnvelopeForStateGen(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
err = ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
require.ErrorContains(t, "blinded envelope slot does not match state slot", err)
}
func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_BuilderIndexMismatch(t *testing.T) {
func TestProcessBlindedExecutionPayload_BuilderIndexMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
blockHash := [32]byte(fixture.payload.BlockHash)
@@ -349,11 +410,11 @@ func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_BuilderIndexMismatch(t
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
err = ApplyBlindedExecutionPayloadEnvelopeForStateGen(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
err = ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
require.ErrorContains(t, "builder index does not match", err)
}
func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_BlockHashMismatch(t *testing.T) {
func TestProcessBlindedExecutionPayload_BlockHashMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
wrongHash := bytes.Repeat([]byte{0xFF}, 32)
@@ -367,7 +428,7 @@ func TestApplyBlindedExecutionPayloadEnvelopeForStateGen_BlockHashMismatch(t *te
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
err = ApplyBlindedExecutionPayloadEnvelopeForStateGen(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
err = ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
require.ErrorContains(t, "block hash does not match", err)
}
@@ -403,14 +464,14 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
require.NoError(t, VerifyExecutionPayloadEnvelopeSignature(st, signed))
require.NoError(t, verifyExecutionPayloadEnvelopeSignature(st, signed))
})
t.Run("builder", func(t *testing.T) {
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(fixture.signedProto)
require.NoError(t, err)
require.NoError(t, VerifyExecutionPayloadEnvelopeSignature(fixture.state, signed))
require.NoError(t, verifyExecutionPayloadEnvelopeSignature(fixture.state, signed))
})
t.Run("invalid signature", func(t *testing.T) {
@@ -436,7 +497,7 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = VerifyExecutionPayloadEnvelopeSignature(st, badSigned)
err = verifyExecutionPayloadEnvelopeSignature(st, badSigned)
require.ErrorContains(t, "invalid signature format", err)
})
@@ -448,7 +509,7 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = VerifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
err = verifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
require.ErrorContains(t, "invalid signature format", err)
})
})

View File

@@ -1,6 +1,8 @@
package gloas
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
@@ -9,12 +11,13 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// UpgradeToGloas upgrades a generic beacon state and returns the Gloas version of the state.
//
// <spec fn="upgrade_to_gloas" fork="gloas" hash="6e66df25">
// <spec fn="upgrade_to_gloas" fork="gloas" hash="8f67112c">
// def upgrade_to_gloas(pre: fulu.BeaconState) -> BeaconState:
// epoch = fulu.get_current_epoch(pre)
//
@@ -81,6 +84,8 @@ import (
// latest_block_hash=pre.latest_execution_payload_header.block_hash,
// # [New in Gloas:EIP7732]
// payload_expected_withdrawals=[],
// # [New in Gloas:EIP7732]
// ptc_window=initialize_ptc_window(pre),
// )
//
// # [New in Gloas:EIP7732]
@@ -143,12 +148,73 @@ func UpgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, errors.Wrap(err, "could not convert to gloas")
}
ptcWindow, err := initializePTCWindow(context.Background(), s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize ptc window")
}
if err := s.SetPTCWindow(ptcWindow); err != nil {
return nil, errors.Wrap(err, "failed to set ptc window")
}
if err := s.OnboardBuildersFromPendingDeposits(); err != nil {
return nil, errors.Wrap(err, "failed to onboard builders from pending deposits")
}
return s, nil
}
// initializePTCWindow builds the initial PTC window for the Gloas fork upgrade.
//
// <spec fn="initialize_ptc_window" fork="gloas" hash="3764b7f5">
// def initialize_ptc_window(
// state: BeaconState,
// ) -> Vector[Vector[ValidatorIndex, PTC_SIZE], (2 + MIN_SEED_LOOKAHEAD) * SLOTS_PER_EPOCH]:
// """
// Return the cached PTC window starting from the current epoch.
// Used to initialize the ``ptc_window`` field in the beacon state at genesis and after forks.
// """
// empty_previous_epoch = [
// Vector[ValidatorIndex, PTC_SIZE]([ValidatorIndex(0) for _ in range(PTC_SIZE)])
// for _ in range(SLOTS_PER_EPOCH)
// ]
//
// ptcs = []
// current_epoch = get_current_epoch(state)
// for e in range(1 + MIN_SEED_LOOKAHEAD):
// epoch = Epoch(current_epoch + e)
// start_slot = compute_start_slot_at_epoch(epoch)
// ptcs += [compute_ptc(state, Slot(start_slot + i)) for i in range(SLOTS_PER_EPOCH)]
//
// return empty_previous_epoch + ptcs
// </spec>
func initializePTCWindow(ctx context.Context, st state.ReadOnlyBeaconState) ([]*ethpb.PTCs, error) {
currentEpoch := slots.ToEpoch(st.Slot())
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
windowSize := slotsPerEpoch.Mul(uint64(2 + params.BeaconConfig().MinSeedLookahead))
window := make([]*ethpb.PTCs, 0, windowSize)
// Previous epoch has no cached data at fork time — fill with empty slots.
for range slotsPerEpoch {
window = append(window, &ethpb.PTCs{
ValidatorIndices: make([]primitives.ValidatorIndex, fieldparams.PTCSize),
})
}
// Compute PTC for current epoch through lookahead.
startSlot, err := slots.EpochStart(currentEpoch)
if err != nil {
return nil, err
}
totalSlots := slotsPerEpoch.Mul(uint64(1 + params.BeaconConfig().MinSeedLookahead))
for i := range totalSlots {
ptc, err := computePTC(ctx, st, startSlot+i)
if err != nil {
return nil, err
}
window = append(window, &ethpb.PTCs{ValidatorIndices: ptc})
}
return window, nil
}
func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
@@ -226,10 +292,6 @@ func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
proposerLookaheadU64 := make([]uint64, len(proposerLookahead))
for i, v := range proposerLookahead {
proposerLookaheadU64[i] = uint64(v)
}
executionPayloadAvailability := make([]byte, int((params.BeaconConfig().SlotsPerHistoricalRoot+7)/8))
for i := range executionPayloadAvailability {
@@ -293,7 +355,7 @@ func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookaheadU64,
ProposerLookahead: proposerLookahead,
Builders: []*ethpb.Builder{},
NextWithdrawalBuilderIndex: primitives.BuilderIndex(0),
ExecutionPayloadAvailability: executionPayloadAvailability,

View File

@@ -103,7 +103,7 @@ func TestUpgradeToGloas_Basic(t *testing.T) {
}
func TestUpgradeToGloas_OnboardsBuilderDeposit(t *testing.T) {
st, _ := util.DeterministicGenesisStateFulu(t, 4)
st, _ := util.DeterministicGenesisStateFulu(t, params.BeaconConfig().MaxValidatorsPerCommittee)
sk, err := bls.RandKey()
require.NoError(t, err)

View File

@@ -658,8 +658,8 @@ func ComputeCommittee(
}
// InitializeProposerLookahead computes the list of proposer indices for the next MIN_SEED_LOOKAHEAD + 1 epochs.
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]uint64, error) {
lookAhead := make([]uint64, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]primitives.ValidatorIndex, error) {
lookAhead := make([]primitives.ValidatorIndex, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
for i := range params.BeaconConfig().MinSeedLookahead + 1 {
indices, err := ActiveValidatorIndices(ctx, state, epoch+i)
if err != nil {
@@ -669,9 +669,7 @@ func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeacon
if err != nil {
return nil, errors.Wrap(err, "could not compute proposer indices")
}
for _, proposerIndex := range proposerIndices {
lookAhead = append(lookAhead, uint64(proposerIndex))
}
lookAhead = append(lookAhead, proposerIndices...)
}
return lookAhead, nil
}

View File

@@ -945,13 +945,8 @@ func TestInitializeProposerLookahead_RegressionTest(t *testing.T) {
endIdx := startIdx + slotsPerEpoch
actualProposers := proposerLookahead[startIdx:endIdx]
expectedUint64 := make([]uint64, len(expectedProposers))
for i, proposer := range expectedProposers {
expectedUint64[i] = uint64(proposer)
}
// This assertion would fail with the original bug:
for i, expected := range expectedUint64 {
for i, expected := range expectedProposers {
require.Equal(t, expected, actualProposers[i],
"Proposer index mismatch at slot %d in epoch %d", i, targetEpoch)
}

View File

@@ -1165,7 +1165,7 @@ func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
lookahead := make([]uint64, 64)
lookahead := make([]primitives.ValidatorIndex, 64)
lookahead[0] = 15
lookahead[1] = 16
lookahead[34] = 42

View File

@@ -26,6 +26,7 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",

View File

@@ -33,24 +33,31 @@ func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCou
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
if sidecar.Index >= fieldparams.NumberOfColumns {
index := sidecar.Index()
if index >= fieldparams.NumberOfColumns {
return ErrIndexTooLarge
}
// A sidecar for zero blobs is invalid.
if len(sidecar.KzgCommitments) == 0 {
kzgCommitments, err := sidecar.KzgCommitments()
if err != nil {
return errors.Wrap(err, "kzg commitments")
}
if len(kzgCommitments) == 0 {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
if len(kzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
column := sidecar.Column()
kzgProofs := sidecar.KzgProofs()
if len(column) != len(kzgCommitments) || len(column) != len(kzgProofs) {
return ErrMismatchLength
}
@@ -67,7 +74,11 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
commitmentsBySidecar := make([][][]byte, len(sidecars))
for i := range sidecars {
commitmentsBySidecar[i] = sidecars[i].KzgCommitments
c, err := sidecars[i].KzgCommitments()
if err != nil {
return errors.Wrapf(err, "sidecar %d kzg commitments", i)
}
commitmentsBySidecar[i] = c
}
return verifyDataColumnsSidecarKZGProofs(sidecars, commitmentsBySidecar)
}
@@ -87,10 +98,11 @@ func verifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn, commitmen
// Compute the total count.
count := 0
for i, sidecar := range sidecars {
if len(sidecar.Column) != len(commitmentsBySidecar[i]) {
column := sidecar.Column()
if len(column) != len(commitmentsBySidecar[i]) {
return ErrMismatchLength
}
count += len(sidecar.Column)
count += len(column)
}
commitments := make([]kzg.Bytes48, 0, count)
@@ -99,7 +111,10 @@ func verifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn, commitmen
proofs := make([]kzg.Bytes48, 0, count)
for sidecarIndex, sidecar := range sidecars {
for i := range sidecar.Column {
column := sidecar.Column()
kzgProofs := sidecar.KzgProofs()
index := sidecar.Index()
for i := range column {
var (
commitment kzg.Bytes48
cell kzg.Cell
@@ -107,8 +122,8 @@ func verifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn, commitmen
)
commitmentBytes := commitmentsBySidecar[sidecarIndex][i]
cellBytes := sidecar.Column[i]
proofBytes := sidecar.KzgProofs[i]
cellBytes := column[i]
proofBytes := kzgProofs[i]
if len(commitmentBytes) != len(commitment) ||
len(cellBytes) != len(cell) ||
@@ -121,7 +136,7 @@ func verifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn, commitmen
copy(proof[:], proofBytes)
commitments = append(commitments, commitment)
indices = append(indices, sidecar.Index)
indices = append(indices, index)
cells = append(cells, cell)
proofs = append(proofs, proof)
}
@@ -143,16 +158,27 @@ func verifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn, commitmen
// VerifyDataColumnSidecarInclusionProof verifies that the given KZG commitments are included in the given beacon block.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
if sidecar.IsGloas() {
return nil
}
signedBlockHeader, err := sidecar.SignedBlockHeader()
if err != nil {
return errors.Wrap(err, "signed block header")
}
if signedBlockHeader == nil || signedBlockHeader.Header == nil {
return ErrNilBlockHeader
}
root := sidecar.SignedBlockHeader.Header.BodyRoot
root := signedBlockHeader.Header.BodyRoot
if len(root) != fieldparams.RootLength {
return ErrBadRootLength
}
leaves := blocks.LeavesFromCommitments(sidecar.KzgCommitments)
kzgCommitments, err := sidecar.KzgCommitments()
if err != nil {
return errors.Wrap(err, "kzg commitments")
}
leaves := blocks.LeavesFromCommitments(kzgCommitments)
sparse, err := trie.GenerateTrieFromItems(leaves, fieldparams.LogMaxBlobCommitments)
if err != nil {
@@ -164,7 +190,11 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
return errors.Wrap(err, "hash tree root")
}
verified := trie.VerifyMerkleProof(root, hashTreeRoot[:], kzgPosition, sidecar.KzgCommitmentsInclusionProof)
kzgInclusionProof, err := sidecar.KzgCommitmentsInclusionProof()
if err != nil {
return errors.Wrap(err, "kzg commitments inclusion proof")
}
verified := trie.VerifyMerkleProof(root, hashTreeRoot[:], kzgPosition, kzgInclusionProof)
if !verified {
return ErrInvalidInclusionProof
}

View File

@@ -70,7 +70,8 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("size mismatch", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
column := sidecars[0].Column()
column[0] = column[0][:len(column[0])-1] // Remove one byte to create size mismatch
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
@@ -78,7 +79,7 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow
sidecars[0].Column()[0][0]++ // It is OK to overflow
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrInvalidKZGProof)
@@ -92,7 +93,7 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("with commitments", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
err := peerdas.VerifyDataColumnsSidecarKZGProofsWithCommitments(sidecars, sidecarCommitments(sidecars))
err := peerdas.VerifyDataColumnsSidecarKZGProofsWithCommitments(sidecars, sidecarCommitments(t, sidecars))
require.NoError(t, err)
})
}
@@ -202,7 +203,7 @@ func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
roDataColumn := blocks.RODataColumn{DataColumnSidecar: tc.dataColumnSidecar}
roDataColumn := blocks.NewRODataColumnNoVerify(tc.dataColumnSidecar)
err = peerdas.VerifyDataColumnSidecarInclusionProof(roDataColumn)
if tc.expectedError == nil {
require.NoError(t, err)
@@ -214,6 +215,13 @@ func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
}
}
func TestVerifyDataColumnSidecarInclusionProof_SkipsGloas(t *testing.T) {
dc := &ethpb.DataColumnSidecarGloas{Index: 0, Column: [][]byte{{0x01}}, KzgProofs: [][]byte{make([]byte, 48)}}
roCol, err := blocks.NewRODataColumnGloas(dc)
require.NoError(t, err)
require.NoError(t, peerdas.VerifyDataColumnSidecarInclusionProof(roCol))
}
func TestComputeSubnetForDataColumnSidecar(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
@@ -354,10 +362,12 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing
}
}
func sidecarCommitments(sidecars []blocks.RODataColumn) [][][]byte {
func sidecarCommitments(t *testing.T, sidecars []blocks.RODataColumn) [][][]byte {
commitmentsBySidecar := make([][][]byte, len(sidecars))
for i := range sidecars {
commitmentsBySidecar[i] = sidecars[i].KzgCommitments
var err error
commitmentsBySidecar[i], err = sidecars[i].KzgCommitments()
require.NoError(t, err)
}
return commitmentsBySidecar
}

View File

@@ -79,9 +79,9 @@ func recoverCellsForBlobs(verifiedRoSidecars []blocks.VerifiedRODataColumn, blob
cells := make([]kzg.Cell, 0, sidecarCount)
for _, sidecar := range verifiedRoSidecars {
cell := sidecar.Column[blobIndex]
cell := sidecar.Column()[blobIndex]
cells = append(cells, kzg.Cell(cell))
cellsIndices = append(cellsIndices, sidecar.Index)
cellsIndices = append(cellsIndices, sidecar.Index())
}
recoveredCells, err := kzg.RecoverCells(cellsIndices, cells)
@@ -116,9 +116,9 @@ func recoverCellsAndProofsForBlobs(verifiedRoSidecars []blocks.VerifiedRODataCol
cells := make([]kzg.Cell, 0, sidecarCount)
for _, sidecar := range verifiedRoSidecars {
cell := sidecar.Column[blobIndex]
cell := sidecar.Column()[blobIndex]
cells = append(cells, kzg.Cell(cell))
cellsIndices = append(cellsIndices, sidecar.Index)
cellsIndices = append(cellsIndices, sidecar.Index())
}
recoveredCells, recoveredProofs, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
@@ -151,10 +151,10 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
referenceSidecar := verifiedRoSidecars[0]
// Check if all columns have the same length and are committed to the same block.
blobCount := len(referenceSidecar.Column)
blobCount := len(referenceSidecar.Column())
blockRoot := referenceSidecar.BlockRoot()
for _, sidecar := range verifiedRoSidecars[1:] {
if len(sidecar.Column) != blobCount {
if len(sidecar.Column()) != blobCount {
return nil, ErrColumnLengthsDiffer
}
@@ -171,7 +171,7 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
// Sort the input sidecars by index.
sort.Slice(verifiedRoSidecars, func(i, j int) bool {
return verifiedRoSidecars[i].Index < verifiedRoSidecars[j].Index
return verifiedRoSidecars[i].Index() < verifiedRoSidecars[j].Index()
})
// Recover cells and compute proofs in parallel.
@@ -209,9 +209,9 @@ func reconstructIfNeeded(verifiedDataColumnSidecars []blocks.VerifiedRODataColum
}
// Check if the sidecars are sorted by index and do not contain duplicates.
previousColumnIndex := verifiedDataColumnSidecars[0].Index
previousColumnIndex := verifiedDataColumnSidecars[0].Index()
for _, dataColumnSidecar := range verifiedDataColumnSidecars[1:] {
columnIndex := dataColumnSidecar.Index
columnIndex := dataColumnSidecar.Index()
if columnIndex <= previousColumnIndex {
return nil, ErrDataColumnSidecarsNotSortedByIndex
}
@@ -226,7 +226,7 @@ func reconstructIfNeeded(verifiedDataColumnSidecars []blocks.VerifiedRODataColum
}
// If all column sidecars corresponding to (non-extended) blobs are present, no need to reconstruct.
if verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1) {
if verifiedDataColumnSidecars[cellsPerBlob-1].Index() == uint64(cellsPerBlob-1) {
return verifiedDataColumnSidecars, nil
}
@@ -415,9 +415,9 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
}
// Check if the sidecars are sorted by index and do not contain duplicates.
previousColumnIndex := verifiedDataColumnSidecars[0].Index
previousColumnIndex := verifiedDataColumnSidecars[0].Index()
for _, dataColumnSidecar := range verifiedDataColumnSidecars[1:] {
columnIndex := dataColumnSidecar.Index
columnIndex := dataColumnSidecar.Index()
if columnIndex <= previousColumnIndex {
return nil, ErrDataColumnSidecarsNotSortedByIndex
}
@@ -433,7 +433,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
// Verify that the actual blob count from the first sidecar matches the expected count
referenceSidecar := verifiedDataColumnSidecars[0]
actualBlobCount := len(referenceSidecar.Column)
actualBlobCount := len(referenceSidecar.Column())
if actualBlobCount != blobCount {
return nil, errors.Errorf("blob count mismatch: expected %d, got %d", blobCount, actualBlobCount)
}
@@ -448,7 +448,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
// Check if all columns have the same length and are committed to the same block.
blockRoot := referenceSidecar.BlockRoot()
for _, sidecar := range verifiedDataColumnSidecars[1:] {
if len(sidecar.Column) != blobCount {
if len(sidecar.Column()) != blobCount {
return nil, ErrColumnLengthsDiffer
}
@@ -458,7 +458,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
}
// Check if we have all non-extended columns (0..63) - if so, no reconstruction needed.
hasAllNonExtendedColumns := verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1)
hasAllNonExtendedColumns := verifiedDataColumnSidecars[cellsPerBlob-1].Index() == uint64(cellsPerBlob-1)
var reconstructedCells map[int][]kzg.Cell
if !hasAllNonExtendedColumns {
@@ -480,7 +480,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
var cell []byte
if hasAllNonExtendedColumns {
// Use existing cells from sidecars
cell = verifiedDataColumnSidecars[columnIndex].Column[blobIndex]
cell = verifiedDataColumnSidecars[columnIndex].Column()[blobIndex]
} else {
// Use reconstructed cells
cell = reconstructedCells[blobIndex][columnIndex][:]
@@ -501,8 +501,14 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
func blobSidecarsFromDataColumnSidecars(roBlock blocks.ROBlock, dataColumnSidecars []blocks.VerifiedRODataColumn, indices []int) ([]*blocks.VerifiedROBlob, error) {
referenceSidecar := dataColumnSidecars[0]
kzgCommitments := referenceSidecar.KzgCommitments
signedBlockHeader := referenceSidecar.SignedBlockHeader
kzgCommitments, err := referenceSidecar.KzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "kzg commitments")
}
signedBlockHeader, err := referenceSidecar.SignedBlockHeader()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, len(indices))
for _, blobIndex := range indices {
@@ -511,7 +517,7 @@ func blobSidecarsFromDataColumnSidecars(roBlock blocks.ROBlock, dataColumnSideca
// Compute the content of the blob.
for columnIndex := range fieldparams.CellsPerBlob {
dataColumnSidecar := dataColumnSidecars[columnIndex]
cell := dataColumnSidecar.Column[blobIndex]
cell := dataColumnSidecar.Column()[blobIndex]
if copy(blob[kzg.BytesPerCell*columnIndex:], cell) != kzg.BytesPerCell {
return nil, errors.New("wrong cell size - should never happen")
}
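The loop above reassembles a blob by concatenating its cells across the non-extended columns. A minimal self-contained sketch of that layout, with the cell size shrunk to 4 bytes for illustration (the real constant is `kzg.BytesPerCell`) and `reassembleBlob` as a hypothetical stand-in for the logic inside `blobSidecarsFromDataColumnSidecars`:

```go
package main

import "fmt"

// bytesPerCell is shrunk for illustration; the real value is kzg.BytesPerCell.
const bytesPerCell = 4

// reassembleBlob concatenates the cell at blobIndex from each column,
// writing each cell at offset bytesPerCell*columnIndex, mirroring the
// copy-and-check pattern in the diff above.
func reassembleBlob(columns [][][]byte, blobIndex int) ([]byte, error) {
	blob := make([]byte, bytesPerCell*len(columns))
	for columnIndex, column := range columns {
		cell := column[blobIndex]
		if copy(blob[bytesPerCell*columnIndex:], cell) != bytesPerCell {
			return nil, fmt.Errorf("wrong cell size - should never happen")
		}
	}
	return blob, nil
}

func main() {
	// Two columns, one blob: cells {1,2,3,4} and {5,6,7,8}.
	columns := [][][]byte{
		{{1, 2, 3, 4}},
		{{5, 6, 7, 8}},
	}
	blob, err := reassembleBlob(columns, 0)
	fmt.Println(blob, err)
}
```

Because `copy` returns the number of bytes copied, a short cell is caught immediately rather than silently leaving zero bytes in the blob.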

View File

@@ -36,7 +36,7 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
// Arbitrarily alter the column with index 3
verifiedRoSidecars[3].Column = verifiedRoSidecars[3].Column[1:]
verifiedRoSidecars[3].DataColumnSidecar().Column = verifiedRoSidecars[3].DataColumnSidecar().Column[1:]
_, err := peerdas.ReconstructDataColumnSidecars(verifiedRoSidecars)
require.ErrorIs(t, err, peerdas.ErrColumnLengthsDiffer)
@@ -88,7 +88,10 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Verify that the reconstructed sidecars are equal to the original ones.
require.DeepSSZEqual(t, inputVerifiedRoSidecars, reconstructedVerifiedRoSidecars)
require.Equal(t, len(inputVerifiedRoSidecars), len(reconstructedVerifiedRoSidecars))
for i := range inputVerifiedRoSidecars {
require.DeepSSZEqual(t, inputVerifiedRoSidecars[i].DataColumnSidecar(), reconstructedVerifiedRoSidecars[i].DataColumnSidecar())
}
})
}

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
@@ -23,11 +24,13 @@ var (
var (
_ ConstructionPopulator = (*BlockReconstructionSource)(nil)
_ ConstructionPopulator = (*SidecarReconstructionSource)(nil)
_ ConstructionPopulator = (*BidReconstructionSource)(nil)
)
const (
BlockType = "BeaconBlock"
SidecarType = "DataColumnSidecar"
BidType = "ExecutionPayloadBid"
)
type (
@@ -37,7 +40,7 @@ type (
ConstructionPopulator interface {
Slot() primitives.Slot
Root() [fieldparams.RootLength]byte
ProposerIndex() primitives.ValidatorIndex
ProposerIndex() (primitives.ValidatorIndex, error)
Commitments() ([][]byte, error)
Type() string
@@ -49,11 +52,17 @@ type (
blocks.ROBlock
}
// DataColumnSidecar is a ConstructionPopulator that uses a data column sidecar as the source of data
// SidecarReconstructionSource is a ConstructionPopulator that uses a data column sidecar as the source of data
SidecarReconstructionSource struct {
blocks.VerifiedRODataColumn
}
// BidReconstructionSource is a ConstructionPopulator that uses the execution payload bid
// from a Gloas beacon block to extract KZG commitments for data column sidecar construction.
BidReconstructionSource struct {
blocks.ROBlock
}
blockInfo struct {
signedBlockHeader *ethpb.SignedBeaconBlockHeader
kzgCommitments [][]byte
@@ -71,6 +80,14 @@ func PopulateFromSidecar(sidecar blocks.VerifiedRODataColumn) *SidecarReconstruc
return &SidecarReconstructionSource{VerifiedRODataColumn: sidecar}
}
// PopulateFromBid creates a BidReconstructionSource from a Gloas beacon block.
// In Gloas (ePBS), the execution payload is delivered separately via the payload envelope,
// but the KZG commitments are available in the bid embedded in the block, allowing
// data column sidecars to be constructed from the EL as soon as the block arrives.
func PopulateFromBid(block blocks.ROBlock) *BidReconstructionSource {
return &BidReconstructionSource{ROBlock: block}
}
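The `ConstructionPopulator` interface lets `DataColumnSidecars` build sidecars from any of three sources (block body, sidecar, payload bid) through the same accessors; `ProposerIndex` and `Commitments` return errors because some backing formats cannot supply them. A simplified, self-contained sketch of that pattern, with stand-in types rather than the real prysm ones:

```go
package main

import "fmt"

// populator is a cut-down stand-in for ConstructionPopulator: a common
// set of accessors over different underlying data sources.
type populator interface {
	Slot() uint64
	Commitments() ([][]byte, error)
	Type() string
}

// bidSource mirrors BidReconstructionSource: commitments come from the
// execution payload bid rather than the block body.
type bidSource struct {
	slot        uint64
	commitments [][]byte
}

func (b *bidSource) Slot() uint64                   { return b.slot }
func (b *bidSource) Commitments() ([][]byte, error) { return b.commitments, nil }
func (b *bidSource) Type() string                   { return "ExecutionPayloadBid" }

func main() {
	var src populator = &bidSource{slot: 42, commitments: [][]byte{{0xAA}}}
	comms, err := src.Commitments()
	fmt.Println(src.Type(), src.Slot(), len(comms), err)
}
```

Callers depend only on the interface, so adding the bid-backed source in this PR did not change the sidecar-construction code path itself.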
// ValidatorsCustodyRequirement returns the number of custody groups regarding the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(st beaconState.ReadOnlyBalances, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
@@ -111,33 +128,93 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
isGloas := slots.ToEpoch(src.Slot()) >= params.BeaconConfig().GloasForkEpoch
root := src.Root()
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
KzgCommitments: info.kzgCommitments,
KzgProofs: proofs[idx],
SignedBlockHeader: info.signedBlockHeader,
KzgCommitmentsInclusionProof: info.kzgInclusionProof,
if isGloas {
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecarGloas{
Index: idx,
Column: cells[idx],
KzgProofs: proofs[idx],
Slot: src.Slot(),
BeaconBlockRoot: root[:],
}
if len(sidecar.Column) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnGloasWithRoot(sidecar, root)
if err != nil {
return nil, errors.Wrap(err, "new ro data column gloas")
}
roSidecars = append(roSidecars, roSidecar)
}
} else {
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
KzgCommitments: info.kzgCommitments,
KzgProofs: proofs[idx],
SignedBlockHeader: info.signedBlockHeader,
KzgCommitmentsInclusionProof: info.kzgInclusionProof,
}
if len(sidecar.KzgCommitments) != len(sidecar.Column) || len(sidecar.KzgCommitments) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, root)
if err != nil {
return nil, errors.Wrap(err, "new ro data column")
}
roSidecars = append(roSidecars, roSidecar)
}
}
if len(sidecar.KzgCommitments) != len(sidecar.Column) || len(sidecar.KzgCommitments) != len(sidecar.KzgProofs) {
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return roSidecars, nil
}
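The branch above is gated on `slots.ToEpoch(src.Slot()) >= params.BeaconConfig().GloasForkEpoch`. A minimal sketch of that fork gate, assuming the mainnet value of 32 slots per epoch (illustrative, not read from the real config):

```go
package main

import "fmt"

// slotsPerEpoch is hard-coded for illustration; prysm reads it from config.
const slotsPerEpoch = 32

// isGloas reports whether slot falls at or after the Gloas fork epoch.
// slots.ToEpoch is integer division by the slots-per-epoch constant.
func isGloas(slot, gloasForkEpoch uint64) bool {
	return slot/slotsPerEpoch >= gloasForkEpoch
}

func main() {
	fmt.Println(isGloas(63, 2)) // slot 63 is epoch 1: pre-Gloas
	fmt.Println(isGloas(64, 2)) // slot 64 is epoch 2: Gloas
}
```

Because the check uses the slot's epoch rather than the slot itself, the format switch happens exactly at the first slot of the fork epoch.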
// DataColumnSidecarsGloas constructs Gloas-format data column sidecars directly from cells, proofs,
// slot, and block root. Used by the proposer when building sidecars outside the ConstructionPopulator flow.
func DataColumnSidecarsGloas(
cellsPerBlob [][]kzg.Cell,
proofsPerBlob [][]kzg.Proof,
slot primitives.Slot,
beaconBlockRoot [32]byte,
) ([]blocks.RODataColumn, error) {
const numberOfColumns = uint64(fieldparams.NumberOfColumns)
if len(cellsPerBlob) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecarGloas{
Index: idx,
Column: cells[idx],
KzgProofs: proofs[idx],
Slot: slot,
BeaconBlockRoot: beaconBlockRoot[:],
}
if len(sidecar.Column) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, src.Root())
roSidecar, err := blocks.NewRODataColumnGloasWithRoot(sidecar, beaconBlockRoot)
if err != nil {
return nil, errors.Wrap(err, "new ro data column")
return nil, errors.Wrap(err, "new ro data column gloas")
}
roSidecars = append(roSidecars, roSidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return roSidecars, nil
}
@@ -148,8 +225,8 @@ func (s *BlockReconstructionSource) Slot() primitives.Slot {
}
// ProposerIndex returns the proposer index of the source
func (s *BlockReconstructionSource) ProposerIndex() primitives.ValidatorIndex {
return s.Block().ProposerIndex()
func (s *BlockReconstructionSource) ProposerIndex() (primitives.ValidatorIndex, error) {
return s.Block().ProposerIndex(), nil
}
// Commitments returns the blob KZG commitments of the source
@@ -168,32 +245,24 @@ func (s *BlockReconstructionSource) Type() string {
return BlockType
}
// extract extracts the block information from the source
func (b *BlockReconstructionSource) extract() (*blockInfo, error) {
block := b.Block()
header, err := b.Header()
if err != nil {
return nil, errors.Wrap(err, "header")
}
commitments, err := block.Body().BlobKzgCommitments()
commitments, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "commitments")
}
inclusionProof, err := blocks.MerkleProofKZGCommitments(block.Body())
inclusionProof, err := blocks.MerkleProofKZGCommitments(b.Block().Body())
if err != nil {
return nil, errors.Wrap(err, "merkle proof kzg commitments")
}
info := &blockInfo{
return &blockInfo{
signedBlockHeader: header,
kzgCommitments: commitments,
kzgInclusionProof: inclusionProof,
}
return info, nil
}, nil
}
// rotateRowsToCols takes a 2D slice of cells and proofs, where the x is rows (blobs) and y is columns,
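The body of `rotateRowsToCols` is elided from this diff, so the following is a hedged sketch based only on its doc comment: input is blob-major (rows are blobs), output is column-major so each sidecar receives one cell per blob. The real function rotates cells and proofs together; this sketch handles a single matrix and uses ints for brevity:

```go
package main

import (
	"errors"
	"fmt"
)

// rotate transposes a blob-major matrix into a column-major one, as the
// rotateRowsToCols doc comment describes. Every row must have exactly
// numCols entries.
func rotate(rows [][]int, numCols int) ([][]int, error) {
	cols := make([][]int, numCols)
	for c := range cols {
		cols[c] = make([]int, len(rows))
	}
	for r, row := range rows {
		if len(row) != numCols {
			return nil, errors.New("row length mismatch")
		}
		for c, v := range row {
			cols[c][r] = v
		}
	}
	return cols, nil
}

func main() {
	// Two blobs of three cells each become three columns of two cells each.
	cols, err := rotate([][]int{{1, 2, 3}, {4, 5, 6}}, 3)
	fmt.Println(cols, err)
}
```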
@@ -235,7 +304,7 @@ func (s *SidecarReconstructionSource) Root() [fieldparams.RootLength]byte {
// Commitments returns the blob KZG commitments of the source
func (s *SidecarReconstructionSource) Commitments() ([][]byte, error) {
return s.KzgCommitments, nil
return s.KzgCommitments()
}
// Type returns the type of the source
@@ -243,13 +312,61 @@ func (s *SidecarReconstructionSource) Type() string {
return SidecarType
}
// extract extracts the block information from the source
func (s *SidecarReconstructionSource) extract() (*blockInfo, error) {
info := &blockInfo{
signedBlockHeader: s.SignedBlockHeader,
kzgCommitments: s.KzgCommitments,
kzgInclusionProof: s.KzgCommitmentsInclusionProof,
sbh, err := s.SignedBlockHeader()
if err != nil {
return nil, err
}
return info, nil
comms, err := s.KzgCommitments()
if err != nil {
return nil, err
}
incProof, err := s.KzgCommitmentsInclusionProof()
if err != nil {
return nil, err
}
return &blockInfo{
signedBlockHeader: sbh,
kzgCommitments: comms,
kzgInclusionProof: incProof,
}, nil
}
// Slot returns the slot of the source
func (s *BidReconstructionSource) Slot() primitives.Slot {
return s.Block().Slot()
}
// ProposerIndex returns the proposer index of the source
func (s *BidReconstructionSource) ProposerIndex() (primitives.ValidatorIndex, error) {
return s.Block().ProposerIndex(), nil
}
// Commitments returns the blob KZG commitments from the execution payload bid
func (s *BidReconstructionSource) Commitments() ([][]byte, error) {
bid, err := s.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return nil, errors.Wrap(err, "signed execution payload bid")
}
return bid.Message.BlobKzgCommitments, nil
}
// Type returns the type of the source
func (s *BidReconstructionSource) Type() string {
return BidType
}
func (s *BidReconstructionSource) extract() (*blockInfo, error) {
commitments, err := s.Commitments()
if err != nil {
return nil, err
}
header, err := s.Header()
if err != nil {
return nil, errors.Wrap(err, "header")
}
return &blockInfo{
signedBlockHeader: header,
kzgCommitments: commitments,
}, nil
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -176,22 +177,24 @@ func TestDataColumnSidecars(t *testing.T) {
// Verify each sidecar has the expected structure
for i, sidecar := range sidecars {
require.Equal(t, uint64(i), sidecar.Index)
require.Equal(t, 2, len(sidecar.Column))
require.Equal(t, 2, len(sidecar.KzgCommitments))
require.Equal(t, 2, len(sidecar.KzgProofs))
require.Equal(t, uint64(i), sidecar.Index())
require.Equal(t, 2, len(sidecar.Column()))
comms, err := sidecar.KzgCommitments()
require.NoError(t, err)
require.Equal(t, 2, len(comms))
require.Equal(t, 2, len(sidecar.KzgProofs()))
// Verify commitments match what we set
require.DeepEqual(t, commitment1, sidecar.KzgCommitments[0])
require.DeepEqual(t, commitment2, sidecar.KzgCommitments[1])
require.DeepEqual(t, commitment1, comms[0])
require.DeepEqual(t, commitment2, comms[1])
// Verify column data comes from the correct cells
require.Equal(t, byte(i), sidecar.Column[0][0])
require.Equal(t, byte(i+128), sidecar.Column[1][0])
require.Equal(t, byte(i), sidecar.Column()[0][0])
require.Equal(t, byte(i+128), sidecar.Column()[1][0])
// Verify proofs come from the correct proofs
require.Equal(t, byte(i), sidecar.KzgProofs[0][0])
require.Equal(t, byte(i+128), sidecar.KzgProofs[1][0])
require.Equal(t, byte(i), sidecar.KzgProofs()[0][0])
require.Equal(t, byte(i+128), sidecar.KzgProofs()[1][0])
}
})
}
@@ -241,7 +244,9 @@ func TestReconstructionSource(t *testing.T) {
src := peerdas.PopulateFromBlock(rob)
require.Equal(t, rob.Block().Slot(), src.Slot())
require.Equal(t, rob.Root(), src.Root())
require.Equal(t, rob.Block().ProposerIndex(), src.ProposerIndex())
srcPI, err := src.ProposerIndex()
require.NoError(t, err)
require.Equal(t, rob.Block().ProposerIndex(), srcPI)
commitments, err := src.Commitments()
require.NoError(t, err)
@@ -257,7 +262,11 @@ func TestReconstructionSource(t *testing.T) {
src := peerdas.PopulateFromSidecar(referenceSidecar)
require.Equal(t, referenceSidecar.Slot(), src.Slot())
require.Equal(t, referenceSidecar.BlockRoot(), src.Root())
require.Equal(t, referenceSidecar.ProposerIndex(), src.ProposerIndex())
refPI, err := referenceSidecar.ProposerIndex()
require.NoError(t, err)
srcPI, err := src.ProposerIndex()
require.NoError(t, err)
require.Equal(t, refPI, srcPI)
commitments, err := src.Commitments()
require.NoError(t, err)
@@ -267,4 +276,87 @@ func TestReconstructionSource(t *testing.T) {
require.Equal(t, peerdas.SidecarType, src.Type())
})
t.Run("from bid", func(t *testing.T) {
bidCommitment1 := make([]byte, 48)
bidCommitment2 := make([]byte, 48)
bidCommitment1[0] = 0xAA
bidCommitment2[0] = 0xBB
gloasBlockPb := util.NewBeaconBlockGloas()
gloasBlockPb.Block.Body.SignedExecutionPayloadBid.Message.BlobKzgCommitments = [][]byte{bidCommitment1, bidCommitment2}
gloasBlockPb.Block.Slot = 42
gloasBlockPb.Block.ProposerIndex = 7
signedGloasBlock, err := blocks.NewSignedBeaconBlock(gloasBlockPb)
require.NoError(t, err)
gloasRob, err := blocks.NewROBlock(signedGloasBlock)
require.NoError(t, err)
src := peerdas.PopulateFromBid(gloasRob)
require.Equal(t, primitives.Slot(42), src.Slot())
require.Equal(t, gloasRob.Root(), src.Root())
bidPI, err := src.ProposerIndex()
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(7), bidPI)
commitments, err := src.Commitments()
require.NoError(t, err)
require.Equal(t, 2, len(commitments))
require.DeepEqual(t, bidCommitment1, commitments[0])
require.DeepEqual(t, bidCommitment2, commitments[1])
require.Equal(t, peerdas.BidType, src.Type())
})
}
func TestPopulateFromBid_DataColumnSidecars(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
bidCommitment1 := make([]byte, 48)
bidCommitment2 := make([]byte, 48)
bidCommitment1[0] = 0xAA
bidCommitment2[0] = 0xBB
gloasBlockPb := util.NewBeaconBlockGloas()
gloasBlockPb.Block.Body.SignedExecutionPayloadBid.Message.BlobKzgCommitments = [][]byte{bidCommitment1, bidCommitment2}
signedGloasBlock, err := blocks.NewSignedBeaconBlock(gloasBlockPb)
require.NoError(t, err)
gloasRob, err := blocks.NewROBlock(signedGloasBlock)
require.NoError(t, err)
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),
}
proofsPerBlob := [][]kzg.Proof{
make([]kzg.Proof, numberOfColumns),
make([]kzg.Proof, numberOfColumns),
}
for i := range numberOfColumns {
cellsPerBlob[0][i][0] = byte(i)
proofsPerBlob[0][i][0] = byte(i)
cellsPerBlob[1][i][0] = byte(i + 128)
proofsPerBlob[1][i][0] = byte(i + 128)
}
sidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBid(gloasRob))
require.NoError(t, err)
require.Equal(t, int(numberOfColumns), len(sidecars))
for i, sidecar := range sidecars {
require.Equal(t, true, sidecar.IsGloas())
require.Equal(t, uint64(i), sidecar.Index())
require.Equal(t, 2, len(sidecar.Column()))
require.Equal(t, 2, len(sidecar.KzgProofs()))
}
}

View File

@@ -46,16 +46,21 @@ func DataColumnsAlignWithBlock(block blocks.ROBlock, dataColumns []blocks.ROData
return ErrRootMismatch
}
dcKzgCommitments, err := dataColumn.KzgCommitments()
if err != nil {
return errors.Wrap(err, "kzg commitments")
}
// Check if the content length of the data column sidecar matches the block.
if len(dataColumn.Column) != blockCommitmentCount ||
len(dataColumn.KzgCommitments) != blockCommitmentCount ||
len(dataColumn.KzgProofs) != blockCommitmentCount {
if len(dataColumn.Column()) != blockCommitmentCount ||
len(dcKzgCommitments) != blockCommitmentCount ||
len(dataColumn.KzgProofs()) != blockCommitmentCount {
return ErrBlockColumnSizeMismatch
}
// Check if the commitments of the data column sidecar match the block.
for i := range blockCommitments {
if !bytes.Equal(blockCommitments[i], dataColumn.KzgCommitments[i]) {
if !bytes.Equal(blockCommitments[i], dcKzgCommitments[i]) {
return ErrCommitmentMismatch
}
}

View File

@@ -41,21 +41,21 @@ func TestDataColumnsAlignWithBlock(t *testing.T) {
t.Run("column size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].Column = [][]byte{}
sidecars[0].DataColumnSidecar().Column = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG commitments size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgCommitments = [][]byte{}
sidecars[0].DataColumnSidecar().KzgCommitments = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG proofs mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgProofs = [][]byte{}
sidecars[0].DataColumnSidecar().KzgProofs = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
@@ -63,7 +63,7 @@ func TestDataColumnsAlignWithBlock(t *testing.T) {
t.Run("commitment mismatch", func(t *testing.T) {
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
_, alteredSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
alteredSidecars[1].KzgCommitments[0][0]++ // Overflow is OK
alteredSidecars[1].DataColumnSidecar().KzgCommitments[0][0]++ // Overflow is OK
err := peerdas.DataColumnsAlignWithBlock(block, alteredSidecars)
require.ErrorIs(t, err, peerdas.ErrCommitmentMismatch)
})

View File

@@ -130,7 +130,7 @@ func gloasOperations(ctx context.Context, st state.BeaconState, block interfaces
//
// Spec definition:
//
// <spec fn="process_epoch" fork="gloas" hash="393b69ef">
// <spec fn="process_epoch" fork="gloas" hash="bf3575a9">
// def process_epoch(state: BeaconState) -> None:
// process_justification_and_finalization(state)
// process_inactivity_updates(state)
@@ -149,6 +149,8 @@ func gloasOperations(ctx context.Context, st state.BeaconState, block interfaces
// process_participation_flag_updates(state)
// process_sync_committee_updates(state)
// process_proposer_lookahead(state)
// # [New in Gloas:EIP7732]
// process_ptc_window(state)
// </spec>
func processEpochGloas(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "gloas.ProcessEpoch")
@@ -222,5 +224,5 @@ func processEpochGloas(ctx context.Context, state state.BeaconState) error {
if err := fulu.ProcessProposerLookahead(ctx, state); err != nil {
return err
}
return nil
return gloas.ProcessPTCWindow(ctx, state)
}

View File

@@ -183,7 +183,7 @@ func newGloasForkBoundaryState(
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
PayloadExpectedWithdrawals: make([]*engine.Withdrawal, 0),
ProposerLookahead: make([]uint64, 0),
ProposerLookahead: make([]primitives.ValidatorIndex, 0),
Builders: make([]*ethpb.Builder, 0),
}
for i := range protoState.BlockRoots {

View File

@@ -116,17 +116,32 @@ func CalculateStateRoot(
rollback state.BeaconState,
signed interfaces.ReadOnlySignedBeaconBlock,
) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "core.state.CalculateStateRoot")
st, err := CalculatePostState(ctx, rollback, signed)
if err != nil {
return [32]byte{}, err
}
return st.HashTreeRoot(ctx)
}
// CalculatePostState returns the post-block state after processing the given
// block on a copy of the input state. It is identical to CalculateStateRoot
// but returns the full state instead of just its hash tree root.
func CalculatePostState(
ctx context.Context,
rollback state.BeaconState,
signed interfaces.ReadOnlySignedBeaconBlock,
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "core.state.CalculatePostState")
defer span.End()
if ctx.Err() != nil {
tracing.AnnotateError(span, ctx.Err())
return [32]byte{}, ctx.Err()
return nil, ctx.Err()
}
if rollback == nil || rollback.IsNil() {
return [32]byte{}, errors.New("nil state")
return nil, errors.New("nil state")
}
if signed == nil || signed.IsNil() || signed.Block().IsNil() {
return [32]byte{}, errors.New("nil block")
return nil, errors.New("nil block")
}
// Copy state to avoid mutating the state reference.
@@ -136,22 +151,22 @@ func CalculateStateRoot(
var err error
state, err = ProcessSlotsForBlock(ctx, state, signed.Block())
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process slots")
return nil, errors.Wrap(err, "could not process slots")
}
// Execute per block transition.
if features.Get().EnableProposerPreprocessing {
state, err = processBlockForProposing(ctx, state, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block for proposing")
return nil, errors.Wrap(err, "could not process block for proposing")
}
} else {
state, err = ProcessBlockForStateRoot(ctx, state, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block")
return nil, errors.Wrap(err, "could not process block")
}
}
return state.HashTreeRoot(ctx)
return state, nil
}
// processBlockVerifySigs processes the block and verifies the signatures within it. Block signatures are not verified as this block is not yet signed.

View File

@@ -204,7 +204,7 @@ func (s *LazilyPersistentStoreColumn) columnsNotStored(sidecars []blocks.RODataC
sum = s.store.Summary(sc.BlockRoot())
lastRoot = sc.BlockRoot()
}
if sum.HasIndex(sc.Index) {
if sum.HasIndex(sc.Index()) {
stored[i] = struct{}{}
}
}

View File

@@ -875,7 +875,7 @@ func TestColumnsNotStored(t *testing.T) {
if len(tc.stored) > 0 {
resultIndices := make(map[uint64]bool)
for _, col := range result {
resultIndices[col.Index] = true
resultIndices[col.Index()] = true
}
for _, storedIdx := range tc.stored {
require.Equal(t, false, resultIndices[storedIdx],
@@ -887,8 +887,8 @@ func TestColumnsNotStored(t *testing.T) {
if len(tc.expected) > 0 && len(tc.stored) == 0 {
// Only check exact order for non-stored cases (where we know they stay in same order)
for i, expectedIdx := range tc.expected {
require.Equal(t, columns[expectedIdx].Index, result[i].Index,
fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index, result[i].Index))
require.Equal(t, columns[expectedIdx].Index(), result[i].Index(),
fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index(), result[i].Index()))
}
}

View File

@@ -66,10 +66,11 @@ type dataColumnCacheEntry struct {
// stash will return an error if the given data column Index is out of bounds.
// It will overwrite any existing entry for the same index.
func (e *dataColumnCacheEntry) stash(sc blocks.RODataColumn) error {
if sc.Index >= fieldparams.NumberOfColumns {
return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
index := sc.Index()
if index >= fieldparams.NumberOfColumns {
return errors.Wrapf(errColumnIndexTooHigh, "index=%d", index)
}
e.scs[sc.Index] = sc
e.scs[index] = sc
return nil
}

View File

@@ -23,7 +23,7 @@ func TestEnsureDeleteSetDiskSummary(t *testing.T) {
expect, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
require.NoError(t, entry.stash(expect[0]))
require.Equal(t, 1, len(entry.scs))
cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index}))
cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index()}))
require.NoError(t, err)
require.DeepEqual(t, expect[0], cols[0])
@@ -109,10 +109,10 @@ func TestAppendDataColumns(t *testing.T) {
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
slices.SortFunc(actual, func(i, j blocks.RODataColumn) int {
return int(i.Index) - int(j.Index)
return int(i.Index()) - int(j.Index())
})
for i := range expected {
require.Equal(t, expected[i].Index, actual[i].Index)
require.Equal(t, expected[i].Index(), actual[i].Index())
}
require.Equal(t, 1, len(original))
})

View File

@@ -369,7 +369,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Group data column sidecars by root.
for _, dataColumnSidecar := range dataColumnSidecars {
// Check if the data column index is too large.
if dataColumnSidecar.Index >= mandatoryNumberOfColumns {
if dataColumnSidecar.Index() >= mandatoryNumberOfColumns {
return errDataColumnIndexTooLarge
}
@@ -396,7 +396,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Get all indices.
indices := make([]uint64, 0, len(dataColumnSidecars))
for _, dataColumnSidecar := range dataColumnSidecars {
indices = append(indices, dataColumnSidecar.Index)
indices = append(indices, dataColumnSidecar.Index())
}
// Compute the data columns ident.
@@ -546,7 +546,7 @@ func (dcs *DataColumnStorage) Get(root [fieldparams.RootLength]byte, indices []u
return nil, errors.Wrap(err, "seek")
}
verifiedRODataColumn, err := verification.VerifiedRODataColumnFromDisk(file, root, metadata.sszEncodedDataColumnSidecarSize)
verifiedRODataColumn, err := verification.VerifiedRODataColumnFromDisk(file, root, metadata.sszEncodedDataColumnSidecarSize, summary.epoch)
if err != nil {
return nil, errors.Wrap(err, "verified RO data column from disk")
}
@@ -733,7 +733,7 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
for _, dataColumnSidecar := range dataColumnSidecars {
// Extract the data columns index.
dataColumnIndex := dataColumnSidecar.Index
dataColumnIndex := dataColumnSidecar.Index()
ok, _, err := metadata.indices.get(dataColumnIndex)
if err != nil {
@@ -830,7 +830,7 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
for _, dataColumnSidecar := range dataColumnSidecars {
// Extract the data column index.
dataColumnIndex := dataColumnSidecar.Index
dataColumnIndex := dataColumnSidecar.Index()
// Skip if the data column is already stored.
ok, _, err := indices.get(dataColumnIndex)

View File

@@ -112,7 +112,7 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
alteredVerifiedRoDataColumnSidecars = append(alteredVerifiedRoDataColumnSidecars, verifiedRoDataColumnSidecars[0])
altered, err := blocks.NewRODataColumnWithRoot(
verifiedRoDataColumnSidecars[1].RODataColumn.DataColumnSidecar,
verifiedRoDataColumnSidecars[1].RODataColumn.DataColumnSidecar(),
verifiedRoDataColumnSidecars[0].BlockRoot(),
)
require.NoError(t, err)
@@ -263,7 +263,7 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
)
// Build expected bytes.
firstSszEncodedDataColumnSidecar, err := expectedDataColumnSidecars[0].MarshalSSZ()
firstSszEncodedDataColumnSidecar, err := expectedDataColumnSidecars[0].RODataColumn.DataColumnSidecar().MarshalSSZ()
require.NoError(t, err)
dataColumnSidecarsCount := len(expectedDataColumnSidecars)
@@ -272,7 +272,7 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
sszEncodedDataColumnSidecars := make([]byte, 0, dataColumnSidecarsCount*sszEncodedDataColumnSidecarSize)
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, firstSszEncodedDataColumnSidecar...)
for _, dataColumnSidecar := range expectedDataColumnSidecars[1:] {
sszEncodedDataColumnSidecar, err := dataColumnSidecar.MarshalSSZ()
sszEncodedDataColumnSidecar, err := dataColumnSidecar.RODataColumn.DataColumnSidecar().MarshalSSZ()
require.NoError(t, err)
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
@@ -362,11 +362,17 @@ func TestGetDataColumnSidecars(t *testing.T) {
verifiedRODataColumnSidecars, err := dataColumnStorage.Get(root, nil)
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
require.Equal(t, len(expectedVerifiedRoDataColumnSidecars), len(verifiedRODataColumnSidecars))
for i := range expectedVerifiedRoDataColumnSidecars {
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars[i].DataColumnSidecar(), verifiedRODataColumnSidecars[i].DataColumnSidecar())
}
verifiedRODataColumnSidecars, err = dataColumnStorage.Get(root, []uint64{12, 13, 14})
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
require.Equal(t, len(expectedVerifiedRoDataColumnSidecars), len(verifiedRODataColumnSidecars))
for i := range expectedVerifiedRoDataColumnSidecars {
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars[i].DataColumnSidecar(), verifiedRODataColumnSidecars[i].DataColumnSidecar())
}
})
}
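
The hunks above replace direct field access (`dataColumnSidecar.Index`) with an accessor method (`dataColumnSidecar.Index()`). A minimal sketch of why this pattern helps, using hypothetical types rather than Prysm's real ones: a read-only wrapper that exposes an accessor can later change its internal representation (for example, wrapping a versioned union) without touching any call site.

```go
package main

import "fmt"

// sidecar is a hypothetical mutable struct, standing in for the
// underlying sidecar proto; it is not Prysm's actual type.
type sidecar struct {
	index uint64
}

// roSidecar is a read-only wrapper. Exposing Index() as a method
// instead of a public field keeps callers decoupled from the
// wrapper's internals.
type roSidecar struct {
	inner *sidecar
}

// Index returns the column index via an accessor rather than a field.
func (r roSidecar) Index() uint64 {
	return r.inner.index
}

func main() {
	ro := roSidecar{inner: &sidecar{index: 42}}
	fmt.Println(ro.Index())
}
```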

View File

@@ -60,6 +60,10 @@ var (
GetPayloadMethodV5,
GetBlobsV2,
}
gloasEngineEndpoints = []string{
NewPayloadMethodV5,
}
)
// ClientVersionV1 represents the response from engine_getClientVersionV1.
@@ -80,6 +84,8 @@ const (
NewPayloadMethodV3 = "engine_newPayloadV3"
// NewPayloadMethodV4 is the engine_newPayloadVX method added at Electra.
NewPayloadMethodV4 = "engine_newPayloadV4"
// NewPayloadMethodV5 is the engine_newPayloadVX method added at Gloas.
NewPayloadMethodV5 = "engine_newPayloadV5"
// ForkchoiceUpdatedMethod v1 request string for JSON-RPC.
ForkchoiceUpdatedMethod = "engine_forkchoiceUpdatedV1"
// ForkchoiceUpdatedMethodV2 v2 request string for JSON-RPC.
@@ -148,7 +154,7 @@ type Reconstructor interface {
// EngineCaller defines a client that can interact with an Ethereum
// execution node's engine service via JSON-RPC.
type EngineCaller interface {
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error)
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests, slot primitives.Slot) ([]byte, error)
ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs payloadattribute.Attributer,
) (*pb.PayloadIDBytes, []byte, error)
@@ -161,7 +167,7 @@ type EngineCaller interface {
var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
// NewPayload request calls the engine_newPayloadVX method via JSON-RPC.
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error) {
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests, slot primitives.Slot) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
defer span.End()
defer func(start time.Time) {
@@ -195,7 +201,11 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
if err != nil {
return nil, errors.Wrap(err, "failed to encode execution requests")
}
err = s.rpcClient.CallContext(ctx, result, NewPayloadMethodV4, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
method := NewPayloadMethodV4
if slots.ToEpoch(slot) >= params.BeaconConfig().GloasForkEpoch {
method = NewPayloadMethodV5
}
err = s.rpcClient.CallContext(ctx, result, method, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
if err != nil {
return nil, handleRPCError(err)
}
@@ -261,7 +271,7 @@ func (s *Service) ForkchoiceUpdated(
if err != nil {
return nil, nil, handleRPCError(err)
}
case version.Deneb, version.Electra, version.Fulu:
case version.Deneb, version.Electra, version.Fulu, version.Gloas:
a, err := attrs.PbV3()
if err != nil {
return nil, nil, err
@@ -295,7 +305,7 @@ func (s *Service) ForkchoiceUpdated(
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
epoch := slots.ToEpoch(slot)
if epoch >= params.BeaconConfig().FuluForkEpoch {
if epoch >= params.BeaconConfig().GloasForkEpoch || epoch >= params.BeaconConfig().FuluForkEpoch {
return GetPayloadMethodV5, &pb.ExecutionBundleFulu{}
}
if epoch >= params.BeaconConfig().ElectraForkEpoch {
@@ -347,6 +357,10 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
supportedEngineEndpoints = append(supportedEngineEndpoints, fuluEngineEndpoints...)
}
if params.GloasEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, gloasEngineEndpoints...)
}
elSupportedEndpointsSlice := make([]string, len(supportedEngineEndpoints))
if err := s.rpcClient.CallContext(ctx, &elSupportedEndpointsSlice, ExchangeCapabilities, supportedEngineEndpoints); err != nil {
return nil, handleRPCError(err)

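The `NewPayload` change above picks the engine method by comparing the slot's epoch against the Gloas fork epoch. A dependency-free sketch of that selection, with assumed constants in place of `params.BeaconConfig()` and `slots.ToEpoch`:

```go
package main

import "fmt"

const slotsPerEpoch = 32

// Hypothetical fork-epoch configuration; Prysm reads these from
// params.BeaconConfig() instead of hard-coded constants.
const gloasForkEpoch uint64 = 3

// newPayloadMethod picks the engine_newPayloadVX method for a slot,
// mirroring the epoch comparison added in the diff above.
func newPayloadMethod(slot uint64) string {
	epoch := slot / slotsPerEpoch
	if epoch >= gloasForkEpoch {
		return "engine_newPayloadV5"
	}
	return "engine_newPayloadV4"
}

func main() {
	fmt.Println(newPayloadMethod(2 * slotsPerEpoch)) // pre-Gloas slot
	fmt.Println(newPayloadMethod(3 * slotsPerEpoch)) // Gloas slot
}
```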
View File

@@ -129,7 +129,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayload(req)
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
@@ -140,7 +140,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(req)
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
@@ -605,7 +605,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -619,7 +619,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -633,7 +633,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -672,7 +672,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -686,7 +686,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -700,7 +700,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -714,7 +714,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -753,7 +753,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -767,7 +767,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -781,7 +781,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -795,7 +795,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -833,7 +833,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -847,7 +847,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -861,7 +861,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -875,7 +875,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -914,7 +914,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -928,7 +928,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, err, ErrUnknownPayloadStatus)
require.DeepEqual(t, []uint8(nil), resp)
})

View File

@@ -49,7 +49,7 @@ type EngineClient struct {
}
// NewPayload --
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash, _ *pb.ExecutionRequests) ([]byte, error) {
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash, _ *pb.ExecutionRequests, _ primitives.Slot) ([]byte, error) {
return e.NewPayloadResp, e.ErrNewPayload
}
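
When an interface method such as `EngineCaller.NewPayload` gains a parameter, every implementation, including test mocks like the `EngineClient` above, must be updated in lockstep. A common Go idiom for catching a stale mock at build time, sketched here with hypothetical stand-in types, is a compile-time interface assertion:

```go
package main

import "fmt"

// caller is a stand-in for an interface like EngineCaller.
type caller interface {
	NewPayload(slot uint64) ([]byte, error)
}

// mockClient is a stand-in for a test mock such as EngineClient.
type mockClient struct {
	resp []byte
	err  error
}

func (m *mockClient) NewPayload(_ uint64) ([]byte, error) {
	return m.resp, m.err
}

// Compile-time check: if the interface signature changes and the mock
// is not updated, this line stops compiling instead of the mismatch
// surfacing at test runtime.
var _ caller = (*mockClient)(nil)

func main() {
	m := &mockClient{resp: []byte{0x01}}
	out, _ := m.NewPayload(0)
	fmt.Println(len(out))
}
```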

View File

@@ -325,15 +325,20 @@ func (f *ForkChoice) updateBalances() error {
if pn != nil && vote.currentRoot != zHash {
if pending {
if pn.node.balance < oldBalance {
log.WithFields(logrus.Fields{
"nodeRoot": fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:])),
"oldBalance": oldBalance,
"nodeBalance": pn.node.balance,
"nodeWeight": pn.node.weight,
"proposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.proposerBoostRoot[:])),
"previousProposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.previousProposerBoostRoot[:])),
"previousProposerBoostScore": f.store.previousProposerBoostScore,
}).Warning("node with invalid balance, setting it to zero")
if pn.node.slot == 0 {
log.WithField("nodeRoot", fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:]))).
Debug("Genesis node pending balance underflow, clamping to zero")
} else {
log.WithFields(logrus.Fields{
"nodeRoot": fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:])),
"oldBalance": oldBalance,
"nodeBalance": pn.node.balance,
"nodeWeight": pn.node.weight,
"proposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.proposerBoostRoot[:])),
"previousProposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.previousProposerBoostRoot[:])),
"previousProposerBoostScore": f.store.previousProposerBoostScore,
}).Warning("node with invalid balance, setting it to zero")
}
pn.node.balance = 0
} else {
pn.node.balance -= oldBalance

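The balance-update hunk above clamps an underflowing unsigned subtraction to zero, logging at Debug for the genesis node and at Warning otherwise. The core of that fix is a saturating subtraction, sketched here in isolation:

```go
package main

import "fmt"

// saturatingSub subtracts b from a, clamping at zero instead of
// wrapping around on uint64 underflow, mirroring the forkchoice
// balance clamp above.
func saturatingSub(a, b uint64) uint64 {
	if a < b {
		return 0
	}
	return a - b
}

func main() {
	fmt.Println(saturatingSub(10, 3)) // 7
	fmt.Println(saturatingSub(3, 10)) // clamped to 0
}
```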
View File

@@ -372,6 +372,28 @@ func (s *Store) nodeTreeDump(ctx context.Context, n *Node, nodes []*forkchoice2.
return nodes, nil
}
// MarkFullNode creates a full payload node for an existing empty node at the
// given beacon block root. This is used during forkchoice tree reconstruction on
// startup to mark blocks whose execution payload was delivered. The caller must
// hold the forkchoice write lock.
func (f *ForkChoice) MarkFullNode(root [32]byte) {
s := f.store
en := s.emptyNodeByRoot[root]
if en == nil {
return
}
if _, ok := s.fullNodeByRoot[root]; ok {
return
}
s.fullNodeByRoot[root] = &PayloadNode{
node: en.node,
optimistic: true,
timestamp: time.Now(),
full: true,
children: make([]*Node, 0),
}
}
// InsertPayload inserts a full node into forkchoice after the Gloas fork.
func (f *ForkChoice) InsertPayload(pe interfaces.ROExecutionPayloadEnvelope) error {
if pe.IsNil() {

View File

@@ -79,7 +79,7 @@ func prepareGloasForkchoiceState(
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
}
st, err := state_native.InitializeFromProtoUnsafeGloas(base)

View File

@@ -112,4 +112,5 @@ type Setter interface {
SetBalancesByRooter(BalancesByRooter)
InsertSlashedIndex(context.Context, primitives.ValidatorIndex)
SetPTCVote(root [32]byte, ptcIdx uint64, payloadPresent, blobDataAvailable bool)
MarkFullNode(root [32]byte)
}

View File

@@ -775,6 +775,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithBlobStorage(b.BlobStorage),
blockchain.WithDataColumnStorage(b.DataColumnStorage),
blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
blockchain.WithProposerPreferencesCache(b.proposerPreferencesCache),
blockchain.WithPayloadIDCache(b.payloadIDCache),
blockchain.WithSyncChecker(b.syncChecker),
blockchain.WithSlasherEnabled(b.slasherEnabled),

View File

@@ -200,6 +200,7 @@ go_test(
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",

View File

@@ -64,6 +64,31 @@ func (s *Service) Broadcast(ctx context.Context, msg proto.Message) error {
return s.broadcastObject(ctx, castMsg, fmt.Sprintf(topic, forkDigest))
}
// BroadcastForEpoch broadcasts a message using the fork digest for the given epoch.
// Use this when the target epoch's fork digest differs from the current one,
// e.g. broadcasting proposer preferences in the epoch before Gloas activation.
func (s *Service) BroadcastForEpoch(ctx context.Context, msg proto.Message, epoch primitives.Epoch) error {
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastForEpoch")
defer span.End()
twoSlots := time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second
ctx, cancel := context.WithTimeout(ctx, twoSlots)
defer cancel()
forkDigest := params.ForkDigest(epoch)
topic, ok := GossipTypeMapping[reflect.TypeOf(msg)]
if !ok {
tracing.AnnotateError(span, ErrMessageNotMapped)
return ErrMessageNotMapped
}
castMsg, ok := msg.(ssz.Marshaler)
if !ok {
return errors.Errorf("message of %T does not support marshaller interface", msg)
}
return s.broadcastObject(ctx, castMsg, fmt.Sprintf(topic, forkDigest))
}
// BroadcastAttestation broadcasts an attestation to the p2p network, the message is assumed to be
// broadcasted to the current fork.
func (s *Service) BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error {
@@ -373,7 +398,7 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
slotPerRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, 1)
topicFunc := func(sidecar blocks.VerifiedRODataColumn) (topic string, wrappedSubIdx uint64, subnet uint64) {
subnet = peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
subnet = peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index())
topic = dataColumnSubnetToTopic(subnet, forkDigest)
wrappedSubIdx = subnet + dataColumnSubnetVal
return
@@ -413,7 +438,7 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
topic, _, _ := topicFunc(sidecar)
if err := s.batchObject(ctx, &messageBatch, sidecar, topic); err != nil {
if err := s.batchObject(ctx, &messageBatch, &sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot batch data column sidecar")
return
@@ -421,7 +446,7 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
timings.Store(rootAndIndex{root: root, index: sidecar.Index()}, time.Now())
}
})
}
@@ -443,7 +468,7 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
}
// Publish individually (not batched) since we just found peers.
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
if err := s.broadcastObject(ctx, &sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot broadcast data column sidecar")
return
@@ -453,7 +478,7 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
timings.Store(rootAndIndex{root: root, index: sidecar.Index()}, time.Now())
}
})
}
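`BroadcastForEpoch` above formats the gossip topic with the fork digest of the *target* epoch rather than the current one, so a message sent just before a fork lands on the new fork's topic. A sketch of that selection, using an assumed digest lookup in place of `params.ForkDigest`:

```go
package main

import "fmt"

// forkDigest is a hypothetical digest table keyed by epoch; Prysm
// derives real digests from the fork version and genesis root.
func forkDigest(epoch uint64) string {
	if epoch >= 3 { // assumed Gloas fork epoch
		return "0x0badf00d"
	}
	return "0xdeadbeef"
}

// topicForEpoch renders a topic template with the digest for the
// target epoch, mirroring the fmt.Sprintf(topic, forkDigest) call
// in BroadcastForEpoch.
func topicForEpoch(template string, epoch uint64) string {
	return fmt.Sprintf(template, forkDigest(epoch))
}

func main() {
	const tmpl = "/eth2/%s/proposer_preferences/ssz_snappy"
	fmt.Println(topicForEpoch(tmpl, 2)) // current fork's digest
	fmt.Println(topicForEpoch(tmpl, 3)) // next fork's digest
}
```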

View File

@@ -315,9 +315,12 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
}
}()
ps1Tracer := p2ptest.NewGossipTracer()
ps1, err := pubsub.NewGossipSub(t.Context(), hosts[0],
pubsub.WithMessageSigning(false),
pubsub.WithStrictSignatureVerification(false),
pubsub.WithRawTracer(ps1Tracer),
)
require.NoError(t, err)
@@ -369,33 +372,17 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
// We don't use our internal subscribe method
// due to using floodsub over here.
_, err = ps1Tracer.JoinAndWatchTopic(t.Context(), topic, p)
require.NoError(t, err)
tpHandle, err := p2.JoinTopic(topic)
require.NoError(t, err)
sub, err := tpHandle.Subscribe()
require.NoError(t, err)
tpHandle, err = p.JoinTopic(topic)
require.NoError(t, err)
_, err = tpHandle.Subscribe()
require.NoError(t, err)
// This test specifically tests discovery-based peer finding, which requires
// time for nodes to discover each other. Using a fixed sleep here is intentional
// as we're testing the discovery timing behavior.
time.Sleep(500 * time.Millisecond)
// Verify mesh establishment after discovery
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0 && len(p2.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
nodePeers := p.pubsub.ListPeers(topic)
nodePeers2 := p2.pubsub.ListPeers(topic)
assert.Equal(t, 1, len(nodePeers))
assert.Equal(t, 1, len(nodePeers2))
// Block until gossipsub is ready to deliver a published message to p2.
require.NoError(t, ps1Tracer.CanPublishToPeer(t.Context(), topic, p2.PeerID()))
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -406,10 +393,10 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
defer cancel()
incomingMessage, err := sub.Next(ctx)
require.NoError(t, err)
require.NoError(tt, err)
result := &ethpb.Attestation{}
require.NoError(t, p.Encoding().DecodeGossip(incomingMessage.Data, result))
require.NoError(tt, p.Encoding().DecodeGossip(incomingMessage.Data, result))
if !proto.Equal(result, msg) {
tt.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
}
@@ -815,7 +802,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
var result ethpb.DataColumnSidecar
require.NoError(t, service.Encoding().DecodeGossip(msg.Data, &result))
require.DeepEqual(t, &result, verifiedRoSidecar)
require.DeepEqual(t, &result, verifiedRoSidecar.DataColumnSidecar())
}
type topicInvoked struct {

View File

@@ -186,11 +186,6 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
var listeners []*listenerWrapper
@@ -227,15 +222,13 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
var nodes []*enode.Node
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
require.Eventually(t, func() bool {
nodes = lastListener.Lookup(bootNode.ID())
return len(nodes) >= 4
}, 10*time.Second, 100*time.Millisecond,
"the node's local table did not reach the expected size of at least %d nodes", 4)
}
func TestCreateLocalNode(t *testing.T) {

View File

@@ -52,7 +52,9 @@ const (
// lightClientFinalityUpdateWeight specifies the scoring weight that we apply to
// our light client finality update topic.
lightClientFinalityUpdateWeight = 0.05
// signedProposerPreferencesWeight specifies the scoring weight that we apply to
// our signed proposer preferences topic.
signedProposerPreferencesWeight = 0.05
// maxInMeshScore describes the max score a peer can attain from being in the mesh.
maxInMeshScore = 10
// maxFirstDeliveryScore describes the max score a peer can obtain from first deliveries.
@@ -151,6 +153,9 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
case strings.Contains(topic, GossipExecutionPayloadEnvelopeMessage):
// TODO: Revisit scoring params for execution payload envelope gossip.
return defaultBlockTopicParams(), nil
case strings.Contains(topic, GossipSignedProposerPreferencesMessage):
// TODO: Revisit scoring params for signed proposer preferences gossip.
return defaultBlockTopicParams(), nil
default:
return nil, errors.Errorf("unrecognized topic provided for parameter registration: %s", topic)
}

View File

@@ -92,6 +92,11 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
return &ethpb.LightClientFinalityUpdateCapella{}
}
return gossipMessage(topic)
case DataColumnSubnetTopicFormat:
if epoch >= params.BeaconConfig().GloasForkEpoch {
return &ethpb.DataColumnSidecarGloas{}
}
return gossipMessage(topic)
default:
return gossipMessage(topic)
}
@@ -153,6 +158,7 @@ func init() {
GossipTypeMapping[reflect.TypeFor[*ethpb.SignedBeaconBlockFulu]()] = BlockSubnetTopicFormat
// Specially handle Gloas objects.
GossipTypeMapping[reflect.TypeFor[*ethpb.SignedBeaconBlockGloas]()] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeFor[*ethpb.DataColumnSidecarGloas]()] = DataColumnSubnetTopicFormat
// Payload attestation messages.
GossipTypeMapping[reflect.TypeFor[*ethpb.PayloadAttestationMessage]()] = PayloadAttestationMessageTopicFormat

View File

@@ -47,6 +47,7 @@ type (
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
Broadcaster interface {
Broadcast(context.Context, proto.Message) error
BroadcastForEpoch(context.Context, proto.Message, primitives.Epoch) error
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error

View File

@@ -24,6 +24,7 @@ import (
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
noise "github.com/libp2p/go-libp2p/p2p/security/noise"
libp2ptcp "github.com/libp2p/go-libp2p/p2p/transport/tcp"
"github.com/multiformats/go-multiaddr"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -35,7 +36,12 @@ func createHost(t *testing.T, port uint) (host.Host, *ecdsa.PrivateKey, net.IP)
ipAddr := net.ParseIP("127.0.0.1")
listen, err := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port))
require.NoError(t, err, "Failed to p2p listen")
h, err := libp2p.New([]libp2p.Option{privKeyOption(pkey), libp2p.ListenAddrs(listen), libp2p.Security(noise.ID, noise.New)}...)
h, err := libp2p.New([]libp2p.Option{
privKeyOption(pkey),
libp2p.ListenAddrs(listen),
libp2p.Security(noise.ID, noise.New),
libp2p.Transport(libp2ptcp.NewTCPTransport, libp2ptcp.DisableReuseport()),
}...)
require.NoError(t, err)
return h, pkey, ipAddr
}

View File

@@ -5,6 +5,8 @@ go_library(
testonly = True,
srcs = [
"fuzz_p2p.go",
"gossiptracer.go",
"log.go",
"mock_broadcaster.go",
"mock_host.go",
"mock_listener.go",

View File

@@ -138,6 +138,11 @@ func (*FakeP2P) Disconnect(_ peer.ID) error {
return nil
}
// BroadcastForEpoch -- fake.
func (*FakeP2P) BroadcastForEpoch(_ context.Context, _ proto.Message, _ primitives.Epoch) error {
return nil
}
// Broadcast -- fake.
func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil

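The `GossipTracer` introduced in the next file blocks callers until a gossipsub event has fired, using a map of per-key channels that are closed on delivery. A condensed, self-contained sketch of that waiter pattern (hypothetical names, not the tracer's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// waiter lets callers block until an event for a given key has fired,
// the same channel-per-key shape as the tracer's addPeerWaiters map.
type waiter struct {
	mu      sync.Mutex
	fired   map[string]bool
	waiters map[string]chan struct{}
}

func newWaiter() *waiter {
	return &waiter{
		fired:   make(map[string]bool),
		waiters: make(map[string]chan struct{}),
	}
}

// wait returns immediately if the event already fired; otherwise it
// blocks on a channel that fire() will close.
func (w *waiter) wait(key string) {
	w.mu.Lock()
	if w.fired[key] {
		w.mu.Unlock()
		return
	}
	ch, ok := w.waiters[key]
	if !ok {
		ch = make(chan struct{})
		w.waiters[key] = ch
	}
	w.mu.Unlock()
	<-ch
}

// fire records the event and wakes every waiter for key.
func (w *waiter) fire(key string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.fired[key] {
		return
	}
	w.fired[key] = true
	if ch, ok := w.waiters[key]; ok {
		close(ch)
	}
}

func main() {
	w := newWaiter()
	done := make(chan struct{})
	go func() { w.wait("peerA"); close(done) }()
	w.fire("peerA")
	<-done
	fmt.Println("delivered")
}
```

Closing the channel (rather than sending on it) wakes every blocked waiter at once, and recording `fired` first makes late `wait` calls return without blocking.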
View File

@@ -0,0 +1,298 @@
package testing
import (
"context"
"errors"
"fmt"
"sync"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
type topicPeer struct {
topic string
peer peer.ID
}
// GossipTracer implements pubsub.RawTracer for use in tests. It allows callers
// to block until specific gossipsub-internal events have fired, which is useful
// for avoiding races between the various maps maintained by the pubsub event loop.
//
// Individual methods (RemovePeer, Prune, ValidateMessage, etc.) can be extended
// as needed by future tests.
type GossipTracer struct {
mu sync.Mutex
addPeerWaiters map[peer.ID]chan struct{}
addedPeers map[peer.ID]bool
joinedTopics map[string]bool
graftedPeers map[topicPeer]bool
graftWaiters map[topicPeer]chan struct{}
topicWaiters map[string]*topicEventWaiter
}
// NewGossipTracer returns a new tracer ready for use. Pass it to
// pubsub.NewGossipSub via pubsub.WithRawTracer(tracer).
func NewGossipTracer() *GossipTracer {
return &GossipTracer{
addPeerWaiters: make(map[peer.ID]chan struct{}),
addedPeers: make(map[peer.ID]bool),
joinedTopics: make(map[string]bool),
graftedPeers: make(map[topicPeer]bool),
graftWaiters: make(map[topicPeer]chan struct{}),
topicWaiters: make(map[string]*topicEventWaiter),
}
}
func (t *GossipTracer) waitForAddPeer(ctx context.Context, pid peer.ID) error {
t.mu.Lock()
if t.addedPeers[pid] {
t.mu.Unlock()
return nil
}
ch, ok := t.addPeerWaiters[pid]
if !ok {
ch = make(chan struct{})
t.addPeerWaiters[pid] = ch
}
t.mu.Unlock()
select {
case <-ch:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
func (t *GossipTracer) waitForGraft(ctx context.Context, topic string, pid peer.ID) error {
key := topicPeer{topic: topic, peer: pid}
t.mu.Lock()
if t.graftedPeers[key] {
t.mu.Unlock()
return nil
}
ch, ok := t.graftWaiters[key]
if !ok {
ch = make(chan struct{})
t.graftWaiters[key] = ch
}
t.mu.Unlock()
select {
case <-ch:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
func (t *GossipTracer) isSubscribed(topic string) bool {
t.mu.Lock()
defer t.mu.Unlock()
return t.joinedTopics[topic]
}
// CanPublishToPeer blocks until the gossipsub event loop is in a state where
// publishing a message on the given topic will successfully reach pid.
//
// The conditions depend on whether we have locally subscribed to the topic:
// - Subscribed (mesh path): waits until pid has been grafted into our mesh
// for the topic.
// - Not subscribed (fanout path): waits until both PeerJoin (pid is in
// p.topics[topic]) and AddPeer (pid is in p.peers with an rpcQueue) have
// fired.
//
// Note: You must call JoinAndWatchTopic before calling this method.
func (t *GossipTracer) CanPublishToPeer(ctx context.Context, topic string, pid peer.ID) error {
if t.isSubscribed(topic) {
return t.waitForGraft(ctx, topic, pid)
}
// Fanout path: need both PeerJoin and AddPeer.
w := t.getTopicWaiter(topic)
if w == nil {
return errors.New("topic waiter not found, please call JoinAndWatchTopic first")
}
if err := w.waitForPeerJoin(ctx, pid); err != nil {
return fmt.Errorf("wait for peer join: %w", err)
}
if err := t.waitForAddPeer(ctx, pid); err != nil {
return fmt.Errorf("wait for add peer: %w", err)
}
return nil
}
// topicEventWaiter tracks PeerJoin/PeerLeave events for a single topic.
type topicEventWaiter struct {
mu sync.Mutex
joined map[peer.ID]struct{}
waiters map[peer.ID]chan struct{}
}
type topicJoiner interface {
JoinTopic(topic string, opts ...pubsub.TopicOpt) (*pubsub.Topic, error)
}
// JoinAndWatchTopic joins the given topic via joiner and starts watching its
// peer events so that CanPublishToPeer can wait on the fanout path.
func (t *GossipTracer) JoinAndWatchTopic(ctx context.Context, topic string, joiner topicJoiner) (*pubsub.Topic, error) {
topicHandle, err := joiner.JoinTopic(topic)
if err != nil {
return nil, fmt.Errorf("join topic: %w", err)
}
if err := t.watchTopic(ctx, topicHandle); err != nil {
return nil, fmt.Errorf("watch topic: %w", err)
}
return topicHandle, nil
}
func (t *GossipTracer) watchTopic(ctx context.Context, topicHandle *pubsub.Topic) error {
ev, err := topicHandle.EventHandler()
if err != nil {
return fmt.Errorf("event handler: %w", err)
}
w := &topicEventWaiter{
joined: make(map[peer.ID]struct{}),
waiters: make(map[peer.ID]chan struct{}),
}
// Register the waiter so CanPublishToPeer can find it.
t.mu.Lock()
defer t.mu.Unlock()
t.topicWaiters[topicHandle.String()] = w
go func() {
defer ev.Cancel()
for {
pe, err := ev.NextPeerEvent(ctx)
if err != nil {
if ctx.Err() == nil {
log.WithError(err).Debug("NextPeerEvent failed")
}
return
}
if pe.Type == pubsub.PeerJoin {
w.handlePeerJoin(pe.Peer)
}
}
}()
return nil
}
func (t *GossipTracer) getTopicWaiter(topic string) *topicEventWaiter {
t.mu.Lock()
defer t.mu.Unlock()
return t.topicWaiters[topic]
}
func (w *topicEventWaiter) handlePeerJoin(pid peer.ID) {
w.mu.Lock()
defer w.mu.Unlock()
w.joined[pid] = struct{}{}
if ch, ok := w.waiters[pid]; ok {
close(ch)
delete(w.waiters, pid)
}
}
func (w *topicEventWaiter) waitForPeerJoin(ctx context.Context, pid peer.ID) error {
w.mu.Lock()
if _, ok := w.joined[pid]; ok {
w.mu.Unlock()
return nil
}
ch, ok := w.waiters[pid]
if !ok {
ch = make(chan struct{})
w.waiters[pid] = ch
}
w.mu.Unlock()
select {
case <-ch:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
// --- pubsub.RawTracer implementation ---
// AddPeer is invoked by the gossipsub event loop after a peer has been fully
// registered in p.peers (i.e., it has an rpcQueue and an outbound stream).
func (t *GossipTracer) AddPeer(p peer.ID, proto protocol.ID) {
t.mu.Lock()
defer t.mu.Unlock()
t.addedPeers[p] = true
if ch, ok := t.addPeerWaiters[p]; ok {
close(ch)
delete(t.addPeerWaiters, p)
}
}
// RemovePeer can be extended by future tests to track peer removal.
func (t *GossipTracer) RemovePeer(p peer.ID) {}
// Join is invoked when we locally subscribe to a topic (a mesh is created).
func (t *GossipTracer) Join(topic string) {
t.mu.Lock()
defer t.mu.Unlock()
t.joinedTopics[topic] = true
}
// Leave is invoked when we unsubscribe from a topic (mesh is torn down).
func (t *GossipTracer) Leave(topic string) {
t.mu.Lock()
defer t.mu.Unlock()
delete(t.joinedTopics, topic)
}
// Graft is invoked when a peer is added to our mesh for a topic.
func (t *GossipTracer) Graft(p peer.ID, topic string) {
t.mu.Lock()
defer t.mu.Unlock()
key := topicPeer{topic: topic, peer: p}
t.graftedPeers[key] = true
if ch, ok := t.graftWaiters[key]; ok {
close(ch)
delete(t.graftWaiters, key)
}
}
// Prune can be extended by future tests to track mesh prunes.
func (t *GossipTracer) Prune(p peer.ID, topic string) {}
// ValidateMessage can be extended by future tests to track message validation.
func (t *GossipTracer) ValidateMessage(msg *pubsub.Message) {}
// DeliverMessage can be extended by future tests to track message delivery.
func (t *GossipTracer) DeliverMessage(msg *pubsub.Message) {}
// RejectMessage can be extended by future tests to track message rejection.
func (t *GossipTracer) RejectMessage(msg *pubsub.Message, reason string) {}
// DuplicateMessage can be extended by future tests to track duplicate messages.
func (t *GossipTracer) DuplicateMessage(msg *pubsub.Message) {}
// ThrottlePeer can be extended by future tests to track peer throttling.
func (t *GossipTracer) ThrottlePeer(p peer.ID) {}
// RecvRPC can be extended by future tests to track incoming RPCs.
func (t *GossipTracer) RecvRPC(rpc *pubsub.RPC) {}
// SendRPC can be extended by future tests to track outgoing RPCs.
func (t *GossipTracer) SendRPC(rpc *pubsub.RPC, p peer.ID) {}
// DropRPC can be extended by future tests to track dropped RPCs.
func (t *GossipTracer) DropRPC(rpc *pubsub.RPC, p peer.ID) {}
// UndeliverableMessage can be extended by future tests to track undeliverable messages.
func (t *GossipTracer) UndeliverableMessage(msg *pubsub.Message) {}


@@ -0,0 +1,5 @@
package testing
import "github.com/sirupsen/logrus"
var log = logrus.WithField("package", "beacon-chain/p2p/testing")


@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
)
@@ -15,6 +16,7 @@ import (
type MockBroadcaster struct {
BroadcastCalled atomic.Bool
BroadcastMessages []proto.Message
BroadcastEpochs []primitives.Epoch
BroadcastAttestations []ethpb.Att
msgLock sync.Mutex
attLock sync.Mutex
@@ -29,6 +31,14 @@ func (m *MockBroadcaster) Broadcast(_ context.Context, msg proto.Message) error
return nil
}
// BroadcastForEpoch records that a broadcast occurred, along with the target epoch.
func (m *MockBroadcaster) BroadcastForEpoch(ctx context.Context, msg proto.Message, epoch primitives.Epoch) error {
m.msgLock.Lock()
m.BroadcastEpochs = append(m.BroadcastEpochs, epoch)
m.msgLock.Unlock()
return m.Broadcast(ctx, msg)
}
// BroadcastAttestation records that a broadcast occurred.
func (m *MockBroadcaster) BroadcastAttestation(_ context.Context, _ uint64, a ethpb.Att) error {
m.BroadcastCalled.Store(true)


@@ -13,7 +13,6 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
log "github.com/sirupsen/logrus"
)
const (


@@ -211,6 +211,12 @@ func (p *TestP2P) ReceivePubSub(topic string, msg proto.Message) {
}
}
// BroadcastForEpoch mocks broadcasting for a specific epoch.
func (p *TestP2P) BroadcastForEpoch(_ context.Context, _ proto.Message, _ primitives.Epoch) error {
p.BroadcastCalled.Store(true)
return nil
}
// Broadcast a message.
func (p *TestP2P) Broadcast(_ context.Context, _ proto.Message) error {
p.BroadcastCalled.Store(true)


@@ -10,6 +10,7 @@ import (
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1/metadata"
ssz "github.com/prysmaticlabs/fastssz"
)
func init() {
@@ -43,6 +44,8 @@ var (
// LightClientFinalityUpdateMap maps the fork-version to the underlying data type for that
// particular fork period.
LightClientFinalityUpdateMap map[[4]byte]func() (interfaces.LightClientFinalityUpdate, error)
// DataColumnSidecarMap maps the fork-version to the underlying data column sidecar type.
DataColumnSidecarMap map[[4]byte]func() (ssz.Unmarshaler, error)
)
// InitializeDataMaps initializes all the relevant object maps. This function is called to
@@ -253,4 +256,13 @@ func InitializeDataMaps() {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
}
DataColumnSidecarMap = map[[4]byte]func() (ssz.Unmarshaler, error){
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ssz.Unmarshaler, error) {
return &ethpb.DataColumnSidecar{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ssz.Unmarshaler, error) {
return &ethpb.DataColumnSidecarGloas{}, nil
},
}
}
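The new map dispatches on the 4-byte fork version to pick a concrete sidecar type. A stdlib-only sketch of that lookup shape (the version bytes and return values below are made up for illustration, not real fork versions):

```go
package main

import (
	"errors"
	"fmt"
)

// decoder constructs an empty container appropriate for a fork version.
type decoder func() (any, error)

// decoders mirrors the map-based dispatch: the 4-byte fork version selects
// which concrete type an SSZ payload should be unmarshaled into.
var decoders = map[[4]byte]decoder{
	{0x06, 0x00, 0x00, 0x00}: func() (any, error) { return "fulu-sidecar", nil },  // hypothetical Fulu version
	{0x07, 0x00, 0x00, 0x00}: func() (any, error) { return "gloas-sidecar", nil }, // hypothetical Gloas version
}

// decodeFor returns the container for version, or an error for unknown forks.
func decodeFor(version [4]byte) (any, error) {
	d, ok := decoders[version]
	if !ok {
		return nil, errors.New("unknown fork version")
	}
	return d()
}

func main() {
	v, err := decodeFor([4]byte{0x07, 0x00, 0x00, 0x00})
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // gloas-sidecar
}
```

Keeping the per-fork constructors in one map means callers fail loudly on an unrecognized fork instead of silently decoding into the wrong type.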


@@ -20,7 +20,6 @@ go_library(
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",


@@ -5,7 +5,6 @@ import (
"context"
"sort"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
@@ -139,24 +138,19 @@ func (s *Service) SyncCommitteeDuties(ctx context.Context, st state.BeaconState,
nextSyncCommitteeFirstEpoch := currentSyncCommitteeFirstEpoch + params.BeaconConfig().EpochsPerSyncCommitteePeriod
isCurrentCommittee := requestedEpoch < nextSyncCommitteeFirstEpoch
var committee [][]byte
syncCommitteeFunc := st.NextSyncCommittee
if isCurrentCommittee {
sc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get sync committee"), Reason: Internal}
}
committee = sc.Pubkeys
} else {
sc, err := st.NextSyncCommittee()
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get sync committee"), Reason: Internal}
}
committee = sc.Pubkeys
syncCommitteeFunc = st.CurrentSyncCommittee
}
sc, err := syncCommitteeFunc()
if err != nil {
return nil, &RpcError{Err: errors.Wrap(err, "could not get sync committee"), Reason: Internal}
}
// Build pubkey → positions map from committee pubkeys.
committeePubkeys := make(map[[fieldparams.BLSPubkeyLength]byte][]uint64)
for j, pk := range committee {
for j, pk := range sc.Pubkeys {
var pk48 [fieldparams.BLSPubkeyLength]byte
copy(pk48[:], pk)
committeePubkeys[pk48] = append(committeePubkeys[pk48], uint64(j))
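The refactor above replaces duplicated if/else error handling with a function-valued accessor selected up front, so the error path is written once. A minimal stdlib sketch of that pattern (the types and committee values are made up for illustration):

```go
package main

import "fmt"

type committee struct{ pubkeys []string }

// Stand-ins for st.CurrentSyncCommittee / st.NextSyncCommittee.
func current() (committee, error) { return committee{[]string{"a", "b"}}, nil }
func next() (committee, error)    { return committee{[]string{"c", "d"}}, nil }

// pick selects the accessor first, then calls it once, so the error
// handling is not duplicated across branches.
func pick(isCurrent bool) ([]string, error) {
	fn := next
	if isCurrent {
		fn = current
	}
	sc, err := fn()
	if err != nil {
		return nil, fmt.Errorf("could not get sync committee: %w", err)
	}
	return sc.pubkeys, nil
}

func main() {
	pks, err := pick(true)
	if err != nil {
		panic(err)
	}
	fmt.Println(pks) // [a b]
}
```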
@@ -200,7 +194,7 @@ func (s *Service) PTCDuties(ctx context.Context, st state.BeaconState, epoch pri
_, span := trace.StartSpan(ctx, "coreService.PTCDuties")
defer span.End()
if len(indices) == 0 || epoch < params.BeaconConfig().GloasForkEpoch {
if len(indices) == 0 || epoch < params.BeaconConfig().GloasForkEpoch || st.Version() < version.Gloas {
return []*PTCDutyResult{}, nil
}
@@ -221,7 +215,7 @@ func (s *Service) PTCDuties(ctx context.Context, st state.BeaconState, epoch pri
return nil, &RpcError{Err: ctx.Err(), Reason: Internal}
}
ptc, err := gloas.PayloadCommittee(ctx, st, slot)
ptc, err := st.PayloadCommitteeReadOnly(slot)
if err != nil {
return nil, &RpcError{Err: err, Reason: Internal}
}


@@ -210,6 +210,36 @@ func TestProposalDependentRootV2(t *testing.T) {
})
}
func TestPTCDuties_ForkBoundaryFuluToGloas(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 1 // Epoch 0 is Fulu, epoch 1 is GloAS.
params.OverrideBeaconConfig(cfg)
// Create a Fulu state at epoch 0 (last Fulu epoch before GloAS fork).
numVals := params.BeaconConfig().MinGenesisActiveValidatorCount
st, _ := util.DeterministicGenesisStateFulu(t, numVals)
indices := []primitives.ValidatorIndex{0, 1, 2}
s := &Service{}
t.Run("current Fulu epoch returns empty", func(t *testing.T) {
duties, rpcErr := s.PTCDuties(t.Context(), st, 0, indices)
require.Equal(t, (*RpcError)(nil), rpcErr)
assert.Equal(t, 0, len(duties), "pre-GloAS epoch should yield no PTC assignments")
})
t.Run("next epoch at GloAS boundary returns empty on Fulu state", func(t *testing.T) {
// Epoch 1 == GloasForkEpoch, so the epoch check passes, but the
// state is still Fulu — PTC should NOT be computed.
duties, rpcErr := s.PTCDuties(t.Context(), st, 1, indices)
require.Equal(t, (*RpcError)(nil), rpcErr)
assert.Equal(t, 0, len(duties),
"GloAS-epoch PTC must not be computed from a Fulu state")
})
}
func TestFindValidatorIndexInCommittee(t *testing.T) {
committee := []primitives.ValidatorIndex{10, 20, 30}
assert.Equal(t, uint64(0), findValidatorIndexInCommittee(committee, 10))


@@ -393,6 +393,16 @@ func (s *Service) validatorEndpoints(
handler: server.ProduceBlockV3,
methods: []string{http.MethodGet},
},
{
template: "/eth/v4/validator/blocks/{slot}",
name: namespace + ".ProduceBlockV4",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.ProduceBlockV4,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/validator/beacon_committee_selections",
name: namespace + ".BeaconCommitteeSelections",
@@ -411,6 +421,15 @@ func (s *Service) validatorEndpoints(
handler: server.SyncCommitteeSelections,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/validator/execution_payload_envelope/{slot}",
name: namespace + ".ExecutionPayloadEnvelope",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.ExecutionPayloadEnvelope,
methods: []string{http.MethodGet},
},
}
}
@@ -910,6 +929,26 @@ func (s *Service) beaconEndpoints(
handler: server.GetExecutionPayloadEnvelope,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/beacon/execution_payload_envelope",
name: namespace + ".PublishExecutionPayloadEnvelope",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.PublishExecutionPayloadEnvelope,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/beacon/execution_payload_bid",
name: namespace + ".PublishSignedExecutionPayloadBid",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.PublishSignedExecutionPayloadBid,
methods: []string{http.MethodPost},
},
}
}


@@ -34,6 +34,8 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/states/{state_id}/pending_consolidations": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/proposer_lookahead": {http.MethodGet},
"/eth/v1/beacon/execution_payload_envelope/{block_id}": {http.MethodGet},
"/eth/v1/beacon/execution_payload_envelope": {http.MethodPost},
"/eth/v1/beacon/execution_payload_bid": {http.MethodPost},
"/eth/v1/beacon/headers": {http.MethodGet},
"/eth/v1/beacon/headers/{block_id}": {http.MethodGet},
"/eth/v2/beacon/blinded_blocks": {http.MethodPost},
@@ -93,24 +95,26 @@ func Test_endpoints(t *testing.T) {
}
validatorRoutes := map[string][]string{
"/eth/v1/validator/duties/attester/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/proposer/{epoch}": {http.MethodGet},
"/eth/v2/validator/duties/proposer/{epoch}": {http.MethodGet},
"/eth/v1/validator/duties/sync/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/ptc/{epoch}": {http.MethodPost},
"/eth/v3/validator/blocks/{slot}": {http.MethodGet},
"/eth/v1/validator/attestation_data": {http.MethodGet},
"/eth/v2/validator/aggregate_attestation": {http.MethodGet},
"/eth/v2/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v1/validator/beacon_committee_subscriptions": {http.MethodPost},
"/eth/v1/validator/sync_committee_subscriptions": {http.MethodPost},
"/eth/v1/validator/beacon_committee_selections": {http.MethodPost},
"/eth/v1/validator/sync_committee_selections": {http.MethodPost},
"/eth/v1/validator/sync_committee_contribution": {http.MethodGet},
"/eth/v1/validator/contribution_and_proofs": {http.MethodPost},
"/eth/v1/validator/prepare_beacon_proposer": {http.MethodPost},
"/eth/v1/validator/register_validator": {http.MethodPost},
"/eth/v1/validator/liveness/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/attester/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/proposer/{epoch}": {http.MethodGet},
"/eth/v2/validator/duties/proposer/{epoch}": {http.MethodGet},
"/eth/v1/validator/duties/sync/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/ptc/{epoch}": {http.MethodPost},
"/eth/v3/validator/blocks/{slot}": {http.MethodGet},
"/eth/v4/validator/blocks/{slot}": {http.MethodGet},
"/eth/v1/validator/attestation_data": {http.MethodGet},
"/eth/v2/validator/aggregate_attestation": {http.MethodGet},
"/eth/v2/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v1/validator/beacon_committee_subscriptions": {http.MethodPost},
"/eth/v1/validator/sync_committee_subscriptions": {http.MethodPost},
"/eth/v1/validator/beacon_committee_selections": {http.MethodPost},
"/eth/v1/validator/sync_committee_selections": {http.MethodPost},
"/eth/v1/validator/execution_payload_envelope/{slot}": {http.MethodGet},
"/eth/v1/validator/sync_committee_contribution": {http.MethodGet},
"/eth/v1/validator/contribution_and_proofs": {http.MethodPost},
"/eth/v1/validator/prepare_beacon_proposer": {http.MethodPost},
"/eth/v1/validator/register_validator": {http.MethodPost},
"/eth/v1/validator/liveness/{epoch}": {http.MethodPost},
}
prysmBeaconRoutes := map[string][]string{


@@ -66,6 +66,8 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
],
)
@@ -73,6 +75,7 @@ go_test(
name = "go_default_test",
srcs = [
"handlers_equivocation_test.go",
"handlers_gloas_bid_test.go",
"handlers_gloas_test.go",
"handlers_pool_test.go",
"handlers_state_test.go",
@@ -133,6 +136,10 @@ go_test(
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_stretchr_testify//mock:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],
)


@@ -643,6 +643,7 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
}
var sszDecoders = map[string]blockDecoder{
version.String(version.Gloas): decodeGloasSSZ,
version.String(version.Fulu): decodeFuluSSZ,
version.String(version.Electra): decodeElectraSSZ,
version.String(version.Deneb): decodeDenebSSZ,
@@ -660,6 +661,18 @@ func decodeSSZToGenericBlock(versionHeader string, body []byte) (*eth.GenericSig
return nil, errors.New("body does not represent a valid block type")
}
func decodeGloasSSZ(body []byte) (*eth.GenericSignedBeaconBlock, error) {
gloasBlock := &eth.SignedBeaconBlockGloas{}
if err := gloasBlock.UnmarshalSSZ(body); err != nil {
return nil, decodingError(
version.String(version.Gloas), err,
)
}
return &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Gloas{Gloas: gloasBlock},
}, nil
}
func decodeFuluSSZ(body []byte) (*eth.GenericSignedBeaconBlock, error) {
fuluBlock := &eth.SignedBeaconBlockContentsFulu{}
if err := fuluBlock.UnmarshalSSZ(body); err != nil {
@@ -798,6 +811,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
}
var jsonDecoders = map[string]blockDecoder{
version.String(version.Gloas): decodeGloasJSON,
version.String(version.Fulu): decodeFuluJSON,
version.String(version.Electra): decodeElectraJSON,
version.String(version.Deneb): decodeDenebJSON,
@@ -815,6 +829,13 @@ func decodeJSONToGenericBlock(versionHeader string, body []byte) (*eth.GenericSi
return nil, fmt.Errorf("body does not represent a valid block type")
}
func decodeGloasJSON(body []byte) (*eth.GenericSignedBeaconBlock, error) {
return decodeGenericJSON[*structs.SignedBeaconBlockGloas](
body,
version.String(version.Gloas),
)
}
func decodeFuluJSON(body []byte) (*eth.GenericSignedBeaconBlock, error) {
return decodeGenericJSON[*structs.SignedBeaconBlockContentsFulu](
body,


@@ -1,6 +1,8 @@
package beacon
import (
"encoding/json"
"io"
"net/http"
"github.com/OffchainLabs/prysm/v7/api"
@@ -9,8 +11,11 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v7/network/httputil"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// GetExecutionPayloadEnvelope retrieves a full execution payload envelope by beacon block root.
@@ -77,3 +82,96 @@ func (s *Server) GetExecutionPayloadEnvelope(w http.ResponseWriter, r *http.Requ
Data: jsonEnvelope,
})
}
// PublishExecutionPayloadEnvelope broadcasts a signed execution payload envelope.
//
// Endpoint: POST /eth/v1/beacon/execution_payload_envelope
func (s *Server) PublishExecutionPayloadEnvelope(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.PublishExecutionPayloadEnvelope")
defer span.End()
body, err := io.ReadAll(r.Body)
if err != nil {
httputil.HandleError(w, "could not read request body: "+err.Error(), http.StatusInternalServerError)
return
}
var jsonEnvelope structs.SignedExecutionPayloadEnvelope
if err := json.Unmarshal(body, &jsonEnvelope); err != nil {
httputil.HandleError(w, "could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
consensus, err := jsonEnvelope.ToConsensus()
if err != nil {
httputil.HandleError(w, "invalid signed execution payload envelope: "+err.Error(), http.StatusBadRequest)
return
}
if _, err := s.V1Alpha1ValidatorServer.PublishExecutionPayloadEnvelope(ctx, consensus); err != nil {
if st, ok := status.FromError(err); ok {
switch st.Code() {
case codes.InvalidArgument:
httputil.HandleError(w, st.Message(), http.StatusBadRequest)
default:
httputil.HandleError(w, st.Message(), http.StatusInternalServerError)
}
return
}
httputil.HandleError(w, "could not publish execution payload envelope: "+err.Error(), http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
}
// PublishSignedExecutionPayloadBid broadcasts a signed execution payload bid to the P2P network.
func (s *Server) PublishSignedExecutionPayloadBid(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.PublishSignedExecutionPayloadBid")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
var signedBid *eth.SignedExecutionPayloadBid
if httputil.IsRequestSsz(r) {
body, err := io.ReadAll(r.Body)
if err != nil {
httputil.HandleError(w, "Could not read request body: "+err.Error(), http.StatusBadRequest)
return
}
signedBid = &eth.SignedExecutionPayloadBid{}
if err := signedBid.UnmarshalSSZ(body); err != nil {
httputil.HandleError(w, "Could not unmarshal SSZ: "+err.Error(), http.StatusBadRequest)
return
}
} else {
var jsonBid structs.SignedExecutionPayloadBid
if err := json.NewDecoder(r.Body).Decode(&jsonBid); err != nil {
if errors.Is(err, io.EOF) {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
var err error
signedBid, err = jsonBid.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert bid to consensus type: "+err.Error(), http.StatusBadRequest)
return
}
}
if err := s.Broadcaster.Broadcast(ctx, signedBid); err != nil {
httputil.HandleError(w, "Could not broadcast execution payload bid: "+err.Error(), http.StatusInternalServerError)
return
}
}


@@ -0,0 +1,224 @@
package beacon
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
chainMock "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing"
p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"google.golang.org/protobuf/proto"
)
func testJSONSignedBid() *structs.SignedExecutionPayloadBid {
hex32 := "0x" + strings.Repeat("00", 32)
hex20 := "0x" + strings.Repeat("00", 20)
hex96 := "0x" + strings.Repeat("00", 96)
return &structs.SignedExecutionPayloadBid{
Message: &structs.ExecutionPayloadBid{
ParentBlockHash: hex32,
ParentBlockRoot: hex32,
BlockHash: hex32,
PrevRandao: hex32,
FeeRecipient: hex20,
GasLimit: "30000000",
BuilderIndex: "1",
Slot: "100",
Value: "0",
ExecutionPayment: "0",
BlobKzgCommitments: []string{},
},
Signature: hex96,
}
}
func TestPublishSignedExecutionPayloadBid_NoVersionHeader(t *testing.T) {
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
}
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", nil)
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusBadRequest, w.Code)
assert.Equal(t, true, bytes.Contains(w.Body.Bytes(), []byte("header is required")))
}
func TestPublishSignedExecutionPayloadBid_EmptyBody(t *testing.T) {
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
}
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", nil)
req.Header.Set(api.VersionHeader, "gloas")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusBadRequest, w.Code)
assert.Equal(t, true, bytes.Contains(w.Body.Bytes(), []byte("No data submitted")))
}
func TestPublishSignedExecutionPayloadBid_Syncing(t *testing.T) {
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
}
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", nil)
req.Header.Set(api.VersionHeader, "gloas")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusServiceUnavailable, w.Code)
}
func TestPublishSignedExecutionPayloadBid_JSON(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
Broadcaster: broadcaster,
}
bid := testJSONSignedBid()
body, err := json.Marshal(bid)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", bytes.NewReader(body))
req.Header.Set(api.VersionHeader, "gloas")
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusOK, w.Code)
require.Equal(t, 1, len(broadcaster.BroadcastMessages))
}
func TestPublishSignedExecutionPayloadBid_MalformedJSON(t *testing.T) {
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
}
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", bytes.NewReader([]byte("{bad json")))
req.Header.Set(api.VersionHeader, "gloas")
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusBadRequest, w.Code)
assert.Equal(t, true, bytes.Contains(w.Body.Bytes(), []byte("Could not decode request body")))
}
func TestPublishSignedExecutionPayloadBid_InvalidSSZ(t *testing.T) {
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
}
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", bytes.NewReader([]byte{0x01, 0x02}))
req.Header.Set(api.VersionHeader, "gloas")
req.Header.Set("Content-Type", "application/octet-stream")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusBadRequest, w.Code)
assert.Equal(t, true, bytes.Contains(w.Body.Bytes(), []byte("Could not unmarshal SSZ")))
}
func TestPublishSignedExecutionPayloadBid_SSZ(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
Broadcaster: broadcaster,
}
bid := &ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
GasLimit: 30000000,
BuilderIndex: 1,
Slot: 100,
Value: 0,
ExecutionPayment: 0,
},
Signature: make([]byte, 96),
}
sszBytes, err := bid.MarshalSSZ()
require.NoError(t, err)
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", bytes.NewReader(sszBytes))
req.Header.Set(api.VersionHeader, "gloas")
req.Header.Set("Content-Type", "application/octet-stream")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusOK, w.Code)
require.Equal(t, 1, len(broadcaster.BroadcastMessages))
}
// errorBroadcaster is a test broadcaster that always returns an error.
type errorBroadcaster struct{ p2pMock.MockBroadcaster }
func (e *errorBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
return fmt.Errorf("broadcast failed")
}
func TestPublishSignedExecutionPayloadBid_BroadcastError(t *testing.T) {
s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: &chainMock.ChainService{},
TimeFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
Broadcaster: &errorBroadcaster{},
}
bid := testJSONSignedBid()
body, err := json.Marshal(bid)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/execution_payload/bid", bytes.NewReader(body))
req.Header.Set(api.VersionHeader, "gloas")
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishSignedExecutionPayloadBid(w, req)
require.Equal(t, http.StatusInternalServerError, w.Code)
assert.Equal(t, true, bytes.Contains(w.Body.Bytes(), []byte("Could not broadcast")))
}

View File

@@ -2,10 +2,12 @@ package beacon
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
chainMock "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing"
dbTest "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
executiontesting "github.com/OffchainLabs/prysm/v7/beacon-chain/execution/testing"
@@ -17,7 +19,12 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/assert"
mock2 "github.com/OffchainLabs/prysm/v7/testing/mock"
"github.com/OffchainLabs/prysm/v7/testing/require"
"go.uber.org/mock/gomock"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/emptypb"
)
func TestGetExecutionPayloadEnvelope_AcceptsSlotID(t *testing.T) {
@@ -105,3 +112,84 @@ func TestGetExecutionPayloadEnvelope_BlockNotFound(t *testing.T) {
require.Equal(t, http.StatusNotFound, w.Code)
assert.Equal(t, true, bytes.Contains(w.Body.Bytes(), []byte("Block not found")))
}
func testSignedEnvelope() *ethpb.SignedExecutionPayloadEnvelope {
return &ethpb.SignedExecutionPayloadEnvelope{
Message: &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: bytesutil.PadTo([]byte("parent"), 32),
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
StateRoot: bytesutil.PadTo([]byte("state"), 32),
ReceiptsRoot: bytesutil.PadTo([]byte("receipts"), 32),
LogsBloom: make([]byte, 256),
PrevRandao: bytesutil.PadTo([]byte("randao"), 32),
BaseFeePerGas: bytesutil.PadTo([]byte{1}, 32),
BlockHash: bytesutil.PadTo([]byte("blockhash"), 32),
Transactions: [][]byte{},
Withdrawals: []*enginev1.Withdrawal{},
},
ExecutionRequests: &enginev1.ExecutionRequests{},
BuilderIndex: primitives.BuilderIndex(42),
BeaconBlockRoot: bytesutil.PadTo([]byte("beacon-root"), 32),
Slot: primitives.Slot(100),
StateRoot: bytesutil.PadTo([]byte("envelope-state"), 32),
},
Signature: bytesutil.PadTo([]byte("sig"), 96),
}
}
func TestPublishExecutionPayloadEnvelope_OK(t *testing.T) {
ctrl := gomock.NewController(t)
signed := testSignedEnvelope()
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().PublishExecutionPayloadEnvelope(
gomock.Any(), gomock.Any(),
).Return(&emptypb.Empty{}, nil)
jsonEnvelope, err := structs.SignedExecutionPayloadEnvelopeFromConsensus(signed)
require.NoError(t, err)
body, err := json.Marshal(jsonEnvelope)
require.NoError(t, err)
s := &Server{V1Alpha1ValidatorServer: v1alpha1Server}
req := httptest.NewRequest(http.MethodPost, "/eth/v1/beacon/execution_payload_envelope", bytes.NewReader(body))
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishExecutionPayloadEnvelope(w, req)
require.Equal(t, http.StatusOK, w.Code)
}
func TestPublishExecutionPayloadEnvelope_InvalidBody(t *testing.T) {
s := &Server{}
req := httptest.NewRequest(http.MethodPost, "/eth/v1/beacon/execution_payload_envelope", bytes.NewReader([]byte("not json")))
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishExecutionPayloadEnvelope(w, req)
require.Equal(t, http.StatusBadRequest, w.Code)
}
func TestPublishExecutionPayloadEnvelope_ServerError(t *testing.T) {
ctrl := gomock.NewController(t)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().PublishExecutionPayloadEnvelope(
gomock.Any(), gomock.Any(),
).Return(nil, status.Error(codes.Internal, "broadcast failed"))
signed := testSignedEnvelope()
jsonEnvelope, err := structs.SignedExecutionPayloadEnvelopeFromConsensus(signed)
require.NoError(t, err)
body, err := json.Marshal(jsonEnvelope)
require.NoError(t, err)
s := &Server{V1Alpha1ValidatorServer: v1alpha1Server}
req := httptest.NewRequest(http.MethodPost, "/eth/v1/beacon/execution_payload_envelope", bytes.NewReader(body))
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.PublishExecutionPayloadEnvelope(w, req)
require.Equal(t, http.StatusInternalServerError, w.Code)
}

View File

@@ -5,7 +5,6 @@ import (
"net/http"
"net/url"
"strconv"
-"strings"
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
@@ -35,8 +34,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
-segments := strings.Split(r.URL.Path, "/")
-blockId := segments[len(segments)-1]
+blockId := r.PathValue("block_id")
verifiedBlobs, rpcErr := s.Blocker.BlobSidecars(ctx, blockId, options.WithIndices(indices))
if rpcErr != nil {
@@ -131,8 +129,7 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetBlobs")
defer span.End()
-segments := strings.Split(r.URL.Path, "/")
-blockId := segments[len(segments)-1]
+blockId := r.PathValue("block_id")
// Check if versioned_hashes parameter is provided
versionedHashesStr := r.URL.Query()["versioned_hashes"]

View File

@@ -64,6 +64,7 @@ func TestBlobs(t *testing.T) {
t.Run("genesis", func(t *testing.T) {
u := "http://foo.example/genesis"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "genesis")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
@@ -78,6 +79,7 @@ func TestBlobs(t *testing.T) {
t.Run("head", func(t *testing.T) {
u := "http://foo.example/head"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -126,6 +128,7 @@ func TestBlobs(t *testing.T) {
t.Run("finalized", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -150,6 +153,7 @@ func TestBlobs(t *testing.T) {
t.Run("root", func(t *testing.T) {
u := "http://foo.example/" + hexutil.Encode(blockRoot[:])
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", hexutil.Encode(blockRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -174,6 +178,7 @@ func TestBlobs(t *testing.T) {
t.Run("slot", func(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", es)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -198,6 +203,7 @@ func TestBlobs(t *testing.T) {
t.Run("slot not found", func(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", es-1)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es-1))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -215,6 +221,7 @@ func TestBlobs(t *testing.T) {
t.Run("one blob only", func(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d?indices=2", es)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -246,6 +253,7 @@ func TestBlobs(t *testing.T) {
t.Run("no blobs returns an empty array", func(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", es)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -271,6 +279,7 @@ func TestBlobs(t *testing.T) {
overLimit := params.BeaconConfig().MaxBlobsPerBlock(ds)
u := fmt.Sprintf("http://foo.example/%d?indices=%d", es, overLimit)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
@@ -285,6 +294,7 @@ func TestBlobs(t *testing.T) {
t.Run("outside retention period returns 200 with what we have", func(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", es)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
moc := &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock}
@@ -315,6 +325,7 @@ func TestBlobs(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", es+128)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es+128))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -345,6 +356,7 @@ func TestBlobs(t *testing.T) {
u := "http://foo.example/31"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "31")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -367,6 +379,7 @@ func TestBlobs(t *testing.T) {
t.Run("malformed block ID", func(t *testing.T) {
u := "http://foo.example/foo"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "foo")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
@@ -381,6 +394,7 @@ func TestBlobs(t *testing.T) {
t.Run("ssz", func(t *testing.T) {
u := "http://foo.example/finalized?indices=0"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -404,6 +418,7 @@ func TestBlobs(t *testing.T) {
t.Run("ssz multiple blobs", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -455,6 +470,7 @@ func TestBlobs_Electra(t *testing.T) {
t.Run("max blobs for electra", func(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", es)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -487,6 +503,7 @@ func TestBlobs_Electra(t *testing.T) {
limit := params.BeaconConfig().MaxBlobsPerBlock(es) - 1
u := fmt.Sprintf("http://foo.example/%d?indices=%d", es, limit)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -519,6 +536,7 @@ func TestBlobs_Electra(t *testing.T) {
overLimit := params.BeaconConfig().MaxBlobsPerBlock(es)
u := fmt.Sprintf("http://foo.example/%d?indices=%d", es, overLimit)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", es))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
@@ -617,6 +635,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("genesis", func(t *testing.T) {
u := "http://foo.example/genesis"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "genesis")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
@@ -631,6 +650,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("head", func(t *testing.T) {
u := "http://foo.example/head"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -665,6 +685,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("finalized", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -688,6 +709,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("root", func(t *testing.T) {
u := "http://foo.example/" + hexutil.Encode(blockRoot[:])
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", hexutil.Encode(blockRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -711,6 +733,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("slot", func(t *testing.T) {
u := "http://foo.example/123"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "123")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -734,6 +757,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("slot not found", func(t *testing.T) {
u := "http://foo.example/122"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "122")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -751,6 +775,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("no blobs returns an empty array", func(t *testing.T) {
u := "http://foo.example/123"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "123")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -774,6 +799,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("outside retention period still returns 200 what we have in db ", func(t *testing.T) {
u := "http://foo.example/123"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "123")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
moc := &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock}
@@ -803,6 +829,7 @@ func TestGetBlobs(t *testing.T) {
u := "http://foo.example/333"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "333")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -832,6 +859,7 @@ func TestGetBlobs(t *testing.T) {
u := "http://foo.example/31"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "31")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -853,6 +881,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("malformed block ID", func(t *testing.T) {
u := "http://foo.example/foo"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "foo")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
@@ -867,6 +896,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("ssz", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -889,6 +919,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("ssz multiple blobs", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -910,6 +941,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("versioned_hashes invalid hex", func(t *testing.T) {
u := "http://foo.example/finalized?versioned_hashes=invalidhex,invalid2hex"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -935,6 +967,7 @@ func TestGetBlobs(t *testing.T) {
shortHash := "0x1234567890abcdef1234567890abcdef"
u := fmt.Sprintf("http://foo.example/finalized?versioned_hashes=%s", shortHash)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -961,6 +994,7 @@ func TestGetBlobs(t *testing.T) {
u := fmt.Sprintf("http://foo.example/finalized?versioned_hashes=%s", hexutil.Encode(versionedHash[:]))
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -990,6 +1024,7 @@ func TestGetBlobs(t *testing.T) {
u := fmt.Sprintf("http://foo.example/finalized?versioned_hashes=%s&versioned_hashes=%s",
hexutil.Encode(versionedHash1[:]), hexutil.Encode(versionedHash3[:]))
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "finalized")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -1026,6 +1061,7 @@ func TestGetBlobs(t *testing.T) {
u := "http://foo.example/323"
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", "323")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -1063,6 +1099,7 @@ func TestGetBlobs(t *testing.T) {
u := fmt.Sprintf("http://foo.example/%d", fuluForkSlot)
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", fuluForkSlot))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
// Create an empty blob storage (won't be used but needs to be non-nil)
@@ -1117,6 +1154,7 @@ func TestGetBlobs(t *testing.T) {
hexutil.Encode(versionedHash1[:]),
hexutil.Encode(versionedHash2[:]))
request := httptest.NewRequest("GET", u, nil)
request.SetPathValue("block_id", fmt.Sprintf("%d", fuluForkSlot2))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
// Create an empty blob storage (won't be used but needs to be non-nil)

View File

@@ -22,6 +22,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -191,6 +191,11 @@ func prepareConfigSpec() (map[string]any, error) {
data["KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH"] = convertValueForJSON(reflect.ValueOf(uint64(4)), "KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH")
// UPDATE_TIMEOUT is derived from SLOTS_PER_EPOCH * EPOCHS_PER_SYNC_COMMITTEE_PERIOD
data["UPDATE_TIMEOUT"] = convertValueForJSON(reflect.ValueOf(uint64(config.SlotsPerEpoch)*uint64(config.EpochsPerSyncCommitteePeriod)), "UPDATE_TIMEOUT")
// Add Gloas config values from fieldparams required by the /eth/v1/config/spec API.
data["PTC_SIZE"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.PTCSize)), "PTC_SIZE")
data["MAX_PAYLOAD_ATTESTATIONS"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.MaxPayloadAttestations)), "MAX_PAYLOAD_ATTESTATIONS")
data["BUILDER_REGISTRY_LIMIT"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.BuilderRegistryLimit)), "BUILDER_REGISTRY_LIMIT")
data["BUILDER_PENDING_WITHDRAWALS_LIMIT"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.BuilderPendingWithdrawalsLimit)), "BUILDER_PENDING_WITHDRAWALS_LIMIT")
return data, nil
}

View File

@@ -9,9 +9,11 @@ import (
"net/http"
"net/http/httptest"
"reflect"
"strconv"
"testing"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -229,7 +231,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
-assert.Equal(t, 198, len(data))
+assert.Equal(t, 202, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -651,6 +653,14 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "128", v) // From fieldparams.NumberOfColumns
case "UPDATE_TIMEOUT":
assert.Equal(t, "1782", v) // SlotsPerEpoch (27) * EpochsPerSyncCommitteePeriod (66)
case "PTC_SIZE":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.PTCSize), 10), v)
case "MAX_PAYLOAD_ATTESTATIONS":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.MaxPayloadAttestations), 10), v)
case "BUILDER_REGISTRY_LIMIT":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.BuilderRegistryLimit), 10), v)
case "BUILDER_PENDING_WITHDRAWALS_LIMIT":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.BuilderPendingWithdrawalsLimit), 10), v)
default:
t.Errorf("Incorrect key: %s", k)
}