Compare commits


107 Commits

Author SHA1 Message Date
james-prysm
495056625e Validator block v4 (#16594)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Introduces the validator connection point for the REST API to call the block v4 and envelope endpoints.

builds on https://github.com/OffchainLabs/prysm/pull/16488 and
https://github.com/OffchainLabs/prysm/pull/16522

Testing:
```
participants:
  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    supernode: true
    count: 2
    cl_extra_params:
      - --subscribe-all-subnets
      - --verbosity=debug
    vc_extra_params:
      - --enable-beacon-rest-api
      - --verbosity=debug

  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    validator_count: 63
    cl_extra_params:
      - --verbosity=debug
    vc_extra_params:
      - --enable-beacon-rest-api
      - --verbosity=debug

network_params:
  fulu_fork_epoch: 0
  gloas_fork_epoch: 2
  seconds_per_slot: 6
  genesis_delay: 40

additional_services:
  - dora

global_log_level: debug

dora_params:
  image: ethpandaops/dora:gloas-support
```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 22:03:57 +00:00
james-prysm
c298c504ef fixing wrong path name in execution payload bid api (#16690)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Fixes the wrong path name, from `/eth/v2/beacon/execution_payload/bid` to `/eth/v1/beacon/execution_payload_bid`.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-17 21:07:35 +00:00
Manu NALEPA
486e479a99 Prevent expensive state replay when computing sync committees members for the current period (#16688)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
Every `EPOCHS_PER_SYNC_COMMITTEE_PERIOD=256` epochs,
`SYNC_COMMITTEE_SIZE=512` validators are randomly chosen to be part of
the sync committee.

When calling the
[/eth/v1/validator/duties/sync/epoch](https://ethereum.github.io/beacon-APIs/#/Validator/getSyncCommitteeDuties)
endpoint with `epoch` set to the first epoch of the current
period, the Prysm beacon node:
1. Finds the youngest state in the DB before this epoch
2. Replays states (expensive) up to the requested epoch

While this is technically correct, step 2 is very resource-intensive.

This pull request leverages the fact that the `current_sync_committee`
and `next_sync_committee` fields do not change within a period.

==> If the requested epoch and the current epoch are within the same
period, then we can fetch `current_sync_committee` and
`next_sync_committee` from the state corresponding to the current epoch,
which is way less expensive.
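
As a minimal, self-contained sketch of the shortcut (illustrative names and a runnable toy, not Prysm's actual code), the decision reduces to comparing sync committee periods:

```go
package main

import "fmt"

const epochsPerSyncCommitteePeriod = 256 // EPOCHS_PER_SYNC_COMMITTEE_PERIOD

// syncCommitteePeriod maps an epoch to its sync committee period.
func syncCommitteePeriod(epoch uint64) uint64 {
	return epoch / epochsPerSyncCommitteePeriod
}

// canServeFromCurrentState reports whether the requested epoch can be served
// from the current-epoch state (cheap) instead of a replayed one (expensive):
// current_sync_committee and next_sync_committee only change at period boundaries.
func canServeFromCurrentState(requestedEpoch, currentEpoch uint64) bool {
	return syncCommitteePeriod(requestedEpoch) == syncCommitteePeriod(currentEpoch)
}

func main() {
	fmt.Println(canServeFromCurrentState(512, 700)) // true: both in period 2
	fmt.Println(canServeFromCurrentState(511, 700)) // false: period 1 vs period 2
}
```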

**Which issues(s) does this PR fix?**

Fixes:
- https://github.com/OffchainLabs/prysm/issues/16686

**Other notes for review**
Please read commit by commit.

With a Nimbus VC connected:
**Before this PR**
<img width="936" height="308" alt="image"
src="https://github.com/user-attachments/assets/b76f588d-dc95-4916-af93-6ea80b092609"
/>

**After this PR**
<img width="941" height="305" alt="image"
src="https://github.com/user-attachments/assets/65302c90-be33-4525-be5c-a13338335d39"
/>

**Test plan**
Read how to reproduce the issue in the linked issue, and check that the
same reproduction steps do not reproduce the issue with this PR.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-17 14:16:12 +00:00
terence
fbb65f6700 Queue Gloas data column sidecars arriving before their block (#16653)
In Gloas, data column sidecars can arrive via gossip before the block. The change here is to queue them instead of dropping them and re-requesting later.

### Design notes
  - Subnet is verified before queuing (reject bad subnet immediately)
- Columns are stored in a fixed-size `[128]` array per block root,
indexed by column index — duplicates for the same index are ignored
- When the block arrives (via `beaconBlockSubscriber` or
`processPendingBlocks`), queued columns are verified against the block's
bid commitments and saved to storage
- Peers that sent columns failing verification (slot mismatch, invalid
sidecar, bad KZG proof) are downscored (PeerID is tracked)
  - A slot ticker prunes entries from past slots every slot boundary 
- RPC column fetch is skipped for blocks that already have pending
gossip columns
- Each Gloas sidecar is ~44 KB at 21 max blobs (21 cells × 2048 bytes +
proofs). 128 columns per block = ~5.5 MB per block root. The slot ticker
ensures at most one slot's worth of pending roots exist at any time.
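
A minimal sketch of the queue shape described above (illustrative types, not the actual Prysm code):

```go
package pending

import "sync"

const numColumns = 128 // number of data columns per block

// sidecar is a stand-in for a gossip data column sidecar (column data,
// sending peer ID, etc. elided).
type sidecar struct {
	index uint64
}

// pendingColumns holds the columns queued for one block root. A fixed-size
// array indexed by column index makes duplicate handling trivial: a second
// column for the same index is ignored.
type pendingColumns struct {
	slot    uint64
	columns [numColumns]*sidecar // nil = not yet received
}

// queue maps block roots to their pending columns, guarded by a mutex.
type queue struct {
	mu    sync.Mutex
	roots map[[32]byte]*pendingColumns
}

func newQueue() *queue {
	return &queue{roots: make(map[[32]byte]*pendingColumns)}
}

// add queues a column that arrived before its block.
func (q *queue) add(root [32]byte, slot uint64, sc *sidecar) {
	q.mu.Lock()
	defer q.mu.Unlock()
	p, ok := q.roots[root]
	if !ok {
		p = &pendingColumns{slot: slot}
		q.roots[root] = p
	}
	if sc.index < numColumns && p.columns[sc.index] == nil {
		p.columns[sc.index] = sc
	}
}

// prune drops entries from past slots; run on every slot tick, it retains at
// most one slot's worth of pending roots.
func (q *queue) prune(currentSlot uint64) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for root, p := range q.roots {
		if p.slot < currentSlot {
			delete(q.roots, root)
		}
	}
}
```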
2026-04-16 16:36:51 +00:00
Manu NALEPA
f0c7633c87 Pubkey cache: Use map+mutex instead of LRU cache (#16654)
**What type of PR is this?**
Optimization

**What does this PR do? Why is it needed?**
Uncompressing/verifying a (validator) public key from bytes is an
expensive operation.
To avoid doing this operation multiple times for the same public key, a
cache mapping raw, compressed public key to uncompressed, verified ones
is created and populated at node start.

**Before this PR**, an LRU cache with a 2M capacity is used.
The issue with this design is the following:
1. If the cache capacity (2M) is higher than the current count of active public keys, then keys in this cache are never evicted. ==> Using an LRU cache is useless. This is the case for all devnets, testnets and mainnet.
2. If the cache capacity (2M) is lower than the current count of active public keys, then, because validator attestations and block proposals are randomly distributed, some keys will be evicted and then re-inserted very shortly after. (The only valid case for using an LRU here is sync committees, which last ~27 hours.) ==> Using an LRU cache is useless.

In both cases 1. and 2., using an LRU cache is useless and could be replaced by a map (+ mutex for concurrent accesses). Additionally, compared to a simple map, an LRU cache consumes some extra heap memory.

**After this PR**, the LRU cache is removed and replaced by a map (+ mutex for concurrent accesses).
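
A minimal sketch of the resulting cache shape (illustrative types, not the actual Prysm code):

```go
package cache

import "sync"

// DecompressedKey is a stand-in for an uncompressed, verified BLS public key.
type DecompressedKey struct{}

// PubkeyCache maps a raw 48-byte compressed public key to its uncompressed,
// verified form. Entries are never evicted, so a plain map plus RWMutex does
// the job of the old LRU with less heap overhead.
type PubkeyCache struct {
	mu   sync.RWMutex
	keys map[[48]byte]DecompressedKey
}

func NewPubkeyCache() *PubkeyCache {
	return &PubkeyCache{keys: make(map[[48]byte]DecompressedKey)}
}

// Get returns the cached decompressed key, if present.
func (c *PubkeyCache) Get(raw [48]byte) (DecompressedKey, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	k, ok := c.keys[raw]
	return k, ok
}

// Put stores a decompressed key; concurrent writers are serialized.
func (c *PubkeyCache) Put(raw [48]byte, k DecompressedKey) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.keys[raw] = k
}
```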

**Cache memory usage (source: Hoodi Pyroscope)**
- Before this PR: 222 MB
- After this PR:  158 MB

**==> Gain: 64 MB**

That's not a lot, but it is an easy saving.

**Before this PR**
<img width="940" height="586" alt="image"
src="https://github.com/user-attachments/assets/ef92abf5-e781-44cb-83b0-7db7b52ef371"
/>

**After this PR**
<img width="1018" height="274" alt="image"
src="https://github.com/user-attachments/assets/8261e18c-3edc-4530-a55c-4c9eaa9a6d6c"
/>

<img width="1020" height="922" alt="image"
src="https://github.com/user-attachments/assets/afa07aac-0b28-417b-815e-f00936d04c02"
/>


**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-16 14:14:04 +00:00
Barnabas Busa
321828e775 Fix MaxBuildersPerWithdrawalsSweep in minimal preset (#16623)
## Summary
- The minimal preset was missing the `MaxBuildersPerWithdrawalsSweep`
override, causing it to inherit the mainnet value of `16384` instead of
the correct minimal value of `16`
- This aligns with the [consensus-specs minimal gloas
preset](https://github.com/ethereum/consensus-specs/blob/master/presets/minimal/gloas.yaml#L23)
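
For illustration only (a pared-down stand-in for the real `params` code, not the actual implementation), the fix amounts to an explicit override on top of the inherited mainnet default:

```go
package params

// beaconChainConfig is a pared-down stand-in for the real config struct.
type beaconChainConfig struct {
	MaxBuildersPerWithdrawalsSweep uint64
}

// minimalConfig applies the minimal-preset overrides on top of the mainnet
// defaults. Without the explicit override, the minimal preset silently
// inherits the mainnet value.
func minimalConfig() *beaconChainConfig {
	cfg := &beaconChainConfig{
		MaxBuildersPerWithdrawalsSweep: 16384, // mainnet default
	}
	cfg.MaxBuildersPerWithdrawalsSweep = 16 // minimal preset (consensus-specs)
	return cfg
}
```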

## Test plan
- [x] `go build ./config/params/` passes
- [ ] Verify minimal preset spec tests pass with the corrected value

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-14 20:19:15 +00:00
james-prysm
e2ffb42abe allow proposer preferences on the same epoch (#16610)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

based on https://github.com/ethereum/consensus-specs/pull/5035

Adds capabilities on the validator client side to submit current-epoch proposer preferences:
- For current-epoch submissions, we skip slot 0 and start at slot 1. We also need a 1-slot buffer if we start the validator client mid-epoch and need to propose during that epoch.
- Current-epoch submissions don't fire in Fulu; only next-epoch submissions fire for Gloas at the last epoch of Fulu before Gloas.

Kurtosis test: cherry-picked the changes onto epbs-devnet-1 and ran
```
participants:
  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    supernode: true
    count: 4
    vc_extra_params:
      - "--verbosity=debug"

network_params:
  fulu_fork_epoch: 0
  gloas_fork_epoch: 2
  seconds_per_slot: 6
  genesis_delay: 40

additional_services:
  - dora

global_log_level: debug

dora_params:
  image: ethpandaops/dora:gloas-support
```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 19:45:40 +00:00
james-prysm
d1bb9018d3 reversing checkpoint api change (#16660)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

https://github.com/OffchainLabs/prysm/pull/16635 will break clients' checkpoint sync if the first slot of the epoch is missed, but it was added to resolve some changes in Gloas.

With https://github.com/ethereum/consensus-specs/pull/5094 we will be able to keep the old approach for the checkpoint sync endpoints, so the previous PR (16635) is no longer needed.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 19:03:51 +00:00
james-prysm
8c70e4bbb1 implementing envelope rest apis (#16522)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Adds:

- GET /eth/v1/validator/execution_payload_envelope/{slot} endpoint
- POST /eth/v1/beacon/execution_payload_envelope endpoint

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <jhe@offchainlabs.com>
2026-04-14 17:31:40 +00:00
Rupam
c004abc89d return 404 for requests that ask for pre checkpoint sync state (#16615)

**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

- Prevents invalid/time-expensive replay for unavailable historical slots by returning Not Found instead of trying to replay from genesis.
- Handles replay no-data errors as Not Found, so missing historical data no longer surfaces as Internal Server Error in HTTP paths (this might not be required; I just added it for consistency, let me know if I should remove it)
- Adds unit tests for:
i) slot earlier than earliest available
ii) slot before backfill low slot
iii) replay no-data mapping to not-found
iv) shared HTTP error mapping for no-data to 404

**Which issues(s) does this PR fix?**

Addresses #16191

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2026-04-14 15:30:20 +00:00
Aliz Fara
85316c5d16 Fix event subscription timeout handling (#16681)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

This excludes `/eth/v1/events` from the global `http.TimeoutHandler`
while keeping the timeout behavior for other HTTP routes unchanged.

When `--api-timeout` is set, the timeout wrapper is incompatible with
Prysm's SSE event stream handling and can return `200 OK` with an empty
response body instead of keeping the stream open.

**Which issues(s) does this PR fix?**

Fixes #15710

**Other notes for review**

- Adds focused coverage for the SSE bypass and the unchanged timeout
behavior for non-SSE routes.
- Stabilizes `TestServer_StartStop` by replacing a goroutine log
assertion with `require.Eventually`.
- Validation used during review:
`go test ./api/server/httprest -run 'TestServer_TimeoutHandler' -count=1
-v`

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 14:45:03 +00:00
james-prysm
9069afc6d0 Get block v4 (#16488)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Implements the new GET /eth/v4/validator/blocks/{slot} endpoint; we don't hook up the validator client to use it for post-Gloas in this PR.

**Which issues(s) does this PR fix?**

Fixes https://github.com/ethereum/beacon-APIs/pull/580

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-14 03:23:42 +00:00
Sahil Sojitra
9802242cfe perf(auth): optimize auth token handling #15763 (#15793)

**What type of PR is this?**
> Other

**What does this PR do? Why is it needed?**
This PR applies micro-optimizations to the auth token handling code. It
improves efficiency and readability by reducing unnecessary allocations
and adding an early length check before performing constant-time
comparison. [PR#15763](https://github.com/OffchainLabs/prysm/pull/15763)

**Changes Included**
- Use `strings.HasPrefix` + slicing instead of `strings.Split` to avoid
allocations
- Add early length check before `subtle.ConstantTimeCompare`
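
A minimal, self-contained sketch of both changes (illustrative names, not the actual Prysm code):

```go
package auth

import (
	"crypto/subtle"
	"strings"
)

// validateToken checks an "Authorization: Bearer <token>" header value
// against the expected token.
func validateToken(header, want string) bool {
	// strings.HasPrefix plus slicing avoids the allocations strings.Split makes.
	const prefix = "Bearer "
	if !strings.HasPrefix(header, prefix) {
		return false
	}
	got := header[len(prefix):]
	// Early length check: ConstantTimeCompare already returns 0 for unequal
	// lengths, but checking first skips the byte-by-byte walk entirely. Token
	// length is not secret, so this leaks nothing.
	if len(got) != len(want) {
		return false
	}
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}
```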

**Which issues(s) does this PR fix?**
No functional changes or security fixes; this PR improves performance
and code clarity in the auth token handling logic.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: maradini77 <140460067+maradini77@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-13 18:20:38 +00:00
Alleysira
e75166cfe7 Fix blob index bounds check (#16640)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**

This PR addresses a runtime panic caused by a missing bounds check on
blob indices. I've implemented a fix and would like to hear from you.
Thanks!

##### What does this PR do?
- Add a bound check for `blob.Index`.
- Add 3 tests to show that without the fix prysm will panic.

##### Why is it needed?

`BlobAlignsWithBlock` accesses `commits[blob.Index]` without checking
that `blob.Index < len(commits)`. It only checks `blob.Index <
MaxBlobsPerBlock` (the spec-wide maximum, e.g. 6 for Deneb). If a blob
has an index that passes the spec max check but exceeds the actual
number of commitments in the block, the code panics with an
index-out-of-range runtime error. The added tests confirmed this.
Fortunately, I think this is unreachable because all callers validate
blob indices upstream:
- `blobValidatorFromRootReq` rejects blobs whose index wasn't in the
request
- `newSequentialBlobValidator` enforces strictly sequential indices (0,
1, 2, ...)
- `requestsForMissingIndices` only generates indices 0..len(commits)-1

The fix adds an explicit bounds check as defense-in-depth, so that if a future caller bypasses upstream validation, the function returns `ErrIncorrectBlobIndex` instead of panicking.
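
A minimal sketch of the check (illustrative, simplified from the actual `BlobAlignsWithBlock` logic):

```go
package verify

import "errors"

var errIncorrectBlobIndex = errors.New("incorrect blob index")

const maxBlobsPerBlock = 6 // spec-wide maximum, e.g. 6 for Deneb

// checkBlobIndex requires the index to be below both the spec maximum and
// the number of commitments actually present in the block; otherwise
// commits[index] would panic with an index-out-of-range error.
func checkBlobIndex(index uint64, commits [][]byte) error {
	if index >= maxBlobsPerBlock || index >= uint64(len(commits)) {
		return errIncorrectBlobIndex
	}
	return nil
}
```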

Three test functions are added to `blob_test.go`:
- `TestBlobAlignsWithBlock_OOBIndexReturnsError`: blob with `index >=
len(commits)` but `< MaxBlobsPerBlock` returns ErrIncorrectBlobIndex.
Without this fix, this test panics.
- `TestBlobAlignsWithBlock_MaxIndexEdge`: boundary test confirming the
last valid index succeeds and the first OOB index errors.
- `TestBlobAlignsWithBlock_AllValidIndicesSucceed`: all indices in
0..nCommitments-1 succeed without error or panic.

Without the fix:
```bash
$ go test ./beacon-chain/sync/verify/ -v -count=1
=== RUN   TestBlobAlignsWithBlock
=== RUN   TestBlobAlignsWithBlock/happy_path_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0
=== RUN   TestBlobAlignsWithBlock/before_deneb_blob_0
--- PASS: TestBlobAlignsWithBlock (0.00s)
    --- PASS: TestBlobAlignsWithBlock/happy_path_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/before_deneb_blob_0 (0.00s)
=== RUN   TestBlobAlignsWithBlock_OOBIndexReturnsError
--- FAIL: TestBlobAlignsWithBlock_OOBIndexReturnsError (0.00s)
panic: runtime error: index out of range [3] with length 3 [recovered, repanicked]

goroutine 116 [running]:
testing.tRunner.func1.2({0x12d1aa0, 0xc000363818})
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1872 +0x237
testing.tRunner.func1()
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1875 +0x35b
panic({0x12d1aa0?, 0xc000363818?})
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/runtime/panic.go:783 +0x132
github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify.BlobAlignsWithBlock({0xc00030de60, {0xf, 0x74, 0x6b, 0x28, 0xa, 0x5a, 0x94, 0xdd, 0x55, ...}}, ...)
        /home/alleysira/pr/prysm/beacon-chain/sync/verify/blob.go:44 +0x5b0
github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify.TestBlobAlignsWithBlock_OOBIndexReturnsError(0xc0003a2540)
        /home/alleysira/pr/prysm/beacon-chain/sync/verify/blob_test.go:109 +0x557
testing.tRunner(0xc0003a2540, 0x14b91f8)
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1934 +0xea
created by testing.(*T).Run in goroutine 1
        /home/alleysira/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.1.linux-amd64/src/testing/testing.go:1997 +0x465
FAIL    github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify       0.016s
FAIL
```
With the fix, the tests pass:
```shell
go test ./beacon-chain/sync/verify/ -v -count=1
=== RUN   TestBlobAlignsWithBlock
=== RUN   TestBlobAlignsWithBlock/happy_path_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_blob_0
=== RUN   TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0
=== RUN   TestBlobAlignsWithBlock/before_deneb_blob_0
--- PASS: TestBlobAlignsWithBlock (0.00s)
    --- PASS: TestBlobAlignsWithBlock/happy_path_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/mismatched_roots_-_fake_blob_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock/before_deneb_blob_0 (0.00s)
=== RUN   TestBlobAlignsWithBlock_OOBIndexReturnsError
--- PASS: TestBlobAlignsWithBlock_OOBIndexReturnsError (0.00s)
=== RUN   TestBlobAlignsWithBlock_MaxIndexEdge
--- PASS: TestBlobAlignsWithBlock_MaxIndexEdge (0.00s)
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_0
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_1
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_2
=== RUN   TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_3
--- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_0 (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_1 (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_2 (0.00s)
    --- PASS: TestBlobAlignsWithBlock_AllValidIndicesSucceed/index_3 (0.00s)
PASS
ok      github.com/OffchainLabs/prysm/v7/beacon-chain/sync/verify       0.017s
```

**Which issues(s) does this PR fix?**

None.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-13 17:25:05 +00:00
satushh
8864484230 Fix processBatchedBlocks returning pre-filter block count (#16657)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Move bwbCount assignment **after** validUnprocessed so peer scoring only
credits actually processed blocks.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-13 14:19:02 +00:00
terence
5bb13408d5 Use fork-aware deserialization for data columns from disk (#16650)
- `VerifiedRODataColumnFromDisk` now takes an epoch parameter
- Columns at or after the Gloas fork epoch unmarshal as
`DataColumnSidecarGloas`
- Earlier columns continue to unmarshal as `DataColumnSidecar` (Fulu)
- Previously all columns used the Fulu type, which fails for Gloas
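
A minimal sketch of the branch (stub types stand in for the real protobuf sidecars; not the actual Prysm API):

```go
package storage

import "fmt"

// sszUnmarshaler matches what both sidecar protobuf types implement.
type sszUnmarshaler interface{ UnmarshalSSZ(buf []byte) error }

// Stub stand-ins for DataColumnSidecar (Fulu) and DataColumnSidecarGloas.
type fuluSidecar struct{}

func (s *fuluSidecar) UnmarshalSSZ(buf []byte) error { return nil } // stub

type gloasSidecar struct{}

func (s *gloasSidecar) UnmarshalSSZ(buf []byte) error { return nil } // stub

// dataColumnFromDisk picks the protobuf type by epoch before unmarshaling:
// columns at or after the Gloas fork epoch use the Gloas type, earlier ones
// the Fulu type.
func dataColumnFromDisk(raw []byte, epoch, gloasForkEpoch uint64) (sszUnmarshaler, error) {
	var sc sszUnmarshaler
	if epoch >= gloasForkEpoch {
		sc = &gloasSidecar{}
	} else {
		sc = &fuluSidecar{}
	}
	if err := sc.UnmarshalSSZ(raw); err != nil {
		return nil, fmt.Errorf("unmarshal data column at epoch %d: %w", epoch, err)
	}
	return sc, nil
}
```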
2026-04-10 20:48:11 +00:00
Potuz
99327d7422 Fix initial sync bid validation failure (#16652)
During initial sync, state replay skips the last block's execution
payload envelope (no next block to verify delivery). When the parent
envelope was already saved by a previous batch, envelopesForBlocks
skipped it as "already processed", leaving getBatchPrestate unable to
apply it. This caused LatestBlockHash to be stale, failing bid
validation on the next block.

Two fixes:
- envelopesForBlocks: always include the parent envelope even if
persisted
- getBatchPrestate: when parent envelope is in DB, load and apply the
blinded form instead of the broken StateByRootInitialSync(env.BlockHash)
call that passed an execution hash where a beacon block root was
expected

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-10 19:02:37 +00:00
terence
527863e9de Set bid KZG commitments on gloas's data columns (#16649)
- Set bid commitments on verified columns during gossip validation
(`validateDataColumnGloas`)
- Add `setBidCommitments` helper for the RPC fetch path
(`FetchDataColumnSidecars`)
- Track `commitmentsByRoot` on `DataColumnSidecarsParams` so peer
fetches can set commitments before verification
2026-04-10 16:53:10 +00:00
terence
de34b4dfae Skip inclusion proof verification for Gloas data columns (#16647)
- Gloas data column sidecars don't carry block headers or inclusion
proofs
  - Skip `VerifyDataColumnSidecarInclusionProof` for Gloas sidecars
- Skip Gloas columns in the batch verifier `SidecarInclusionProven` loop
2026-04-10 15:18:48 +00:00
terence
ccf61fb91b Decode Gloas data columns using correct protobuf type over RPC (#16648)
- `readChunkedDataColumnSidecar` now branches on fork version
- Gloas columns decode as `DataColumnSidecarGloas`, Fulu as
`DataColumnSidecar`
- Previously all RPC columns decoded as Fulu, causing failures when
non-proposer nodes fetched Gloas columns from peers
2026-04-10 14:38:47 +00:00
terence
e9fdeee7bb Add missing fields to Gloas genesis block bid (#16646)
- The Gloas genesis block's `SignedExecutionPayloadBid` was missing
`PrevRandao` (32 bytes) and `FeeRecipient` (20 bytes)
  - This caused SSZ marshaling failures at genesis
2026-04-10 14:25:44 +00:00
satushh
9da54ce816 Fix package-level logger mutation (#16645)
**What type of PR is this?**

Bug Fix

**What does this PR do? Why is it needed?**

- Fix package-level logger mutation in initial-sync Resync and validator
proposer GetBlock.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-09 14:40:44 +00:00
terence
ee9fa34b30 Fix Gloas data column KZG commitments for operation feed (#16643)
- Fix `WARN sync: Failed to get KZG commitments for operation feed
error=data column sidecar is not a fulu type` spam on Gloas devnet
- Gloas data column sidecars don't carry KZG commitments, they live in
the block's execution payload bid. Added `bidCommitmentsGloas` to
`RODataColumn` and populate it in `validateDataColumnGloas` using the
block already fetched from DB
- We want two things: 1) no extra DB lookups, and 2) no function signature changes

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-08 22:01:17 +00:00
terence
cf09469ac9 Add Gloas engine API method versions for devnet (#16642)
- Wire up `engine_newPayloadV5` for Gloas, using it based on slot.
Reuses `ExecutionPayloadDeneb` with
  execution requests (same params as V4, just version bump).
- Add Gloas to `engine_forkchoiceUpdatedV3` and `engine_getPayloadV5`
(shared with fulu).
- Add slot parameter to `NewPayload` interface so the engine client can
select V4 vs V5 at the fork boundary. (this will change later!)
- Add GloasEnabled() config helper and gloasEngineEndpoints for
capability exchange.
2026-04-08 20:02:38 +00:00
Potuz
f3dfcbab2a Use proposer preferences cache for payload attributes after Gloas (#16620)
## Summary
- Adds `ProposerPreferencesCache` to the blockchain service so
`trackedProposer()` can use Gloas gossip preferences (fee recipient, gas
limit) when constructing payload attributes for FCU
- When `PrepareAllPayloads` is enabled, checks the preferences cache
first, falling back to the default burn address
- When a validator is tracked, checks the preferences cache to override
the tracked validator's fee recipient
- Adds `GasLimit` field to `TrackedValidator` struct, populated from
proposer preferences

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:35:24 +00:00
terence
10cd675793 Construct data column sidecars from bid in Gloas blocks (#16638)
- Add `PopulateFromBid` as a new `ConstructionPopulator` that extracts
KZG commitments directly from the execution payload bid in Gloas (ePBS)
blocks
- In Gloas, the execution payload arrives separately via the payload
envelope, but the bid's KZG commitments are available in the block
immediately — this allows data column sidecars to be constructed from
the EL (`engine_getBlobsV2`) as soon as the block arrives, without
waiting for the envelope
- Wire `PopulateFromBid` into `processSidecarsFromExecutionFromBlock`
for Gloas blocks, replacing the previous early return that skipped EL
reconstruction entirely

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-08 13:38:22 +00:00
terence
f01575e44c Fix initial sync envelope validation for genesis blocks (#16637)
- Fixes initial sync failing with envelope does not match block when
syncing a Gloas chain from genesis
- Genesis (slot 0) has no separate execution payload envelope, its
execution block hash is embedded in the genesis state. The validation
loop incorrectly tried to match this hash transition against an
envelope, which always failed
2026-04-08 03:08:13 +00:00
satushh
129d6e1088 Fix swapped JSON tags in ChainReorgEvent struct (#16639)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

The JSON tags in the `ChainReorgEvent` struct were swapped.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-07 11:41:15 +00:00
terence
883d78221f Use ValidatorIndex type for proposer_lookahead in beacon state proto (#16634)
- Adds `cast_type` annotation to `proposer_lookahead` field in
`BeaconStateFulu` and `BeaconStateGloas` protobuf definitions to use
`primitives.ValidatorIndex` instead of raw `uint64`
- Matches the spec type `Vector[ValidatorIndex, ...]` and is consistent
with how `ptc_window` already uses `cast_type` for its validator indices
- Updates `InitializeProposerLookahead` to return
`[]primitives.ValidatorIndex` directly, removing all `uint64` conversion
boilerplate
- Adds `proposerLookaheadVal()` copy method for `ToProto` consistency
with other slice fields
2026-04-07 02:42:24 +00:00
Manu NALEPA
6e4d7fd781 ProcessEffectiveBalanceUpdates: Avoid copying a validator when the computed effective balance is unchanged. (#16631)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
In (electra) `ProcessEffectiveBalanceUpdates`, a `0x00...` validator with a balance > 33.25 ETH enters the
```go
if balance+downwardThreshold < val.EffectiveBalance() || val.EffectiveBalance()+upwardThreshold < balance {
...
}
```

condition.

The validator is copied (`newVal = val.Copy()`) and returned, **even if the effective balance did not actually change**.
As a consequence, this validator is considered "dirty" in the validators field trie, and all the corresponding branches are re-computed when computing the hash tree root of the validators field trie of the beacon state.

This PR adopts the same behavior as before Electra:

6f437b561a/beacon-chain/core/electra/effective_balance_updates.go (L32-L63)

and copies the validator (considering it dirty) only if its effective balance actually changed.
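
A minimal sketch of the corrected flow (illustrative constants and types, not the actual Prysm code):

```go
package electra

const (
	increment      = 1_000_000_000     // EFFECTIVE_BALANCE_INCREMENT (1 ETH in Gwei)
	hysteresisDown = increment / 4     // DOWNWARD_THRESHOLD (0.25 ETH)
	hysteresisUp   = increment * 5 / 4 // UPWARD_THRESHOLD (1.25 ETH)
)

type validator struct {
	effectiveBalance uint64
	maxEB            uint64 // 32 ETH for 0x00 credentials, 2048 ETH for compounding
}

// updateEffectiveBalance returns the new effective balance and whether the
// validator must be copied (marked dirty). A 0x00 validator with balance
// > 33.25 ETH (32 ETH + upward threshold) satisfies the hysteresis condition
// every epoch, but its capped effective balance stays at 32 ETH, so no copy
// is needed.
func updateEffectiveBalance(v validator, balance uint64) (uint64, bool) {
	if balance+hysteresisDown >= v.effectiveBalance && v.effectiveBalance+hysteresisUp >= balance {
		return v.effectiveBalance, false // hysteresis condition not met
	}
	eb := balance - balance%increment
	if eb > v.maxEB {
		eb = v.maxEB
	}
	return eb, eb != v.effectiveBalance // copy only on an actual change
}
```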

**Which issues(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16630

**Other notes for review**
The first commit only introduces a new metric showing the issue.
The second commit actually solves the issue.

**Before this PR:**
**~9,400 validators** considered dirty on mainnet every epoch

<img width="942" height="308" alt="image"
src="https://github.com/user-attachments/assets/31da9c92-aa0f-4d71-a402-92ed62738803"
/>


**After this PR:**
**~15 validators** considered dirty on mainnet every epoch (a reduction of ~620x).

<img width="946" height="312" alt="image"
src="https://github.com/user-attachments/assets/afc5e72a-ccda-4636-87d9-dab2fbbf5c1c"
/>


**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-06 16:29:31 +00:00
terence
14f5e6f414 Downgrade genesis forkchoice balance underflow warning to debug (#16633)
- Demotes the "node with invalid balance, setting it to zero" warning to DEBUG level for genesis nodes in forkchoice
- Non-genesis blocks retain the WARN level, since underflow there indicates a real bug
- This is required for the Gloas e2e test because it is planned to fail on any warning or error
2026-04-06 14:51:24 +00:00
Potuz
9d084bceb3 Fix finalized and justified state endpoint to not advance the slot (#16635)
Use StateByRoot with the checkpoint root instead of replaying to the
epoch start slot. The previous approach incorrectly advanced the state
beyond the checkpoint block's post-state.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 20:12:21 +00:00
terence
f79d2efc6e Fix zero head block hash in FCU at gloas genesis (#16629)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-03 04:06:34 +00:00
terence
9dba7c5319 Check pending deposits before applying builder deposits (#16532)
Add `IsPendingValidator` check to `processDepositRequest` so that
deposit requests with builder credentials are routed to the validator
pending queue when a pending deposit with a valid signature already
exists for the same pubkey.
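
A minimal sketch of the routing decision (illustrative names, not the actual `processDepositRequest` code):

```go
package deposits

// routeDepositRequest sketches the decision: a deposit request carrying
// builder credentials is applied as a builder deposit only when no pending
// deposit with a valid signature already exists for the same pubkey;
// otherwise it joins the validator pending queue.
func routeDepositRequest(
	pubkey [48]byte,
	hasBuilderCredentials bool,
	isPendingValidator func([48]byte) bool,
	queueAsValidator func(),
	applyAsBuilder func(),
) {
	if hasBuilderCredentials && !isPendingValidator(pubkey) {
		applyAsBuilder()
		return
	}
	queueAsValidator()
}
```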

Also updated spec test to alpha3 so we can merge this with green CI/CD

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-02 21:23:21 +00:00
terence
c02c057b7d core: implement cached PTC window in state (#16573)
This adds the PTC cache to the Gloas proto and native beacon state,
updates SSZ/hash-tree-root handling, initializes the cache on Gloas
upgrade, rotates it during epoch processing, and switches payload
committee lookups to read from the cached window instead of recomputing
PTC assignments on demand

Reference: https://github.com/ethereum/consensus-specs/pull/4979
2026-04-02 17:56:02 +00:00
terence
b6ec6a8eec fix: add Gloas genesis block support (#16627)
## Summary

- Add missing `*ethpb.BeaconStateGloas` case to
`NewGenesisBlockForState` type switch
- Create `gloasGenesisBlock()` with the correct Gloas block body
structure (`SignedExecutionPayloadBid` + `PayloadAttestations`)

Fixes the `unknown underlying type for state.BeaconState value` error
when starting a node from a Gloas genesis state.
2026-04-02 17:05:20 +00:00
terence
3ca8c3ba35 Support gloas blob protobuf for readonly (#16618)
- Refactor `RODataColumn` to support both Fulu and Gloas data column
sidecar protobuf types. Fulu-only accessors now return errors instead of
zero values when called on Gloas sidecars
- Wire up Gloas `DataColumnSidecarGloas` across gossip topic mappings,
pubsub decoding, validation, and RPC serving
  - Gloas duplicate check uses `(block_root, index)` per spec
- Precompute and broadcast Gloas data column sidecars during block
proposal, before the execution payload envelope, so receivers pass data
availability checks
- Fix `WriteDataColumnSidecarChunk` to encode the correct SSZ type per
fork
2026-04-02 15:44:41 +00:00
terence
1092c7135f Refactor gloas process_execution_payload into distinct entry points (#16600)
## Summary

- Decompose `process_execution_payload` into four explicit entry points,
one per caller
- Extract shared helpers: `cacheLatestBlockHeaderStateRoot`,
`setLatestBlockHeaderStateRoot`, `validatePayloadConsistency`,
`verifyPostStateRoot`
- Unexport package-internal functions:
`applyExecutionPayloadStateMutations`,
`verifyExecutionPayloadEnvelopeSignature`
- Rename `ApplyBlindedExecutionPayloadEnvelopeForStateGen` →
`ProcessBlindedExecutionPayload`
- Rename `ApplyExecutionPayloadNoVerifySig` →
`ProcessExecutionPayloadWithDeferredSig`

## Motivation

The previous code routed all callers through `ApplyExecutionPayload`,
which tried to serve every path at once. Each caller's assumptions were
implicit rather than visible in the code. Now each entry point reads
top-to-bottom as exactly the steps that path requires.

## Entry points

| Step | `ProcessExecutionPayload` | `ProcessExecutionPayloadWithDeferredSig` | `ApplyExecutionPayload` | `ProcessBlindedExecutionPayload` |
|---|---|---|---|---|
| **Caller** | gossip | init-sync | proposer | stategen / replay |
| **Verify signature** | inline | 🔶 deferred (`SignatureBatch`) | | |
| **Patch header state root** | `cacheLatestBlockHeaderStateRoot` (computes HTR) | `setLatestBlockHeaderStateRoot` (caller-provided) | `cacheLatestBlockHeaderStateRoot` (computes HTR) | `setLatestBlockHeaderStateRoot` (caller-provided) |
| **Validate consistency** | `validatePayloadConsistency` (full) | `validatePayloadConsistency` (full) | `validatePayloadConsistency` (probably can be removed but outside the scope) | minimal bid checks (builder index + block hash) |
| **Verify post-state root** | `verifyPostStateRoot` | `verifyPostStateRoot` | (caller computes it) | (trusted) |
| **Envelope type** | `ROSignedExecutionPayloadEnvelope` | `ROSignedExecutionPayloadEnvelope` | `ROExecutionPayloadEnvelope` | `ROBlindedExecutionPayloadEnvelope` |
2026-04-02 09:32:11 +00:00
james-prysm
73033a9d67 allowing ptc duties for next epoch on grpc endpoint (#16608)
**What type of PR is this?**

 Bug fix


**What does this PR do? Why is it needed?**

Adds the next-epoch fix for the gRPC endpoint (currently unused):

https://github.com/OffchainLabs/prysm/pull/16591

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-02 07:46:38 +00:00
james-prysm
a7b83c358a Pre fork proposer preferences (#16588)
**What type of PR is this?**

 Bug fix


**What does this PR do? Why is it needed?**

context https://github.com/ethereum/consensus-specs/pull/4947

This PR allows the proposer preferences topic to be subscribed to 1 epoch before Gloas, as well as allowing proposer preferences to be published before the Gloas fork. The digest used is still Gloas despite being in the Fulu fork.

The validator client submits mid-epoch when it is 1 epoch away from the fork, to avoid races.

Tested in Kurtosis:
```
participants:
  - el_type: geth
    el_image: ethpandaops/geth:epbs-devnet-0
    cl_type: prysm
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    supernode: true
    count: 2

network_params:
  fulu_fork_epoch: 0
  gloas_fork_epoch: 2
  seconds_per_slot: 6
  genesis_delay: 40

additional_services:
  - dora

global_log_level: debug

dora_params:
  image: ethpandaops/dora:gloas-support

```

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16587

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-02 02:50:47 +00:00
Potuz
29a0fd6760 Add gRPC endpoint to submit signed execution payload bids (#16614)
## Summary
- Adds `SubmitSignedExecutionPayloadBid` RPC to the
`BeaconNodeValidator` gRPC service
- Broadcasts the signed bid to the P2P gossip network
- Updates all interface implementations (gRPC client, beacon-API
client), and mocks

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-04-01 20:24:23 +00:00
satushh
209e46bab7 PTC duties no longer computed from a pre-Gloas state at the Fulu to Gloas fork boundary (#16619)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

At the fork boundary, the PTC was computed from a pre-Gloas state, so this PR adds a state version check to avoid doing that.

Added extra tests to cover it.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-01 19:14:46 +00:00
Preston Van Loon
108e2806cb Fastssz update to allow generics (#16628)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Incorporates fastssz PR 19:
https://github.com/OffchainLabs/fastssz/pull/19.

**Which issues(s) does this PR fix?**

This allows for use of []primitives.ValidatorIndex in ssz code
generation.

**Other notes for review**

Most of the diff is regenerating ssz.go files. Review go.mod and
deps.bzl carefully.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-04-01 20:44:08 +00:00
Potuz
68c4c36e65 Add beacon API endpoint to publish signed execution payload bids (#16612)
## Summary
- Adds `POST /eth/v2/beacon/execution_payload/bid` beacon API endpoint
- Accepts `SignedExecutionPayloadBid` as JSON or SSZ (Content-Type
based)
- Broadcasts the bid to the P2P gossip network; the existing gossip
subscriber handles caching

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 01:23:53 +00:00
Potuz
67cc68c3bb Potuz/replay post gloas (#16598)
Fix state replay failing on post-CL ancestor states for Gloas blocks

When replaying blocks from an ancestor state that is post-CL (before
execution payload delivery), the first block's bid validation fails
because state.latestBlockHash hasn't been updated with the ancestor's
delivered payload hash. This causes "bid parent block hash mismatch"
errors during forkchoice setup on node restart.

Before the replay loop, check if the first block's bid parentBlockHash
differs from state.latestBlockHash. If so, load and apply the ancestor's
execution payload envelope from DB to bring the state to post-EL.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-30 21:25:41 +00:00
satushh
c33f0d04b7 Re-add next-epoch lookahead for PTC duties (#16591)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Add current and next epoch lookahead as per
https://github.com/ethereum/beacon-APIs/pull/592

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-30 18:10:27 +00:00
james-prysm
f05972a181 changing log to warn in fallback log (#16606)
**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**
changing info log to warn for fallback message

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-30 15:55:50 +00:00
Aarsh Shah
7352ae03c6 Fix our flakiest unit tests (#16395)
**What type of PR is this?**
Bug fix


**What does this PR do? Why is it needed?**

This PR fixes the flakiest unit tests in Prysm which are
`TestFilterSubnetPeers` and
`TestService_BroadcastAttestationWithDiscoveryAttempts`.

It also refactors `TestStartDiscV5_DiscoverAllPeers` to use a
require.Eventually to make it less flaky but it still remains flaky. We
can't fully unflake this test as the discV5 library does not expose any
deterministic events we can block on.

It also speeds them up by getting rid of "time.Sleep" and
"require.Eventually" in these tests by blocking on deterministic events
instead.

Details on how each test has been fixed/what was broken have been left
as comments on the corresponding changes.
2026-03-30 14:12:49 +00:00
Potuz
4f34624a54 fix: use attestation slot epoch for fork digest in gossip topic validation (#16604)
The attestation gossip validator was using currentForkDigest() to build
the expected topic prefix. This fails at fork boundaries because the
beacon node subscribes to the next fork's topics one epoch early: a
message arriving on the upcoming fork's topic would be compared against
the current fork digest, causing a spurious reject with the misleading
error "attestation's subnet does not match with pubsub topic".

Use the attestation's own slot epoch to derive the correct fork digest
instead, so the validation matches the topic the attestation was
actually published on.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 13:25:33 +00:00
Potuz
4e44fdf55e Gloas/forkchoice setup (#16599)
Fix forkchoice tree setup missing full payload nodes on restart

During forkchoice tree reconstruction on node restart,
buildForkchoiceChain never set HasPayload on chain entries, so
InsertChain never created full payload nodes. When a child block's bid
references the parent's delivered payload hash,
resolveParentPayloadStatus
looks for a full parent node that doesn't exist, causing "invalid parent
root" errors.

Add resolveChainPayloadStatus to determine which blocks had payloads
delivered by comparing consecutive bids. For the finalized root (tree
root), check the first chain block's bid to determine if a full node is
needed, and create it via the new MarkFullNode forkchoice method before
inserting the chain.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 12:58:41 +00:00
Preston Van Loon
139773aa3a Hdiff: save hot states at tree boundaries (#16589)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

When performing a long initial sync, I found that states were not being
progressively saved. If the client was improperly shut down, then it
would be replaying many states on restart.

**Which issues(s) does this PR fix?**

**Other notes for review**

Testing:
- Start initial sync from genesis with hdiff enabled
- Do an improper shutdown (restart your computer, kill -9, etc)
- Restart with debug logs and see "Starting stategen" service completes
in a timely manner
- A failure would be logs indicating hundreds or thousands of states being replayed, depending on how long your node was on during step 1.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-27 18:29:32 +00:00
Bharath Vedartham
6558e947ca fix: set default scoring params for proposer preferences (#16585)
Upon starting up prysm, I get the following error message: 
```
[2026-03-25 08:17:20.00] ERROR sync: Could not subscribe topic error=unrecognized topic provided for parameter registration: /eth2/4d21f163/proposer_preferences/ssz_snappy topic=/eth2/4d21f163/proposer_preferences/ssz_snappy
```
which I believe happens because the default peer scoring params for the proposer preferences topic have not been set up. This PR sets the topic params for the proposer preferences topic to be the default block topic params for now.

I rebuilt an image with the fix and ran a kurtosis devnet with the newly
built image. Upon startup, I see the log:
```
[2026-03-25 09:27:46.01]  INFO sync: Subscribed to topic=/eth2/4d21f163/proposer_preferences/ssz_snappy
```
which I think should indicate that the issue has been fixed.


- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-27 17:41:23 +00:00
Barnabas Busa
d8150ac20c fix: add missing ePBS values to /eth/v1/config/spec (#16597)
## Summary

- Add `PTC_SIZE`, `MAX_PAYLOAD_ATTESTATIONS`, `BUILDER_REGISTRY_LIMIT`,
and `BUILDER_PENDING_WITHDRAWALS_LIMIT` as fieldparams-derived constants
exposed via the config spec endpoint.
- Add `MaxPayloadAttestations` constant to fieldparams (mainnet and
minimal configs).
- Update test expectations to include the new fields.

## Test plan

- [x] `TestGetSpec` passes with all 4 new values verified
- [ ] Verify `/eth/v1/config/spec` returns the new values on a running
node

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 16:33:49 +00:00
Potuz
543746d95d Gloas Init sync (#16528)
From the commit comments:
   
 Fetch Payloads along side blocks on init sync

Adds envelopes to the fetchRequestResponse struct which is populated by
    a call to fetchPayloads.

When requesting block batches if the batch is across the Fulu fork it is
    truncated. For a batch that is purely Gloas it requests the
    corresponding payload envelopes by range.

The batch fetcher verifies payloads--blocks consistency, that is that
the full batch could be imported completely into the blockchain without
gaps. If both payloads and blocks are not consistent and were delivered
    by the same peer, we downscore them

    The batch fetcher verifies self-consistency of the payloads, that is
    that the payloads follow the parehthash chain. If they don't we
    downscore the peer that served them.

    It changes the signature of fetchSidecars and fetchBlocksFromPeer to
    modify the response in place.

    Filter processed blocks and payloads

Add round robin changes to the batch processors. When receiving a batch,
    we filter all blocks that were processed. Since the payload fetcher
enforces consistency with payloads, removing all payloads envelopes that
match the removed blocks also keeps a consistent batch. The only problem
may be the first payload that may be processed. The usual finalized slot
check for blocks does not necessarily work for Payloads since we may not
    have processed a payload from a finalized slot. Hence we explicitly
    check the first payload in the batch against our DB.

In addition, for the unfinalized section we do an extra check against
    forkchoice before insertion.

It modifies ReceiveBlockBatch to deal with batches with payloads. It
adds extra safety mechanisms in case the compatibility constraints from
init sync were violated.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 13:37:15 +00:00
Potuz
8e8c990a04 Ignore blocks and data columns based on old blocks. (#16579)
This PR ignores blocks and data column sidecars that are built on top of
parents which are both canonical and before our justified checkpoint.
There are a number of extra conditions that are added to make this
feature bullet proof:

- Only ignore blocks/sidecars that are for the current epoch. This means
that these are blocks/sidecars that are right now trying to reorg every
single block including our justified checkpoint.
- Only ignore blocks/sidecars based on canonical blocks. This means that
these are now starting to create a long fork and aren't building on an
already long fork.

These checks can both be relaxed safely, specially the second one. But
still under normal circumstances these checks as of now should solve all
issues with lagging proposers. The situation with a lagging proposer is
as follows:

- The node is lagging and has as head slot 37 (epoch 1)
- The chain has advanced and has finalized 32, justified 64 and is
currently in slot 97 (epoch 3).
- The lagging node proposes on top of its head at 37.
- The parent root is unfinalized, so it's considered valid to propose on
top of it.
- The dependent root for the head state is that of slot 63 which is
reorged in the lagging node's branch, the dependent root is the parent
itself at slot 37. Thus we can't use head to validate this node and need
to regenerate by processing slots from 37 onwards to epoch 2. Just to
check if the proposer is the right one.

Notice that these checks **will not** ignore blocks and branches that
are synced this way over RPC. In particular, if we get attestations for
these blocks and we request these blocks we will still process them. If
we get a child block for these blocks we will still request the parent
and process them. The attestation path will be patched separately.

Fixes #16549

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 11:26:41 +00:00
terence
3c9eae6064 Fix empty blob KZG commitments in self-build execution payload bids (#16595)
Fix self-build execution payload bids sending empty `BlobKzgCommitments`
instead of the actual commitments from the EL's blobs bundle
2026-03-27 01:29:31 +00:00
Manu NALEPA
c0ee666996 Introduce a new cache that keeps all seen aggregated attestations for at least one epoch (#16370)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
Regarding incoming gossip messages on the `beacon_aggregate_and_proof`
topic, the
[specification](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#beacon_aggregate_and_proof)
states:

1.
> [IGNORE] `aggregate.data.slot` is within the last
`ATTESTATION_PROPAGATION_SLOT_RANGE` slots (with a
`MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- i.e. `aggregate.data.slot
+ ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >=
aggregate.data.slot` (a client MAY queue future aggregates for
processing at the appropriate slot).

2.
> [IGNORE] A valid aggregate attestation defined by
`hash_tree_root(aggregate.data)` whose `aggregation_bits` is a
non-strict superset has not already been seen. (via aggregate gossip,
within a verified block, or through the creation of an equivalent
aggregate locally).

The current Prysm implementation checks `2.` by looking into the
`aggregatedAtt` cache and the `blockAtt` cache.
However, these caches are wiped for needs of other parts of the beacon
node.

==> There is possibly some time where an incoming duplicated (or
superset) message is accepted by `1.` because the attestation is still
timely, but also accepted by `2.`, because a previous identical (or
superset) message was previously here, but is already deleted from
caches.

This PR introduces a new `seenAggregatedAtt` cache, that contains
attestations that were previously in the `aggregatedAtt` cache and the
`blockAtt` cache. Attestations enter in the new `seenAggregatedAtt`
cache when they are deleted from the `aggregatedAtt` cache or from the
`blockAtt` cache.

New incoming aggregations are now, for the rule `2.`, tested against the
`aggregatedAtt` cache, the `blockAtt` cache and (new) the
`seenAggregatedAtt` cache.


**Which issues(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16350

**Other notes for review**
Please read commit by commit.

_Instead of creating a new cache, couldn't we prevent pruning
`aggregatedAtt` and `blockAtt` caches before the end of the epoch?_
These caches need to be pruned before the end of the epoch, for example,
when proposing a block, to prevent packing attestations already present
in another blocks.

_There is already a cache named `seenAtt`, that looks like to be quite
close to the new introduced `aggregatedAttTTL` cache. Why not reusing
it?_
`seenAtt` contains all seen (unaggregated or aggregated) attestations.
To comply with `2.`, we need to memoize only aggregated attestations.

_How can I be sure that (a) the cache size is contained and (b) this
cache is useful?_
You can look at new added metrics. In the following graph, you can see
that (a - green) the size of the cache is regularly pruned. All
aggregated attestations with `slot` older than 1 epoch are pruned. Also
you can see that (b - yellow) there is quite a lot of attestations that
are now IGNOREd, that would have been previously ACCEPTed without this
cache, and so re-gossiped over the P2P network, which is precisely what
we want to avoid.
<img width="1021" height="313" alt="image"
src="https://github.com/user-attachments/assets/72cc5118-5ea4-4479-8313-2c9c7031ce37"
/>


**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-26 23:43:53 +00:00
Barnabas Busa
3e61778d38 Truncate commit hash to 7 chars in /eth/v1/node/version (#16592)
## Summary

- Truncate the git commit hash from full length to 7 characters
(standard short hash) in the `/eth/v1/node/version` API response for
readability.
- Update corresponding test to match the truncated commit format.
- Add changelog entry.

## Test plan

- [x] Existing `TestGetVersion` and `TestGetVersionV2` tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-26 18:55:05 +00:00
james-prysm
0bfe736730 implementing /eth/v2/validator/duties/proposer/{epoch} (#16303)
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**

implements https://github.com/ethereum/beacon-APIs/pull/563

note: similar changes should be done on gRPC probably

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-26 16:00:09 +00:00
satushh
277797f6f8 Replace Kill with graceful stop in e2e (#16578)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

All e2e component Stop() methods use Process.Kill() (SIGKILL), which
terminates processes immediately without allowing them to perform
cleanup (flushing data to disk, closing network connections, releasing
resources). This can potentially lead to corrupted states or flaky e2e
tests.

This PR introduces a helpers.GracefulStop() function that sends SIGTERM
first, giving the process 5 seconds to shut down cleanly, then falls
back to SIGKILL if the process is still alive.
All 8 e2e component Stop() methods are updated to use it. 

This is a correctness fix & not tied to a specific test failure.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-26 15:41:08 +00:00
Potuz
2898a5e8a2 Fix replay blocks (#16482)
This PR fixes any `StateByRoot` possible acess to return correctly the
CL post-state if a block root is asked for and the EL post-state if a
block hash is asked for.

⚠️ This fix includes logical changes on stategen's behavior for example
on `replayBlocks`. `replayBlocks` also added in the end a process slots
call to reach the requested target slot, I removed this and made it part
of the caller to decide if calling it or not. Luckily the two callsites
for this function did not need it.


Replay blocks however applies all blocks that are given passed,
including all needed execution payload envelopes between these blocks.
The function is aware and determines which execution payloads need to be
applied, even if some non-canonical ones are found on the DB. However,
the return state **is always the CL post-state** of the last applied
block. It will never apply the last payload envelope if it exist at that
height. It is the responsibility of the caller to apply this payload.
For that, the helper `applyPayloadIfNeeded` was added.

State replay assumes that the given payloads are valid, in particular it
does not perform state root computations except when processing slots.
We could avoid those computations by a simple refactoring, but I left
this for a future PR. In this PR I do save the state root computation
for payload processing and a few hashtreeroots for the corresponding
block.

The function `loadStateByRoot` is shortcircuited to give the right reply
in a fast way: if the passed root (that could be a hash or a blockroot)
is not found in any cache. Then it tries to find a block with this root,
if there is no block with that root, we assume it is a beacon block hash
and call this function again recursively with the corresponding beacon
block root. All that remains is applying the payload envelope. This way
we are certain that `StateByRoot()` returns the correct post-state if
available.

The main change is in the replayer itself to choose which order of
payloads need to be applied when doing the state transition. We rely on
process slots to deal with the state root computation for missing
payloads. Otherwise we rely on the child payload envelope to deal with
the state root computation when available. We could use the same
technique to avoid the process slots state root computation by inlining
it in the replayer, which will be the case for a future PR.


I fixed `FillInForkchoiceMissingNodes` but I *did not* fix
`onBlockBatch` to be Gloas compatible yet.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 14:09:19 +00:00
Tushar Jain
ca8cc65d72 Add --disable-log-color flag (#16574)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

This PR adds a --disable-log-color boolean flag to both the beacon-chain
and validator binaries. When set, DisableColors=true is passed to the
text formatter, suppressing ANSI codes regardless of whether the output
is a TTY. Default behavior (colors on) is unchanged.
Since 7.1.3, Prysm forces ANSI color codes in text log output
unconditionally via formatter.ForceColors = true in the
prefixed.TextFormatter setup. This causes raw escape sequences (e.g.
^[[32m) to appear when stderr is redirected to a file, pipe, or log
aggregator.

**Which issues(s) does this PR fix?**
Fixes #16570 

**Other notes for review**
**Tested by doing the following**
1. Build both binaries:
     bazel build //cmd/beacon-chain:beacon-chain
bazel build //cmd/validator:validator
   
2. Verify flag appears in help: 
```
 ./bazel-bin/cmd/beacon-chain/beacon-chain_/beacon-chain --help | grep disable-log-color
./bazel-bin/cmd/validator/validator_/validator --help | grep disable-log-color
```
   
3. Confirm colors are present by default when redirected (existing
behavior):
```
 ./bazel-bin/cmd/beacon-chain/beacon-chain_/beacon-chain
./bazel-bin/cmd/validator/validator_/validator
```                               
4. Confirm colors are suppressed with the flag:
```
./bazel-bin/cmd/beacon-chain/beacon-chain_/beacon-chain --disable-log-color
./bazel-bin/cmd/validator/validator_/validator --disable-log-color
```

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2026-03-25 14:36:29 +00:00
terence
445487b4a7 Use highest bid cache to select P2P bid over self-build in proposer (#16577)
- Share the `HighestExecutionPayloadBidCache` between the sync and
proposer layers. The cache is now created in node.go and injected into
both the sync service and the validator RPC server.
- When building a Gloas block, compare the highest P2P bid value against
the local EL block value and use the highest with respect to self build
flag
  - Respect OverrideBuilder and skipMevBoost flags in the Gloas path
- Skip caching the self-build execution payload envelope when a remote
P2P bid wins. The winning builder is responsible for producing the
envelope, caching a self-build envelope
2026-03-24 21:48:41 +00:00
james-prysm
86edeef90f adding fallback to origin block root on previous dependent root is empty (#16576)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Adding fall back to the previous dependent root when it's equal to
empty. This fixes an issue for certain validator client architecture
comparing attester dependent root to head event for reorg purposes.
tested against prysm and lode star, lodestar will have issues currently
when it's post fulu.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-24 21:38:34 +00:00
Preston Van Loon
bfbca75862 Hdiff restart support (#16389)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Prior to this PR, beacon nodes could not resume sync from an hdiff
database. This PR introduces cache re-population to enable successful
restarts of the beacon node when using hdiff.

**Which issues(s) does this PR fix?**

**Other notes for review**

I am syncing from mainnet genesis with this feature. So far, it's synced
nearly 6M slots and I have restarted it multiple times. I've also been
doing some manual testing of state fetch via the beacon API. This
testing includes random sampling of states, performing HTR, and
verifying this root against the appropriate block header.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2026-03-24 16:12:58 +00:00
Alleysira
1f72a1428c Fix/invert execution payment validation (#16565)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Fixes #16562 

**Other notes for review**

All tests pass:

```bash
# Verification package
go test -run "TestBidVerifier" ./beacon-chain/verification/ -v -count=1
# PASS (3.7s)

# Sync package
go test -run "TestValidateExecutionPayloadBid" ./beacon-chain/sync/ -v -count=1
# PASS (1.5s)
```

Drafted with the help of Claude :)

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: terence <terence@prysmaticlabs.com>
2026-03-24 04:01:03 +00:00
james-prysm
101dd55710 gloas validator client proposer preferences (#16548)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

adds validator client call to proposer preferences. adds todo comments
for when we fully replace prepare beacon proposer endpoint with proposer
preferences.

**Which issues(s) does this PR fix?**
partially addresses https://github.com/OffchainLabs/prysm/issues/16545

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-23 21:54:36 +00:00
terence
7d797ee4f9 Fix payload attributes to use expected withdrawals from state when empty (#16566)
- When the parent block is empty in Gloas, `process_withdrawals` returns
early and does not update `payload_expected_withdrawals` in the beacon
state. However, the proposer and FCU attribute builders were always
computing fresh withdrawals via `ExpectedWithdrawals()`, causing a
mismatch when the execution payload envelope is verified
against`state.payload_expected_withdrawals`.
- We adds a `PayloadWithdrawals()` state getter that branches on
`IsParentBlockFull()`: computes fresh withdrawals via
`ExpectedWithdrawalsGloas()` when full, returns the existing
`PayloadExpectedWithdrawals()` when empty.
- Patches three call sites: `proposer_execution_payload.go`,
`execution_engine.go`, and `gloas.go`.
2026-03-23 18:07:51 +00:00
terence
45e38d430f Fix flaky TestVerifyConnectivity by using local TCP listener (#16575)
- Replace hardcoded external Google IP (`142.250.68.46:80`) in
`TestVerifyConnectivity` with a local TCP listener
  - The Google IP became unreachable, causing CI flakes across all PRs
  - Local listener is deterministic and has no external dependency
2026-03-23 16:50:17 +00:00
james-prysm
0a643b177d adding proposer duties to duties v2 (#16564)
**What type of PR is this?**

 Feature


**What does this PR do? Why is it needed?**

proposer lookahead was introduced in fulu but we didn't use it for our
get duties endpoint. proposer preferences explicitly needs to know the
proposers in the next epoch, so this pr will add the required duties
used for proposer preferences.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2026-03-23 15:42:17 +00:00
Bastin
9ea9e1f07c Fix logging level bug for non text formats (#16567)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
There is a bug that makes non-text format logs not respect the
`--verbosity` flag. This PR introduces a fix for `json`, `fluentd`, and
`journald` log formats.

**How to test:**
if you run the beacon node without this PR using `--log-format
json|fluentd|journald` paired with `--verbosity info` you see that debug
logs are printing. then run with this PR to see that it respects the
verbosity flag. you can test with different verbosity levels.
you can see journald logs using `journalctl -f`.

**Note for reviewer:**
each commit is separate and adds a fix for a format. both for
beacon-chain and validator.

This doesn't change the behavior of the ephemeral log file. that is
always debug, and always text format.
2026-03-22 11:31:56 +00:00
Potuz
8fb4d85bbd Record Payload ID in the cache on late blocks (#16563)
h/t to @terencechain for finding this.
2026-03-21 16:39:22 +00:00
james-prysm
259f526c8d split distributed validator components via an abstracted aggregator selector (#16509)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

- PR attempts to fix #16362 with the recommendation to assume dvts are
aggregates to not block

- refactors code to separate dvt logic from normal logic for selectors
code

uses https://github.com/ObolNetwork/kurtosis-charon/pull/76 for testing,
uses kurtosis behind the scene

**Which issues(s) does this PR fix?**

Fixes #16362

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-20 18:58:37 +00:00
satushh
77b5a7a5b3 Preallocate validatorKeys slice in insertValidatorHashes (#16558)
**What does this PR do? Why is it needed?**

Pre-allocate validatorKeys slice in insertValidatorHashes to avoid
repeated reallocations during migration
2026-03-20 15:05:59 +00:00
terence
fb9d9d93de Fix forkchoice to update node weights on balance changes (#16560)
- Fix `updateBalances` to also recompute node weights when a validator's
balance changes, not only when their vote slot changes.
- Previously, if `oldBalance != newBalance` but `currentSlot ==
nextSlot`, the balance delta was silently skipped, leading to incorrect
fork choice weights.
2026-03-20 14:29:48 +00:00
terence
7781e40abf Fix forkchoice safe/finalized hash for reorged payloads (#16556)
In ePBS, a checkpoint finalizes a beacon block root, not a payload. We
should be using parent payload hash for the given block root as a
checkpoint finalizes a beacon block root, not a payload

  This was observed repeatedly on the ePBS devnet (prysm-geth-3)

  **Timeline (episode 3, block 92173):**
- `16:46:23` — Slot 101472 (epoch 3171, slotInEpoch=0) synced,
blockHash=`0x0bd75d`, parentHash=`0xf53040`. Geth imports as EL block
92173.
- `16:46:35` — Slot 101473 (slotInEpoch=1) synced, blockHash=`0x800117`,
parentHash=`0xf53040` (same parent). Geth reorgs: drops `0x0bd75d`,
canonical is now `0x800117`.
- `16:50:59` — CL justified checkpoint advances. `latestHashForRoot`
returns `0x0bd75d` (the reorged-out hash) as `safeBlockHash`.
- `16:50:59+` — Every `forkchoiceUpdated` fails: `"Invalid forkchoice
state: safe block not in canonical chain"`. Node cannot propose blocks.

**Fix:** Rename `latestHashForRoot` → `latestParentHashForRoot` and
always return
  the parent hash
2026-03-20 03:22:39 +00:00
terence
e77724401d Fix envelopes-by-range to only serve canonical payloads backward (#16553)
- The `ExecutionPayloadEnvelopesByRange` handler previously served any
envelope found in the DB for canonical blocks, without verifying the
payload was actually included (full vs empty). This could serve
empty/withheld payloads as if they were canonical.
- Fix by walking the `ParentBlockHash` chain backward from a successor
block: a child's `bid.ParentBlockHash` determines which parent payload
it built on, so only envelopes reachable through this chain are
canonical.
- Add `parent_block_hash` field to `BlindedExecutionPayloadEnvelope`
proto so the walk can follow the chain using only blinded envelopes from
the DB, without loading full beacon blocks.
  - Handle stale `BlockHash → Root` index look up in the DB.

---------

Co-authored-by: satushh <satushh@gmail.com>
2026-03-19 22:54:50 +00:00
terence
f43ba7851c p2p: add execution payload bid gossip topic and verification (#16539)
This PR implements the `execution_payload_bid` gossip topic from the
Gloas spec. Builders broadcast signed bids for future execution
payloads, validated against proposer

- Gossip topic registration, mapping, and subscriber for
`execution_payload_bid`
- Verification: current/next slot, builder active, payment non-zero,
parent block root seen, parent block hash match, builder balance,
signature
- `HighestExecutionPayloadBidCache` keyed by (slot, parent_hash,
parent_root) with pruning
  - Per-builder seen cache for dedup
  - `BlockHash` on `ForkchoiceFetcher` interface
  - Export `ValidatePayloadBidSignature` for verifier reuse

  > **Note:** This is PR 3 of 3 in a stack. Please review in order:
  > 1. Proposer preferences P2P
  > 2. Proposer preferences RPC endpoint
  > 3. **This PR** — execution payload bid P2P
2026-03-19 21:45:39 +00:00
Preston Van Loon
dfc5bbef7f V7.1.3 changelog (#16555)
**What type of PR is this?**

Documentation


**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Changelog for v7.1.3

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-19 21:18:13 +00:00
Barnabas Busa
6430e27257 Include commit hash in /eth/v1/node/version (#16541)
## Summary
- Adds the git commit hash to the `/eth/v1/node/version` response string
- Changes format from `Prysm/<tag> (<os> <arch>)` to
`Prysm/<tag>-<commit> (<os> <arch>)`
- Makes it easy to distinguish nodes running different commits

## Test plan
- [x] Verify `TestGetVersion` passes with updated assertion

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-19 20:13:52 +00:00
terence
b141b5ccd2 sync: reject index-1 atts when payload is unseen (#16559)
This PR Implements the gossip validation rules from consensus-specs
[#4939](https://github.com/ethereum/consensus-specs/pull/4939) That is:

- [REJECT] attestations with `CommitteeIndex == 1` (payload-present
vote) when the execution payload for the attested block is known invalid
- [IGNORE] attestations with `CommitteeIndex == 1` when the execution
payload has not been seen, and request the payload envelope via
`ExecutionPayloadEnvelopesByRoot`
2026-03-19 19:13:20 +00:00
james-prysm
696a08f3b9 changing grpc proposer preferences to take array instead (#16554)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

It makes more sense for this endpoint to take an array instead, this was
an oversight from review. this is a breaking change but should be
acceptable as we havent released yet

**Which issues(s) does this PR fix?**

Fixes  https://github.com/OffchainLabs/prysm/pull/16538

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-19 13:37:23 +00:00
Manu NALEPA
65d428db58 Refactor state retrieval to use ReadOnly methods when possible, avoiding unnecessary copies. (#16511)
**What type of PR is this?**
Optimization (@nisdas)

**What does this PR do? Why is it needed?**
This PR:
- Defines a new `StateManager.StateByRootNoCopy` to retrieve a state
**without** copying it.
(This function returns a `state.ReadOnlyBeaconState` to prevent the
caller modifying it.)
- Changes the signature of functions using `state.BeaconState` to
`state.ReadOnlyBeaconState` when possible
(principle of least privilege)
- Modify the `StateManager.StateByRootIfCachedNoCopy` function to also
check in the epoch boundary cache.

> [!TIP]
> Please read commit by commit 

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-18 21:23:54 +00:00
satushh
a6e669d8bc Correct log in VerifyBlobKZGProofBatch (#16552)
Print correct length log in VerifyBlobKZGProofBatch.
2026-03-18 05:55:37 +00:00
terence
605ab1c7ac rpc: add SubmitSignedProposerPreferences gRPC endpoint (#16538)
This pr adds the validator-facing gRPC endpoint for submitting signed
proposer preferences. Validators call this to broadcast their preferred
fee_recipient and gas_limit for a future proposal slot. The handler
validates the epoch, broadcasts via P2P, and caches preferences for
downstream bid validation.

  - Proto: `SubmitSignedProposerPreferences` RPC in `validator.proto`
  - RPC handler with epoch and sync validation, P2P broadcast
  - Local submissions bypass full gossip verification (trusted VC)

  > **Note:** This is PR 2 of 3 in a stack. Please review in order:
  > 1. Proposer preferences P2P
  > 2. **This PR** — proposer preferences RPC endpoint
  > 3. Execution payload bid P2P
2026-03-17 16:14:49 +00:00
satushh
93f7214b32 Voluntary exit gloas (#16527)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

This PR implements voluntary exit specific logic for gloas builder,
specifically the following ones:

-
[get_pending_balance_to_withdraw_for_builder](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#new-get_pending_balance_to_withdraw_for_builder)
-
[process_voluntary_exit](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#modified-process_voluntary_exit)
-
[initiate_builder_exit](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#new-initiate_builder_exit)
-
[MIN_BUILDER_WITHDRAWABILITY_DELAY](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#time-parameters)

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2026-03-17 15:33:09 +00:00
Manu NALEPA
1bffcc84f4 Beacon state: Replace *ethpb.Validator by CompactValidator (#16535)
**What type of PR is this?**
Optimisation (@nisdas)

**What does this PR do? Why is it needed?**
The beacon state contains the list of all validators that have been at
least once deposited on the network.

| Network  | Active       | Total                | Ratio      |
|----------|-----------|---------------|---------|
| Hoodi      | 1,224,855 | **1,294,521**  | 94.1%   |
| Mainnet  | 948,883   | **2,229,313** | 43.56% |

Currently, the validators are stored in the beacon state like this:
```go
type BeaconState struct {
...
	validatorsMultiValue                *MultiValueValidators
...
}

type MultiValueValidators = multi_value_slice.Slice[*ethpb.Validator]

type Validator struct {
	state                      protoimpl.MessageState                                            `protogen:"open.v1"`
	PublicKey                  []byte                                                            `protobuf:"bytes,1,opt,name=public_key,json=publicKey,proto3" json:"public_key,omitempty" spec-name:"pubkey" ssz-size:"48"`
	WithdrawalCredentials      []byte                                                            `protobuf:"bytes,2,opt,name=withdrawal_credentials,json=withdrawalCredentials,proto3" json:"withdrawal_credentials,omitempty" ssz-size:"32"`
	EffectiveBalance           uint64                                                            `protobuf:"varint,3,opt,name=effective_balance,json=effectiveBalance,proto3" json:"effective_balance,omitempty"`
	Slashed                    bool                                                              `protobuf:"varint,4,opt,name=slashed,proto3" json:"slashed,omitempty"`
	ActivationEligibilityEpoch github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch `protobuf:"varint,5,opt,name=activation_eligibility_epoch,json=activationEligibilityEpoch,proto3" json:"activation_eligibility_epoch,omitempty" cast-type:"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"`
	ActivationEpoch            github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch `protobuf:"varint,6,opt,name=activation_epoch,json=activationEpoch,proto3" json:"activation_epoch,omitempty" cast-type:"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"`
	ExitEpoch                  github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch `protobuf:"varint,7,opt,name=exit_epoch,json=exitEpoch,proto3" json:"exit_epoch,omitempty" cast-type:"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"`
	WithdrawableEpoch          github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch `protobuf:"varint,8,opt,name=withdrawable_epoch,json=withdrawableEpoch,proto3" json:"withdrawable_epoch,omitempty" cast-type:"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"`
	unknownFields              protoimpl.UnknownFields
	sizeCache                  protoimpl.SizeCache
}
```

Some fields, used only by protobuf (`state`, `unknownFields`,
`sizeCache`) are not used by the beacon node.
Some other fields, stored as slices (`PublicKey`,
`WithdrawalCredentials`) need extra memory (`ptr+len+cap`) while they
could be stored as arrays because their sizes are fixed and known in
advance.

We define a new custom type called `CompactValidator`, which is a
fixed-size, pointer-free representation of a validator:

```go
type CompactValidator struct {
	PublicKey                  [fieldparams.BLSPubkeyLength]byte // 48 bytes
	WithdrawalCredentials      [32]byte                          // 32 bytes
	EffectiveBalance           uint64                            // 8 bytes
	Slashed                    bool                              // 1 byte
	ActivationEligibilityEpoch primitives.Epoch                  // 8 bytes
	ActivationEpoch            primitives.Epoch                  // 8 bytes
	ExitEpoch                  primitives.Epoch                  // 8 bytes
	WithdrawableEpoch          primitives.Epoch                  // 8 bytes
}
```

Here is the comparison, per validator, between storing a
`*ethpb.Validator` and a `CompactValidator`.

| Component | `*ethpb.Validator` | `CompactValidator` | Wasted |

|------------------------------------------|--------------------|--------------------|---------|
| Pointer to struct | 8 B | 0 B | 8 B |
| `state` (protoimpl.MessageState) | 8 B | 0 B | 8 B |
| `PublicKey` slice header (ptr+len+cap) | 24 B | 0 B | 24 B |
| `PublicKey` backing array (heap) | 48 B | 48 B (inline) | 0 B |
| `PublicKey` malloc overhead | ~16 B | 0 B | ~16 B |
| `WithdrawalCredentials` slice header | 24 B | 0 B | 24 B |
| `WithdrawalCredentials` backing array | 32 B | 32 B (inline) | 0 B |
| `WithdrawalCredentials` malloc overhead | ~16 B | 0 B | ~16 B |
| `EffectiveBalance` (uint64) | 8 B | 8 B | 0 B |
| `Slashed` (bool + padding) | 8 B | 8 B | 0 B |
| `ActivationEligibilityEpoch` | 8 B | 8 B | 0 B |
| `ActivationEpoch` | 8 B | 8 B | 0 B |
| `ExitEpoch` | 8 B | 8 B | 0 B |
| `WithdrawableEpoch` | 8 B | 8 B | 0 B |
| `unknownFields` ([]byte header) | 24 B | 0 B | 24 B |
| `sizeCache` (int32 + padding) | 8 B | 0 B | 8 B |
| Struct malloc overhead | ~16 B | 0 B | ~16 B |
| **Total** | **~264 B** | **128 B** | **~136 B (52%)** |

We waste `~136 B` per validator.
With a total of `2,229,313` validators (mainnet), it represents `~290
MB`.
Storing `CompactValidator` instead of `*ethpb.Validator` also reduces
the garbage collection pressure.

The following graph represents, for a Hoodi supernode (200 validators
managed):
1. The heap memory usage (`go_memstats_alloc_bytes`) without this PR
(`ref`) and with this PR (`current`)
2. `1.`, but averaged over the latest four hours.
3. The diff of the graphs in `2`. (The bigger, the better)

> [!NOTE]
> The improvement is, after stabilisation, **~300MB-450MB**,
representing **~8%-11%**.
>
> <img width="1028" height="917" alt="image"
src="https://github.com/user-attachments/assets/9179d9b1-bc50-4248-9f47-82a7646d58ab"
/>
>
> For some reasons, the gain is higher than initialy expected.

**For the reviewers:**
This PR should be reviewed commit by commit:
Commits 1 and 2 are independent and unrelated to this PR
Commit 3 is an oversight of:
- https://github.com/OffchainLabs/prysm/pull/16420

Commit 4 implements the `CompactValidator`
Commit 5 uses the `CompactValidator`. The key change is:
```go
// MultiValueValidators is a multi-value slice of compact validators.
type MultiValueValidators = multi_value_slice.Slice[stateutil.CompactValidator]
```

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2026-03-17 13:38:58 +00:00
terence
7728ad4aa2 p2p: add signed proposer preferences gossip topic and verification (#16537)
This PR Implements the `signed_proposer_preferences` gossip topic from
the Gloas spec. Proposers broadcast their `fee_recipient` and `gas_limit
preferences` for upcoming slots, which are cached for downstream bid
validation.

- Gossip topic registration, mapping, and subscriber for
`proposer_preferences`
- Verification: next-epoch check, valid proposal slot, signature
validation
- Slot-keyed `ProposerPreferencesCache` with pruning in the fork watcher
  - Protobuf `ProposerPreferences` and `SignedProposerPreferences`

  > **Note:** This is PR 1 of 3 in a stack. Please review in order:
  > 1. **This PR** — proposer preferences P2P
  > 2. Proposer preferences RPC endpoint
  > 3. Execution payload bid P2P
2026-03-17 00:12:37 +00:00
terence
1362654669 proposer: use correct access root to get pre state (#16496) 2026-03-16 18:06:31 +00:00
dkutzmarks-rgb
3cec3997f8 ci: add SBOM export workflow (#16531)
## Summary
- Adds CycloneDX SBOM generation via cdxgen and uploads to Dependency
Track
- Runs on push to the default branch and weekly (randomized schedule)

## Details
- SBOM format: CycloneDX 1.6 (required by Dependency Track)
- Generator: cdxgen v12.1.1 (Docker image)
- Runner: `ubuntu-latest`
- Skips SBOM generation if no commits in the last 7 days

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2026-03-16 15:46:59 +00:00
terence
1236519810 fix: guard against fork boundary IsParentBlockFull false positive (#16510)
`UpgradeToGloas` initializes both `latestExecutionPayloadBid.BlockHash`
and `latestBlockHash` to the last Fulu EL hash. If the first Gloas block
(or any subsequent block in a run of empty-variant slots) has no bid,
`process_execution_payload_bid` never runs, `bid.Slot` stays `0`, and
the two fields remain equal

`IsParentBlockFull() = true`. `saveHead` and `lateBlockTasks` both used
this result to pick EL block hash as the state access key, but the state
was only ever saved under the beacon block root
2026-03-16 14:55:40 +00:00
Manu NALEPA
36052ed1bb Reduce log noise by returning nil error when ignoring already-seen data column sidecars during gossip validation (#16536)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Starting at:
- https://github.com/OffchainLabs/prysm/pull/16515

For a super node, the
> data column sidecar already seen

log takes almost **50%** of the whole debug log volume.

The reason is, previously, if a given data column sidecar was already
seen, the message was ignored with the corresponding `err == nil`.
Starting at this PR, the message is still ignored but `err != nil`.
The direct consequence is the corresponding error is printed on the
console.

This present pull request sets back to `nil` the corresponding error.
The rationale is receiving multiple times the same sidecar from
different peers is an expected and normal behavior.
==> We don't need to consider this event as an error.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-14 22:42:59 +00:00
terence
458d4ebe54 Use read only validator accessor in IsPayloadTimelyCommittee (#16530)
This PR replaces `ValidatorAtIndex` with `ValidatorAtIndexReadOnly` in
`IsPayloadTimelyCommittee` to avoid copying the entire validator on each
call

CPU and heap profiling of ePBS devnet nodes shows `PayloadCommittee` →
`CopyValidator` as the single largest hotspot, consuming ~18-20% of CPU
time across all servers. `IsPayloadTimelyCommittee` calls
`ValidatorAtIndex` in a loop over candidate committee members, and each
call deep-copies the full validator struct (~1 KB) only to read
`EffectiveBalance`

`ValidatorAtIndexReadOnly` returns a read-only reference to the
underlying validator, eliminating the copy. The only field accessed is
`EffectiveBalance()`, which is available on the read-only interface
2026-03-13 20:23:10 +00:00
james-prysm
8680f3f8bb adding grpc endpoints for attester, proposer, sync duties, and ptc duties (#16416)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

~~DEPENDS ON https://github.com/OffchainLabs/prysm/pull/16402~~

Adding grpc endpoints for attester, proposer, and sync duties. This pr
doesn't utilize the apis.
In a future pr we will include a transition from using duties v2
endpoint to the split duties endpoints for the fork.

TESTING

Both the GetDutiesV2 endpoint and the new ones added (
GetAttesterDuties, GetProposerDutiesV2, GetSyncCommitteeDuties,
GetPTCDuties) use the same underlying helper function under
coreservice.duty function, so all core duties should be the same. Note
that the new gRPC endpoints use validator indices instead of pubkeys to
match REST api endpoints.

Using grpcurl (install with brew install grpcurl):
```
  # GetDutiesV2 - composite endpoint
  # Note: public_keys are base64-encoded BLS pubkeys
  grpcurl -plaintext -d '{
    "epoch": '$EPOCH',
    "public_keys": ["<base64_pubkey_1>", "<base64_pubkey_2>"]
  }' localhost:4000
  ethereum.eth.v1alpha1.BeaconNodeValidator/GetDutiesV2
```

```
  # GetAttesterDuties
  grpcurl -plaintext -d '{
    "epoch": '$EPOCH',
    "validator_indices": [0, 1, 2, 3, 4]
  }' localhost:4000
  ethereum.eth.v1alpha1.BeaconNodeValidator/GetAttesterDuties
```

```
  # GetProposerDutiesV2
  grpcurl -plaintext -d '{
    "epoch": '$EPOCH'
  }' localhost:4000
  ethereum.eth.v1alpha1.BeaconNodeValidator/GetProposerDutiesV2
```

```
  # GetSyncCommitteeDuties
  grpcurl -plaintext -d '{
    "epoch": '$EPOCH',
    "validator_indices": [0, 1, 2, 3, 4]
  }' localhost:4000 ethereum.eth.v1alpha1.BeaconNodeValidator/Get
  SyncCommitteeDuties
```

```
  # GetPTCDuties (Gloas+ only)
  grpcurl -plaintext -d '{
    "epoch": '$EPOCH',
    "validator_indices": [0, 1, 2, 3, 4]
  }' localhost:4000
  ethereum.eth.v1alpha1.BeaconNodeValidator/GetPTCDuties
```

  Verification Checklist

  1. Attester duty data match: For each validator in
  GetDutiesV2.CurrentEpochDuties, confirm attester_slot,
  committee_index, committee_length, committees_at_slot,
  validator_committee_index match the corresponding AttesterDuty
  from GetAttesterDuties

  2. Proposer duty data match: For each validator with non-empty
  proposer_slots in GetDutiesV2, confirm those slots appear in
  GetProposerDutiesV2 response for that validator index
  3. Sync committee flag match: For each validator where
  is_sync_committee=true in GetDutiesV2, confirm that validator
  appears in GetSyncCommitteeDuties response

  4. PTC duty data match: For each validator with non-empty
  ptc_slots in GetDutiesV2, confirm those slots appear in
  GetPTCDuties response for that validator index

  5. Dependent root — attester:
  GetDutiesV2.PreviousDutyDependentRoot ==
  GetAttesterDuties.dependent_root (for the same epoch)

  6. Dependent root — proposer pre-Fulu:
  GetDutiesV2.CurrentDutyDependentRoot ==
  GetProposerDutiesV2.dependent_root

7. Dependent root — proposer post-Fulu: ( DutiesV2 still has a bug that
doesn't take into account the proposer look ahead)
  GetDutiesV2.CurrentDutyDependentRoot ≠
  GetProposerDutiesV2.dependent_root. The V2 value should match
  GetAttesterDuties.dependent_root instead (both use (E-1)_start - 1)
  
  8. Dependent root — PTC: GetPTCDuties.dependent_root ==
  GetAttesterDuties.dependent_root (both use
  AttestationDependentRoot)

  9. Next epoch: Repeat checks 1-4 using
  GetDutiesV2.NextEpochDuties vs individual endpoints queried
  with epoch+1

  10. Edge cases: Epoch 0, epoch 1, sync committee period
  boundary, pre-Gloas epoch for PTC (should error)


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-13 19:36:21 +00:00
terence
416c49e6d5 metrics: add initial gloas metric (#16519)
This PR adds an initial gloas metric

**Forkchoice**
- Add `forkchoice_payload_inserted_count`
- Add `forkchoice_payload_empty_node_count`
- Add `forkchoice_payload_full_node_count`
- Add `forkchoice_ptc_vote_count`

These metrics show how forkchoice evolves under Gloas:
- how many payload nodes are inserted
- how many currently tracked nodes represent empty payloads vs full
payloads
- how many PTC votes forkchoice receives

**Blockchain**
- Add `beacon_execution_payload_envelope_valid_total`
- Add `beacon_execution_payload_envelope_invalid_total`
- Add `beacon_execution_payload_envelope_processing_duration_seconds`
- Add `beacon_late_payload_task_triggered_total`

These metrics track the beacon node’s envelope handling:
- whether received execution payload envelopes are accepted or rejected
- how long envelope processing takes end-to-end
- how often the late payload task fires during the slot

**Sync**
- Add `sync_payload_attestation_arrival_delay_seconds`
- Add `sync_execution_payload_envelope_arrival_delay_seconds`
- Add `sync_payload_envelope_by_range_served_total`
- Add `sync_payload_envelope_by_root_served_total`
- Add `gloas_execution_payload_envelopes_rpc_requests_total{rpc,result}`

These metrics focus on network timing and RPC serving:
- how long after slot start payload attestations and payload envelopes
arrive over gossip
- how often payload envelope RPC requests are served
- how payload-envelope RPC requests break down by method and result

For gossip validation, only arrival delay remains:
- payload attestation
- execution payload envelope
- data column sidecar continues to use the existing generic arrival
metric

Custom sync-side gossip result counters were removed because libp2p
already accounts for validation outcomes.

**Payload Attestation Pool**
- Add `payload_attestation_pool_size`

This metric shows the current size of the payload attestation pool,
which is useful for seeing whether attestations are being accumulated or
drained as expected.

**Validator RPC**
- Add `gloas_payload_id_cache_total{result}`

This metric shows Gloas proposer payload ID cache behavior:
- cache hits
- cache misses

That helps explain whether proposers are reusing cached EL payload IDs
or falling back to fresh forkchoice updates.

**Validator Client**
- Add `validator_payload_attestation_submission_total{result}`
- Add `validator_self_build_envelope_submission_total{result}`

These metrics show validator-side Gloas behavior:
- whether payload attestation submissions succeed or fail
- whether self-build envelope submissions succeed or fail

**Core Gloas**
- Add `gloas_builder_pending_payments_processed_total`
- Add `gloas_builder_deposits_processed_total`

These metrics show builder-economics state transitions:
- how many pending builder payments are promoted into pending
withdrawals
- how many builder-related deposit requests are processed

**State Native**
- Add `gloas_execution_payload_availability_ratio`
- Add `gloas_builder_pending_withdrawals_count`
- Add `gloas_builder_pending_withdrawals_gwei`
- Add `gloas_payload_expected_withdrawals_count`
- Add `gloas_active_builders_count`
- Add `gloas_active_builders_balance_gwei`

These metrics expose the current Gloas state snapshot:
- how much of the payload availability window is marked available
- how many builder pending withdrawals exist and their total value
- how many expected withdrawals are cached for the next payload
- how many builders are currently active and their aggregate balance

These are emitted from `RecordStateMetrics()`
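
For orientation, counters like these are typically declared with the
Prometheus Go client along the lines of the sketch below. The metric names
are taken from the list above, but the package layout and registration
details in Prysm may differ.

```go
// Illustrative declarations using the standard Prometheus Go client.
package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// Plain counter, incremented once per payload node inserted.
	payloadInsertedCount = promauto.NewCounter(prometheus.CounterOpts{
		Name: "forkchoice_payload_inserted_count",
		Help: "Number of payload nodes inserted into forkchoice.",
	})

	// Labeled counter, broken down by RPC method and result.
	envelopeRPCRequests = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "gloas_execution_payload_envelopes_rpc_requests_total",
		Help: "Payload envelope RPC requests by method and result.",
	}, []string{"rpc", "result"})
)
```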
2026-03-13 16:07:01 +00:00
terence
6605dfbd50 forkchoice: fix forkchoice balance underflow when att slot change (#16520)
In ePBS, `resolveVoteNode` routes validator balances to two different
accumulators based on the attestation slot:

When a validator re-attests for the **same block** in a **new epoch**
(assigned to a different slot), the root and payload status are
unchanged. The trigger condition:

  ```go
  if vote.currentRoot != vote.nextRoot || oldBalance != newBalance ||
      vote.currentPayloadStatus != vote.nextPayloadStatus {
  ```

  ...does not fire, so the balance is never moved. But the vote rotation
  silently updates currentSlot to the new value. On the next vote change,
  the subtraction targets the wrong accumulator (which has balance=0),
  causing the underflow.

  Examples:
  1. Epoch 2: Validator attests at slot 95 for block B (slot 95).
  pending = (95 == 95) = true → 32 ETH added to `Node.balance`
  2. Epoch 3: Validator re-attests at slot 96 for the same block B. The
  target accumulator went from pending to empty, but the vote is not
  reprocessed, and currentSlot silently rotated to 96.
  3. Epoch 4: Validator attests for block C. The subtraction targets B,
  which is no longer pending but has zero balance, triggering the
  underflow.
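
One plausible shape of the fix, as a hedged sketch (the merged patch may
differ): also fire the reprocessing when only the attestation slot rotates,
so the balance moves to the accumulator the next subtraction will target.
Types and field names below mirror the snippet above but are illustrative.

```go
type vote struct {
	currentRoot, nextRoot                   [32]byte
	currentSlot, nextSlot                   uint64
	currentPayloadStatus, nextPayloadStatus uint8
}

// applyVoteRotation extends the trigger so a slot-only rotation still
// moves the balance before currentSlot is rotated.
func applyVoteRotation(v *vote, oldBalance, newBalance uint64) bool {
	slotRotated := v.currentSlot != v.nextSlot
	changed := v.currentRoot != v.nextRoot ||
		oldBalance != newBalance ||
		v.currentPayloadStatus != v.nextPayloadStatus ||
		slotRotated
	if changed {
		// Move the balance out of the accumulator keyed by the old
		// (root, slot, status) and into the one keyed by the new values.
	}
	v.currentSlot = v.nextSlot
	return changed
}
```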
2026-03-13 15:32:25 +00:00
Manu NALEPA
84993fdd68 Fix TestProcessPendingDepositsMultiplesSameDeposits to properly test that duplicate pending deposits for the same key top up a single validator instead of creating duplicates. (#16529)
**What type of PR is this?**
Bug fix (in tests)

**What does this PR do? Why is it needed?**
Fix `TestProcessPendingDepositsMultiplesSameDeposits` to properly test
that duplicate pending deposits for the same key top up a single
validator instead of creating duplicates.

Previously, this test used 2 validators with the same pubkey.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-13 15:01:19 +00:00
james-prysm
1e916418f2 changed /eth/v1/beacon/execution_payload_envelope/{block_root} to /eth/v1/beacon/execution_payload_envelope/{block_id} defined in beacon apis (#16521)
**What type of PR is this?**

 Bug fix

**What does this PR do? Why is it needed?**

missed this in review

we should expose /eth/v1/beacon/execution_payload_envelope/{block_id}, as
defined in the beacon APIs, instead of
/eth/v1/beacon/execution_payload_envelope/{block_root}


https://github.com/ethereum/beacon-APIs/blob/master/apis/beacon/execution_payload/envelope_get.yaml

missed in https://github.com/OffchainLabs/prysm/pull/16441

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-13 14:12:42 +00:00
terence
8d5d584cf8 sync,verification: add Gloas data column gossip validation path (#16515)
This PR adds a dedicated Gloas `data_column_sidecar` gossip validation
path and aligns it with the EIP-7732 spec.

The validation flow is now split cleanly:
- Fulu stays in the existing `validate_data_column` path
- Gloas moves into `validate_data_column_gloas`
- Gloas-specific checks live in a dedicated `GloasDataColumnVerifier`

**What Changed**
- Add a separate Gloas data column gossip validator in sync
- Add `GloasDataColumnVerifier` in `verification`
- Validate Gloas sidecars against
`block.body.signed_execution_payload_bid.message.blob_kzg_commitments`
- Refactor KZG proof verification to accept commitments explicitly
instead of rewriting the sidecar
- Use Gloas dedupe semantics: `(beacon_block_root, index)`
- Add spec-mapped comments next to the Gloas checks
- Add a TODO for deferred queue/revalidation when the block has not yet
been seen

**Spec Mapping**

The Gloas path now follows this order (a condensed sketch follows the list):
1. `IGNORE` if the block for `sidecar.beacon_block_root` has not been
seen
2. `REJECT` if `sidecar.slot != block.slot`
3. `REJECT` if `verify_data_column_sidecar(sidecar,
bid.blob_kzg_commitments)` fails
4. `REJECT` if the sidecar is on the wrong subnet
5. `REJECT` if `verify_data_column_sidecar_kzg_proofs(sidecar,
bid.blob_kzg_commitments)` fails
6. `IGNORE` if `(sidecar.beacon_block_root, sidecar.index)` has already
been seen
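
A condensed sketch of that ordering is below. It shows only the IGNORE/REJECT
control flow; the stub types and callback parameters are placeholders for the
spec checks, not Prysm's verification API.

```go
package sketch

type result int

const (
	accept result = iota
	ignore
	reject
)

type sidecar struct {
	beaconBlockRoot [32]byte
	slot            uint64
	index           uint64
}

type block struct {
	slot        uint64
	commitments [][]byte // bid.blob_kzg_commitments
}

func validateGloasColumn(
	sc sidecar,
	lookupBlock func([32]byte) (block, bool),
	onSubnet func(sidecar) bool,
	verifySidecar, verifyProofs func(sidecar, [][]byte) bool,
	seen func([32]byte, uint64) bool,
) result {
	b, ok := lookupBlock(sc.beaconBlockRoot)
	if !ok {
		return ignore // 1: block not seen yet
	}
	if sc.slot != b.slot {
		return reject // 2: slot mismatch
	}
	if !verifySidecar(sc, b.commitments) {
		return reject // 3: sidecar check against bid commitments
	}
	if !onSubnet(sc) {
		return reject // 4: wrong subnet
	}
	if !verifyProofs(sc, b.commitments) {
		return reject // 5: KZG proof check
	}
	if seen(sc.beaconBlockRoot, sc.index) {
		return ignore // 6: Gloas dedupe key is (root, index)
	}
	return accept
}
```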

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-03-12 22:50:26 +00:00
satushh
598509ffa8 Remove next epoch lookahead from PTC duties (#16517)
**What does this PR do? Why is it needed?**

Implemented the spec change:
https://github.com/ethereum/beacon-APIs/pull/586

(Removed next epoch lookahead from PTC duties)
2026-03-12 16:43:32 +00:00
terence
0fbd643c02 forkchoice: fix head change on every attestation tick pre gloas (#16508)
`store.go` creates a `fullNodeByRoot` entry with `full=true` for all
fulu blocks because pre-Ggoas blocks embed their EL payload inline and
need optimistic validation tracking. `FullHead` was surfacing this
internal `full=true` to callers without distinguishing whether the node
was actually post-gloas.

`UpdateHead` called `FullHead` which returned `full=true` for fulu
blocks. Meanwhile `saveHead` always stored `s.head.full=false` for
pre-gloas states. The mismatch caused `isNewHead` to fire on every
ticker interval.

I liked the option of fixing this in `FullHead`; it's a simple change. The
con is that `CanonicalNodeAtSlot` is a different path that also returns
`pn.full` for pre-gloas. It's already call-site guarded by `GloasForkEpoch`,
so it's fine, but the internal inconsistency remains.

**Option 2** is to fix in `saveHead` for fulu:
```go
if headState.Version() >= version.Gloas {
	// keep the full flag reported by forkchoice
} else {
	full = true
}
```
This aligns `s.head.full` with what `FullHead` returns.

**Option 3** is to fix in `isNewHead` (ignore `full` pre-gloas):
```go
    postGloas := slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().GloasForkEpoch
    fullChanged := postGloas && full != currentFull
    return r != currentHeadRoot || fullChanged || r == [32]byte{}
```

**Option 4** is to fix in `store.go` so it does not expose `full=true` pre
gloas, by changing `choosePayloadContent` to return the empty node for pre
gloas. I haven't analyzed this one deeply enough to understand its
downstream effects:
```go
func (s *Store) choosePayloadContent(n *Node) *PayloadNode {
    fn := s.fullNodeByRoot[n.root]
    en := s.emptyNodeByRoot[n.root]
    if fn == nil {
        return en
    }
    if fn.full && slots.ToEpoch(n.slot) < params.BeaconConfig().GloasForkEpoch {
        return en
    }
    ...
```
2026-03-11 23:29:31 +00:00
james-prysm
0e4f3231d2 adding payload_attestation_message event (#16506)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

adds payload_attestation_message event triggered via grpc and gossip
message received, introduced in
https://github.com/ethereum/beacon-APIs/pull/552

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-11 16:24:53 +00:00
james-prysm
b17f2752ab adding payload attestation apis connection from validator client (#16504)
This PR replaces the gRPC connection stub with actual calls to the API endpoints.
2026-03-10 18:52:50 +00:00
Potuz
932e5eb7d8 Use execution block hash as state access key post-Gloas (#16459)
Post-Gloas, SaveState and the next-slot cache are keyed by execution
block hash as well as beacon block root. Thread an accessRoot through
getStateAndBlock (for StateByRoot) and getPayloadAttribute (for
ProcessSlotsUsingNextSlotCache) so each caller can supply the right
key.

Add FullHead to forkchoice: it returns the head root, the block hash
of the nearest full payload (walking up to an ancestor when the head
is empty), and whether the head itself is full. This gives callers
enough information to derive the correct accessRoot.

Rework UpdateHead to call FullHead and branch on post-Gloas: the
Gloas path computes accessRoot from the full block hash and sends FCU
via notifyForkchoiceUpdateGloas; the legacy path keeps the existing
forkchoiceUpdateWithExecution flow. saveHead and pruneAttsFromPool
now use local variables instead of the removed shared fcuArgs.

Flatten the else branch in lateBlockTasks so the Gloas path returns
early and the pre-Gloas path runs at the top indentation level.

In postPayloadHeadUpdate pass the envelope's block hash as accessRoot
since the state was just saved under that key.
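
A minimal sketch of the caller-side key derivation described above; the
`FullHead` signature modeled here is an assumption based on this
description, not Prysm's actual interface.

```go
package sketch

// deriveAccessRoot picks the state access key: post-Gloas, states are
// saved under the execution block hash of the nearest full payload
// (an ancestor's, when the head is empty); pre-Gloas, they remain
// keyed by beacon block root.
func deriveAccessRoot(postGloas bool, fullHead func() (root, fullHash [32]byte, isFull bool)) [32]byte {
	headRoot, fullBlockHash, _ := fullHead()
	if postGloas {
		return fullBlockHash
	}
	return headRoot
}
```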
2026-03-10 17:33:18 +00:00
Jun Song
de233438f1 Remove unnecessary memory allocation while encoding in JSON at GetBeaconStateV2 (#16485)
**What type of PR is this?**

> Other: Optimization

**What does this PR do? Why is it needed?**

In `GetBeaconStateV2`, an extra `json.Marshal` call is used for type
checking, which allocates `[]byte` **twice**, so the total memory usage
is increased by the size of `BeaconState`. Currently, the mainnet
`BeaconState` exceeds 250MB, so I think this can be a good optimization
for nodes that serve `BeaconState` periodically.

**Which issues(s) does this PR fix?**

N/A

**Other notes for review**

I used [this benchmark
script](https://gist.github.com/syjn99/499eaae60e37447297bb568280ab2b64).

Before:

```
Running tool: /opt/homebrew/bin/go test -test.fullpath=true -benchmem -run=^$ -bench ^BenchmarkGetBeaconStateV2$ github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/debug

goos: darwin
goarch: arm64
pkg: github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/debug
cpu: Apple M4 Pro
BenchmarkGetBeaconStateV2/validators_64-12         	      49	  24245383 ns/op	42510257 B/op	  250511 allocs/op
BenchmarkGetBeaconStateV2/validators_100000-12     	       5	 225380608 ns/op	347510451 B/op	 1849520 allocs/op
BenchmarkGetBeaconStateV2/validators_1000000-12    	       1	2437710417 ns/op	2901637784 B/op	16249530 allocs/op
PASS
ok  	github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/debug	7.431s
```

After:
```
Running tool: /opt/homebrew/bin/go test -test.fullpath=true -benchmem -run=^$ -bench ^BenchmarkGetBeaconStateV2$ github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/debug

goos: darwin
goarch: arm64
pkg: github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/debug
cpu: Apple M4 Pro
BenchmarkGetBeaconStateV2/validators_64-12         	     129	   8947429 ns/op	34877098 B/op	  250497 allocs/op
BenchmarkGetBeaconStateV2/validators_100000-12     	      12	  90652486 ns/op	254957336 B/op	 1849493 allocs/op
BenchmarkGetBeaconStateV2/validators_1000000-12    	       1	1182659125 ns/op	2475831984 B/op	16249524 allocs/op
PASS
ok  	github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/debug	5.816s
```

Summary:
```
validators_64 ->
Time: 24.2ms → 8.9ms (2.71x faster, −63%)
Memory: 42.5MB → 34.9MB (−18%)

validators_100,000 ->
Time: 225ms → 91ms (2.49x faster, −60%)
Memory: 347.5MB → 255.0MB (−26.6%)

validators_1,000,000 ->
Time: 2,438ms → 1,183ms (2.06x faster, −51%)
Memory: 2,901MB → 2,476MB (−14.7%)
```

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-03-10 15:37:04 +00:00
Potuz
e35f6c351a Handle missing Payloads (#16460)
In the event of a late missing Payload, we add a helper that updates the
beacon block post state in the NSC, handles the boundary caches, and
calls FCU if we are proposing next slot.
2026-03-10 13:59:50 +00:00
terence
4da5ed072c proposer: add payload attestations packing (#16493)
This PR adds packing of payload attestations into a beacon block. It
filters payload attestations using slot and beacon-root checks; a minimal
sketch follows.
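
A minimal sketch of that filter, under the EIP-7732 inclusion rules as I
understand them (an attestation for slot N is packed into the block at slot
N+1 and must reference that block's parent root); the types and function
are illustrative stand-ins, not Prysm's packing code.

```go
package sketch

type payloadAttestation struct {
	slot            uint64
	beaconBlockRoot [32]byte
}

// packPayloadAtts keeps only pool entries matching the slot and
// parent-root expectations of the block being packed.
func packPayloadAtts(pool []payloadAttestation, parentRoot [32]byte, blockSlot uint64) []payloadAttestation {
	packed := make([]payloadAttestation, 0, len(pool))
	for _, att := range pool {
		if att.slot+1 != blockSlot || att.beaconBlockRoot != parentRoot {
			continue
		}
		packed = append(packed, att)
	}
	return packed
}
```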
2026-03-10 02:27:03 +00:00
721 changed files with 30027 additions and 6522 deletions

View File

@@ -1,4 +1,4 @@
version: v1.7.0-alpha.2
version: v1.7.0-alpha.4
style: full
specrefs:
@@ -23,6 +23,8 @@ exceptions:
- PTC_SIZE#gloas
constants:
# heze
- DOMAIN_INCLUSION_LIST_COMMITTEE#heze
# phase0
- BASIS_POINTS#phase0
- ENDIANNESS#phase0
@@ -61,7 +63,6 @@ exceptions:
- ATTESTATION_TIMELINESS_INDEX#gloas
- BUILDER_INDEX_FLAG#gloas
- BUILDER_INDEX_SELF_BUILD#gloas
- DOMAIN_PROPOSER_PREFERENCES#gloas
- NUM_BLOCK_TIMELINESS_DEADLINES#gloas
- PTC_TIMELINESS_INDEX#gloas
@@ -72,12 +73,31 @@ exceptions:
- CONTRIBUTION_DUE_BPS_GLOAS#gloas
- GLOAS_FORK_EPOCH#gloas
- GLOAS_FORK_VERSION#gloas
- MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
- SYNC_MESSAGE_DUE_BPS_GLOAS#gloas
# heze
- HEZE_FORK_EPOCH#heze
- HEZE_FORK_VERSION#heze
- INCLUSION_LIST_SUBMISSION_DUE_BPS#heze
- MAX_BYTES_PER_INCLUSION_LIST#heze
- MAX_REQUEST_INCLUSION_LIST#heze
- PROPOSER_INCLUSION_LIST_CUTOFF_BPS#heze
- VIEW_FREEZE_CUTOFF_BPS#heze
ssz_objects:
# phase0
- Eth1Block#phase0
# fulu
- PartialDataColumnHeader#fulu
- PartialDataColumnPartsMetadata#fulu
- PartialDataColumnSidecar#fulu
# gloas
- PartialDataColumnHeader#gloas
# heze
- BeaconState#heze
- ExecutionPayloadBid#heze
- InclusionList#heze
- SignedExecutionPayloadBid#heze
- SignedInclusionList#heze
# capella
- LightClientBootstrap#capella
- LightClientFinalityUpdate#capella
@@ -107,6 +127,7 @@ exceptions:
dataclasses:
# phase0
- LatestMessage#phase0
- Seen#phase0
- Store#phase0
# altair
- LightClientStore#altair
@@ -123,6 +144,11 @@ exceptions:
- ExpectedWithdrawals#gloas
- LatestMessage#gloas
- Store#gloas
# heze
- GetInclusionListResponse#heze
- InclusionListStore#heze
- PayloadAttributes#heze
- Store#heze
functions:
# Functions implemented by KZG library for EIP-4844
@@ -179,11 +205,22 @@ exceptions:
- verify_cell_kzg_proof_batch_impl#fulu
# phase0
- compute_attestation_subnet_prefix_bits#phase0
- compute_min_epochs_for_block_requests#phase0
- compute_time_at_slot_ms#phase0
- is_not_from_future_slot#phase0
- is_within_slot_range#phase0
- update_proposer_boost_root#phase0
- is_proposer_equivocation#phase0
- record_block_timeliness#phase0
- compute_proposer_score#phase0
- get_attestation_score#phase0
- validate_attester_slashing_gossip#phase0
- validate_beacon_aggregate_and_proof_gossip#phase0
- validate_beacon_attestation_gossip#phase0
- validate_beacon_block_gossip#phase0
- validate_proposer_slashing_gossip#phase0
- validate_voluntary_exit_gossip#phase0
- calculate_committee_fraction#phase0
- compute_fork_version#phase0
- compute_pulled_up_tip#phase0
@@ -274,6 +311,7 @@ exceptions:
- upgrade_lc_store_to_capella#capella
- upgrade_lc_update_to_capella#capella
# deneb
- compute_max_request_blob_sidecars#deneb
- get_lc_execution_root#deneb
- is_valid_light_client_header#deneb
- prepare_execution_payload#deneb
@@ -284,6 +322,7 @@ exceptions:
- upgrade_lc_store_to_deneb#deneb
- upgrade_lc_update_to_deneb#deneb
# electra
- compute_max_request_blob_sidecars#electra
- compute_weak_subjectivity_period#electra
- current_sync_committee_gindex_at_slot#electra
- finalized_root_gindex_at_slot#electra
@@ -305,12 +344,20 @@ exceptions:
- upgrade_lc_store_to_electra#electra
- upgrade_lc_update_to_electra#electra
# fulu
- compute_max_request_data_column_sidecars#fulu
- compute_matrix#fulu
- verify_partial_data_column_header_inclusion_proof#fulu
- verify_partial_data_column_sidecar_kzg_proofs#fulu
- get_blob_parameters#fulu
- get_data_column_sidecars_from_block#fulu
- get_data_column_sidecars_from_column_sidecar#fulu
- recover_matrix#fulu
# gloas
- compute_ptc#gloas
- initialize_ptc_window#gloas
- is_payload_data_available#gloas
- is_pending_validator#gloas
- process_ptc_window#gloas
- compute_balance_weighted_acceptance#gloas
- compute_balance_weighted_selection#gloas
- compute_fork_version#gloas
@@ -388,10 +435,8 @@ exceptions:
- get_builder_withdrawals#gloas
- get_builders_sweep_withdrawals#gloas
- get_index_for_new_builder#gloas
- get_pending_balance_to_withdraw_for_builder#gloas
- get_proposer_preferences_signature#gloas
- get_upcoming_proposal_slots#gloas
- initiate_builder_exit#gloas
- is_active_builder#gloas
- is_builder_index#gloas
- is_data_available#gloas
@@ -402,7 +447,6 @@ exceptions:
- is_valid_proposal_slot#gloas
- onboard_builders_from_pending_deposits#gloas
- process_deposit_request#gloas
- process_voluntary_exit#gloas
- record_block_timeliness#gloas
- record_block_timeliness#phase0
- verify_data_column_sidecar_kzg_proofs#gloas
@@ -411,6 +455,28 @@ exceptions:
- update_next_withdrawal_builder_index#gloas
- update_payload_expected_withdrawals#gloas
- update_proposer_boost_root#gloas
# heze
- compute_fork_version#heze
- get_forkchoice_store#heze
- get_inclusion_list_bits#heze
- get_inclusion_list_committee_assignment#heze
- get_inclusion_list_committee#heze
- get_inclusion_list_signature#heze
- get_inclusion_list_store#heze
- get_inclusion_list_submission_due_ms#heze
- get_inclusion_list_transactions#heze
- get_proposer_inclusion_list_cutoff_ms#heze
- get_view_freeze_cutoff_ms#heze
- is_inclusion_list_bits_inclusive#heze
- is_payload_inclusion_list_satisfied#heze
- is_valid_inclusion_list_signature#heze
- on_execution_payload#heze
- on_inclusion_list#heze
- prepare_execution_payload#heze
- process_inclusion_list#heze
- record_payload_inclusion_list_satisfaction#heze
- should_extend_payload#heze
- upgrade_to_heze#heze
presets:
# gloas
@@ -419,3 +485,5 @@ exceptions:
- MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas
- MAX_PAYLOAD_ATTESTATIONS#gloas
- PTC_SIZE#gloas
# heze
- INCLUSION_LIST_COMMITTEE_SIZE#heze

.github/workflows/sbom-export.yaml vendored Normal file
View File

@@ -0,0 +1,67 @@
name: SBOM Export & Centralize
on:
push:
branches: [ "develop" ]
schedule:
- cron: '50 21 * * 2'
permissions:
contents: read
jobs:
generate-and-upload:
runs-on: ubuntu-latest
steps:
- name: Checkout Source Code
uses: actions/checkout@v6
- name: Check for recent changes
id: check
run: |
if [ -z "$(git log --since='7 days ago' --oneline | head -1)" ]; then
echo "No commits in the last 7 days, skipping SBOM generation."
echo "skip=true" >> "$GITHUB_OUTPUT"
fi
- name: Generate CycloneDX SBOM via cdxgen
if: steps.check.outputs.skip != 'true'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
docker run --rm \
--user "$(id -u):$(id -g)" \
-v /tmp:/tmp \
-v "${{ github.workspace }}:/app:rw" \
-e FETCH_LICENSE=true \
-e GITHUB_TOKEN \
ghcr.io/cdxgen/cdxgen:v12.1.1 \
-r /app \
-o /app/sbom.cdx.json \
--no-install-deps \
--spec-version 1.6
if [ ! -s sbom.cdx.json ]; then
echo "::error::cdxgen SBOM generation failed or returned empty."
exit 1
fi
echo "SBOM generated successfully:"
ls -lh sbom.cdx.json
- name: Upload SBOM to Dependency Track
if: steps.check.outputs.skip != 'true'
env:
DT_API_KEY: ${{ secrets.DEPENDENCY_TRACK_API_KEY }}
DT_URL: ${{ secrets.DEPENDENCY_TRACK_URL }}
run: |
REPO_NAME=${GITHUB_REPOSITORY##*/}
curl -sf -X POST "${DT_URL}/api/v1/bom" \
-H "X-Api-Key: ${DT_API_KEY}" \
-F "autoCreate=true" \
-F "projectName=${REPO_NAME}" \
-F "projectVersion=${{ github.ref_name }}" \
-F "bom=@sbom.cdx.json"
echo "SBOM uploaded to Dependency Track for ${REPO_NAME}@${{ github.ref_name }}"

View File

@@ -4,6 +4,116 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.3](https://github.com/prysmaticlabs/prysm/compare/v7.1.2...v7.1.3) - 2026-03-18
This release brings extensive Gloas (next fork) groundwork, a major logging infrastructure overhaul, and numerous performance optimizations across the beacon chain. A security update to go-ethereum v1.16.8 is also included.
Release highlights:
- **Gloas fork preparation**: Builder registry, bid processing, payload attestation, proposer slashing, slot processing, block API endpoints, and duty timing intervals are all wired up.
- **Logging revamp**: New ephemeral debug logfile (24h retention, enabled by default), per-package loggers with CI enforcement, per-hook verbosity control (`--log.vmodule`), and a version banner at startup.
- **Performance**: Forkchoice updates moved to background, post-Electra attestation data cached per slot, parallel data column cache warmup, reduced heap allocations in SSZ marshaling and `MixInLength`, and proposer preprocessing behind a feature flag.
- **Validator client**: gRPC fallback now matches the REST API implementation — both connect only to fully synced nodes. The gRPC health endpoint returns an error on syncing/optimistic status.
- **Security**: go-ethereum updated to v1.16.8; fixed an authentication bypass on `/v2/validator/*` endpoints.
- **State storage**: Initial support for the `hdiff` state-diff feature — migration-to-cold and DB initialization are now available behind feature flags.
There are no known security issues in this release. Operators can update at their convenience.
### Added
- Use the head state to validate attestations for the previous epoch if head is compatible with the target checkpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16109)
- Added separate logrus hooks for handling the formatting and output of terminal logs vs log-file logs, instead of a single shared hook. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16102)
- Batch publish data columns for faster data propagation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16183)
- `--disable-get-blobs-v2` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16155)
- Update spectests to v1.7.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16219)
- Added basic Gloas builder support (`Builder` message and `BeaconStateGloas` `builders`/`next_withdrawal_builder_index` fields). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16164)
- Added an ephemeral debug logfile for beacon and validator nodes that captures debug-level logs for 24 hours. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16108)
- Add a feature flag to pass spectests with low validator count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16231)
- Add feature flag `--enable-proposer-preprocessing` to process the block and verify signatures before proposing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15920)
- Add `ProofByFieldIndex` to generalize merkle proof generation for `BeaconState`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15443)
- Update spectests to v1.7.0-alpha-1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16246)
- Add feature flag to use hashtree instead of gohashtree. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16216)
- Migrate to cold with the hdiff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16049)
- Adding basic fulu fork transition support for mainnet and minimal e2e tests (multi scenario is not included). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15640)
- `commitment_count_in_gossip_processed_blocks` gauge metric to track the number of blob KZG commitments in processed beacon blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16254)
- Add Gloas latest execution bid processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15638)
- Added shell completion support for `beacon-chain` and `validator` CLI tools. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16245)
- add pending payments processing and quorum threshold, plus spectests and state hooks (rotate/append). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15655)
- Add slot processing with execution payload availability updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15730)
- Implement modified proposer slashing for gloas. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16212)
- Added missing beacon config in fulu so that the presets don't go missing in /eth/v1/config/spec beacon api. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16170)
- Close opened file in data_column.go. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16274)
- Flag `--log.vmodule` to set per-package verbosity levels for logging. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16272)
- Added a version log at startup to display the version of the build. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16283)
- gloas block return support for /eth/v2/beacon/blocks/{block_id} and /eth/v1/beacon/blocks/{block_id}/root endpoints. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16278)
- Add Gloas process payload attestation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15650)
- Initialize db with state-diff feature flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16203)
- Gloas-specific timing intervals for validator attestation, aggregation, and sync duties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16291)
- Added new proofCollector type to ssz-query. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16177)
- Added README for maintaining specrefs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16302)
- The ability to download the nightly reference tests from a specific day. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16298)
- Set beacon node options after reading the config file. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16320)
- Implement finalization-based eviction for `CheckpointStateCache`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16458)
### Changed
- Performance improvement in ProcessConsolidationRequests: Use the more performant HasPendingBalanceToWithdraw instead of PendingBalanceToWithdraw, since there is no need to calculate the full total pending balance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16189)
- Extend `httperror` analyzer to more functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16186)
- Do not check block signature on state transition. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14820)
- Notify the engine about forkchoice updates in the background. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16149)
- Use a separate context when updating the slot cache. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16209)
- Data column sidecars cache warmup: Process in parallel all sidecars for a given epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16207)
- Use lookahead to validate data column sidecar proposer index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16202)
- Summarize DEBUG log corresponding to incoming via gossip data column sidecar. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16210)
- Added a log.go file for every important package with a logger variable containing a `package` field set to the package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16059)
- Added a CI check to ensure every important package has a log.go file with the correct `package` field. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16059)
- Changed the log formatter to use this `package` field instead of the previous `prefix` field. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16059)
- Replaced `time.Sleep` with `require.Eventually` polling in tests to fix flaky behavior caused by race conditions between goroutines and assertions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16217)
- changed IsHealthy check to IsReady for validator client's interpretation from /eth/v1/node/health, 206 will now return false as the node is syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16167)
- Performance improvement in state (MarshalSSZTo): use copy() instead of byte-by-byte loop which isn't required. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16222)
- Moved verbosity settings to be configurable per hook, rather than just globally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16106)
- updated go ethereum to 1.16.7. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15640)
- Use dependent root and target root to verify data column proposer index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16250)
- post electra we now call attestation data once per slot and use a cache for subsequent requests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16236)
- Avoid unnecessary heap allocation while calling MixInLength. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16251)
- Log commitments instead of indices in missingCommitError. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16258)
- Added some defensive checks to prevent overflows in block batch requests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16227)
- gRPC health endpoint will now return an error on syncing or optimistic status showing that it's unavailable. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16294)
- Sample PTC per committee to reduce allocations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16293)
- gRPC fallback now matches rest api implementation and will also check and connect to only synced nodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16215)
- Improved node fallback logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16316)
- Improved integrations with ethspecify so specrefs can be used throughout the codebase. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16304)
- Fixed the logging issue described in #16314. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16322)
### Removed
- removed github.com/MariusVanDerWijden/FuzzyVM and github.com/MariusVanDerWijden/tx-fuzz due to lack of support post 1.16.7, only used in e2e for transaction fuzzing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15640)
- Remove unused `delay` parameter from `fetchOriginDataColumnSidecars` function. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16262)
- Batching of KZG verification for incoming via gossip data column sidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16240)
- `--disable-get-blobs-v2` flag from help. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16265)
- gRPC resolver for load balancing, the new implementation matches rest api's so we should remove the resolver so it's handled the same way for consistency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16215)
### Fixed
- avoid panic when fork schedule is empty [#16175](https://github.com/OffchainLabs/prysm/pull/16175). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16175)
- Fix validation logic for `--backfill-oldest-slot`, which was rejecting slots newer than 1056767. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16173)
- Don't call trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize) twice. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16211)
- When adding the `--[semi-]supernode` flag, update the earliest available slot accordingly. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16230)
- fixed broken and outdated links. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15856)
- stop SlotIntervalTicker goroutine leaks [#16241](https://github.com/OffchainLabs/prysm/pull/16241). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16241)
- Fix `prysmctl testnet generate-genesis` to use the timestamp from `--geth-genesis-json-in` when `--genesis-time` is not explicitly provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16239)
- Prevent authentication bypass on direct `/v2/validator/*` endpoints by enforcing auth checks for non-public routes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16226)
- Fixed a typo: AggregrateDueBPS -> AggregateDueBPS. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16194)
- Fixed a bug in `hack/check-logs.sh` where untracked files were ignored. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16287)
- Fix hashtree release builds. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16288)
- Fix Bazel build failure on macOS x86_64 (darwin_amd64) (adds missing assembly stub to hashtree patch). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16281)
- a potential race condition when switching hosts quickly and reconnecting to same host on an old connection. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16316)
- Fixed a bug where `cmd/beacon-chain/execution` was being ignored by `hack/gen-logs.sh` due to a `.gitignore` rule. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16328)
### Security
- Update go-ethereum to v1.16.8. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16252)
## [v7.1.2](https://github.com/prysmaticlabs/prysm/compare/v7.1.1...v7.1.2) - 2026-01-07
Happy new year! This patch release is very small. The main improvement is better management of pending attestation aggregation via [PR 16153](https://github.com/OffchainLabs/prysm/pull/16153).
@@ -4046,4 +4156,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.7.0-alpha.2"
consensus_spec_version = "v1.7.0-alpha.4"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-iGQsGZ1cHah+2CSod9jC3kN8Ku4n6KO0hIwfINrn/po=",
"minimal": "sha256-TgcYt8N8sXSttdHTGvOa+exUZ1zn1UzlAMz0V7i37xc=",
"mainnet": "sha256-LnXyiLoJtrvEvbqLDSAAqpLMdN/lXv92SAgYG8fNjCs=",
"general": "sha256-kNJxuhCtW4RbuS9nb4U6JXHlPgTSg6G3hWeHFVB9gZ4=",
"minimal": "sha256-U1tCkXxtdI6mkEdk80i8z9LU2hAyf7Ztz5SBYo5oMzo=",
"mainnet": "sha256-Ga8VDOcNhTTdXDj8tSyBVYrwya9f1HO94ehJ5vv91r4=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Y/67Dg393PksZj5rTFNLntiJ6hNdB7Rxbu5gZE2gebY=",
integrity = "sha256-XHu5K/65mue+5po63L9yGTFjGfU1RGj4S56dmcHc2Rs=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -46,7 +46,7 @@ func EnsureReady(ctx context.Context, provider HostProvider, checker ReadyChecke
"previous": startingHost,
"current": provider.CurrentHost(),
"tried": attemptedHosts,
}).Info("Switched to responsive beacon node")
}).Warn("Switched to responsive beacon node")
}
return true
}

View File

@@ -3,14 +3,15 @@ package api
import "net/http"
const (
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
ExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
ExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
ExecutionPayloadIncludedHeader = "Eth-Execution-Payload-Included"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)
// SetSSEHeaders sets the headers needed for a server-sent event response.

View File

@@ -29,6 +29,8 @@ type Server struct {
startFailure error
}
const eventStreamPath = "/eth/v1/events"
// New returns a new instance of the Server.
func New(ctx context.Context, opts ...Option) (*Server, error) {
g := &Server{
@@ -48,7 +50,17 @@ func New(ctx context.Context, opts ...Option) (*Server, error) {
handler = middleware.MiddlewareChain(g.cfg.router, g.cfg.middlewares)
if g.cfg.timeout > 0*time.Second {
defaultReadHeaderTimeout = g.cfg.timeout
handler = http.TimeoutHandler(handler, g.cfg.timeout, "request timed out")
baseHandler := handler
timeoutHandler := http.TimeoutHandler(baseHandler, g.cfg.timeout, "request timed out")
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// SSE streams stay open indefinitely, so the global timeout wrapper must not
// cancel `/eth/v1/events` before the handler starts streaming responses.
if r.URL != nil && r.URL.Path == eventStreamPath {
baseHandler.ServeHTTP(w, r)
return
}
timeoutHandler.ServeHTTP(w, r)
})
}
g.server = &http.Server{
Addr: g.cfg.httpAddr,

View File

@@ -7,7 +7,9 @@ import (
"net/http"
"net/http/httptest"
"net/url"
"strings"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -37,10 +39,18 @@ func TestServer_StartStop(t *testing.T) {
require.NoError(t, err)
g.Start()
go func() {
require.LogsContain(t, hook, "Starting HTTP server")
require.LogsDoNotContain(t, hook, "Starting API middleware")
}()
require.Eventually(t, func() bool {
foundStart := false
for _, entry := range hook.AllEntries() {
if strings.Contains(entry.Message, "Starting HTTP server") {
foundStart = true
}
if strings.Contains(entry.Message, "Starting API middleware") {
return false
}
}
return foundStart
}, time.Second, 10*time.Millisecond)
err = g.Stop()
require.NoError(t, err)
}
@@ -68,3 +78,51 @@ func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
g.cfg.router.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
assert.Equal(t, http.StatusNotFound, writer.Code)
}
func TestServer_TimeoutHandlerBypassesSSE(t *testing.T) {
handler := http.NewServeMux()
handler.HandleFunc(eventStreamPath, func(w http.ResponseWriter, _ *http.Request) {
time.Sleep(20 * time.Millisecond)
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte("stream-open"))
require.NoError(t, err)
})
g, err := New(t.Context(),
WithHTTPAddr("127.0.0.1:0"),
WithRouter(handler),
WithTimeout(5*time.Millisecond),
)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodGet, eventStreamPath, nil)
writer := httptest.NewRecorder()
g.server.Handler.ServeHTTP(writer, req)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, "stream-open", writer.Body.String())
}
func TestServer_TimeoutHandlerStillAppliesToNonSSE(t *testing.T) {
handler := http.NewServeMux()
handler.HandleFunc("/foo", func(w http.ResponseWriter, _ *http.Request) {
time.Sleep(20 * time.Millisecond)
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte("ok"))
require.NoError(t, err)
})
g, err := New(t.Context(),
WithHTTPAddr("127.0.0.1:0"),
WithRouter(handler),
WithTimeout(5*time.Millisecond),
)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodGet, "/foo", nil)
writer := httptest.NewRecorder()
g.server.Handler.ServeHTTP(writer, req)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "request timed out"))
}

View File

@@ -54,11 +54,13 @@ go_test(
name = "go_default_test",
srcs = [
"conversions_block_execution_test.go",
"conversions_block_gloas_test.go",
"conversions_test.go",
],
embed = [":go_default_library"],
deps = [
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",

View File

@@ -540,6 +540,12 @@ type PayloadAttestation struct {
Signature string `json:"signature"`
}
type PayloadAttestationMessage struct {
ValidatorIndex string `json:"validator_index"`
Data *PayloadAttestationData `json:"data"`
Signature string `json:"signature"`
}
type BeaconBlockBodyGloas struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
@@ -578,6 +584,13 @@ func (s *SignedBeaconBlockGloas) SigString() string {
return s.Signature
}
type BlockContentsGloas struct {
Block *BeaconBlockGloas `json:"block"`
ExecutionPayloadEnvelope *ExecutionPayloadEnvelope `json:"execution_payload_envelope"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type ExecutionPayloadEnvelope struct {
Payload *ExecutionPayloadDeneb `json:"payload"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`

View File

@@ -2966,6 +2966,14 @@ func PayloadAttestationFromConsensus(pa *eth.PayloadAttestation) *PayloadAttesta
}
}
func PayloadAttestationMessageFromConsensus(m *eth.PayloadAttestationMessage) *PayloadAttestationMessage {
return &PayloadAttestationMessage{
ValidatorIndex: fmt.Sprintf("%d", m.ValidatorIndex),
Data: PayloadAttestationDataFromConsensus(m.Data),
Signature: hexutil.Encode(m.Signature),
}
}
func PayloadAttestationDataFromConsensus(d *eth.PayloadAttestationData) *PayloadAttestationData {
return &PayloadAttestationData{
BeaconBlockRoot: hexutil.Encode(d.BeaconBlockRoot),
@@ -2975,6 +2983,19 @@ func PayloadAttestationDataFromConsensus(d *eth.PayloadAttestationData) *Payload
}
}
func (b *SignedBeaconBlockGloas) ToGeneric() (*eth.GenericSignedBeaconBlock, error) {
if b == nil {
return nil, errNilValue
}
signed, err := b.ToConsensus()
if err != nil {
return nil, err
}
return &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Gloas{Gloas: signed},
}, nil
}
func (b *SignedBeaconBlockGloas) ToConsensus() (*eth.SignedBeaconBlockGloas, error) {
if b == nil {
return nil, errNilValue
@@ -3129,6 +3150,14 @@ func (b *BeaconBlockBodyGloas) ToConsensus() (*eth.BeaconBlockBodyGloas, error)
}, nil
}
func (b *BeaconBlockGloas) ToGeneric() (*eth.GenericBeaconBlock, error) {
block, err := b.ToConsensus()
if err != nil {
return nil, errors.Wrap(err, "could not convert gloas block to consensus")
}
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_Gloas{Gloas: block}}, nil
}
func (b *SignedExecutionPayloadBid) ToConsensus() (*eth.SignedExecutionPayloadBid, error) {
if b == nil {
return nil, errNilValue
@@ -3276,25 +3305,113 @@ func (d *PayloadAttestationData) ToConsensus() (*eth.PayloadAttestationData, err
}, nil
}
// SignedExecutionPayloadEnvelopeFromConsensus converts a proto envelope to the API struct.
func SignedExecutionPayloadEnvelopeFromConsensus(e *eth.SignedExecutionPayloadEnvelope) (*SignedExecutionPayloadEnvelope, error) {
payload, err := ExecutionPayloadDenebFromConsensus(e.Message.Payload)
// ExecutionPayloadEnvelopeFromConsensus converts a proto envelope to the API struct.
func ExecutionPayloadEnvelopeFromConsensus(e *eth.ExecutionPayloadEnvelope) (*ExecutionPayloadEnvelope, error) {
payload, err := ExecutionPayloadDenebFromConsensus(e.Payload)
if err != nil {
return nil, err
}
var requests *ExecutionRequests
if e.Message.ExecutionRequests != nil {
requests = ExecutionRequestsFromConsensus(e.Message.ExecutionRequests)
if e.ExecutionRequests != nil {
requests = ExecutionRequestsFromConsensus(e.ExecutionRequests)
}
return &ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: requests,
BuilderIndex: fmt.Sprintf("%d", e.BuilderIndex),
BeaconBlockRoot: hexutil.Encode(e.BeaconBlockRoot),
Slot: fmt.Sprintf("%d", e.Slot),
StateRoot: hexutil.Encode(e.StateRoot),
}, nil
}
// SignedExecutionPayloadEnvelopeFromConsensus converts a signed proto envelope to the API struct.
func SignedExecutionPayloadEnvelopeFromConsensus(e *eth.SignedExecutionPayloadEnvelope) (*SignedExecutionPayloadEnvelope, error) {
envelope, err := ExecutionPayloadEnvelopeFromConsensus(e.Message)
if err != nil {
return nil, err
}
return &SignedExecutionPayloadEnvelope{
Message: &ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: requests,
BuilderIndex: fmt.Sprintf("%d", e.Message.BuilderIndex),
BeaconBlockRoot: hexutil.Encode(e.Message.BeaconBlockRoot),
Slot: fmt.Sprintf("%d", e.Message.Slot),
StateRoot: hexutil.Encode(e.Message.StateRoot),
},
Message: envelope,
Signature: hexutil.Encode(e.Signature),
}, nil
}
// BlockContentsGloasFromConsensus converts a proto Gloas block and envelope to the API struct.
func BlockContentsGloasFromConsensus(block *eth.BeaconBlockGloas, envelope *eth.ExecutionPayloadEnvelope) (*BlockContentsGloas, error) {
b, err := BeaconBlockGloasFromConsensus(block)
if err != nil {
return nil, err
}
env, err := ExecutionPayloadEnvelopeFromConsensus(envelope)
if err != nil {
return nil, err
}
return &BlockContentsGloas{
Block: b,
ExecutionPayloadEnvelope: env,
KzgProofs: []string{}, // TODO: populate from blobs bundle
Blobs: []string{}, // TODO: populate from blobs bundle
}, nil
}
// ToConsensus converts the API struct to a proto ExecutionPayloadEnvelope.
func (e *ExecutionPayloadEnvelope) ToConsensus() (*eth.ExecutionPayloadEnvelope, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "ExecutionPayloadEnvelope")
}
payload, err := e.Payload.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Payload")
}
var requests *enginev1.ExecutionRequests
if e.ExecutionRequests != nil {
requests, err = e.ExecutionRequests.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionRequests")
}
}
builderIndex, err := strconv.ParseUint(e.BuilderIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "BuilderIndex")
}
beaconBlockRoot, err := bytesutil.DecodeHexWithLength(e.BeaconBlockRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "BeaconBlockRoot")
}
slot, err := strconv.ParseUint(e.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
}
stateRoot, err := bytesutil.DecodeHexWithLength(e.StateRoot, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "StateRoot")
}
return &eth.ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: requests,
BuilderIndex: primitives.BuilderIndex(builderIndex),
BeaconBlockRoot: beaconBlockRoot,
Slot: primitives.Slot(slot),
StateRoot: stateRoot,
}, nil
}
// ToConsensus converts the API struct to a proto SignedExecutionPayloadEnvelope.
func (e *SignedExecutionPayloadEnvelope) ToConsensus() (*eth.SignedExecutionPayloadEnvelope, error) {
if e == nil {
return nil, server.NewDecodeError(errNilValue, "SignedExecutionPayloadEnvelope")
}
msg, err := e.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
sig, err := bytesutil.DecodeHexWithLength(e.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SignedExecutionPayloadEnvelope{
Message: msg,
Signature: sig,
}, nil
}

View File

@@ -0,0 +1,67 @@
package structs
import (
"testing"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
)
func testEnvelopeProto() *eth.ExecutionPayloadEnvelope {
return &eth.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: fillByteSlice(common.HashLength, 0xaa),
FeeRecipient: fillByteSlice(20, 0xbb),
StateRoot: fillByteSlice(32, 0xcc),
ReceiptsRoot: fillByteSlice(32, 0xdd),
LogsBloom: fillByteSlice(256, 0xee),
PrevRandao: fillByteSlice(32, 0xff),
BaseFeePerGas: fillByteSlice(32, 0x11),
BlockHash: fillByteSlice(common.HashLength, 0x22),
},
ExecutionRequests: &enginev1.ExecutionRequests{},
BuilderIndex: 7,
BeaconBlockRoot: fillByteSlice(32, 0x33),
Slot: 42,
StateRoot: fillByteSlice(32, 0x44),
}
}
func TestExecutionPayloadEnvelopeFromConsensus(t *testing.T) {
env := testEnvelopeProto()
result, err := ExecutionPayloadEnvelopeFromConsensus(env)
require.NoError(t, err)
require.NotNil(t, result.Payload)
require.Equal(t, hexutil.Encode(env.Payload.ParentHash), result.Payload.ParentHash)
require.Equal(t, "7", result.BuilderIndex)
require.Equal(t, hexutil.Encode(env.BeaconBlockRoot), result.BeaconBlockRoot)
require.Equal(t, "42", result.Slot)
require.Equal(t, hexutil.Encode(env.StateRoot), result.StateRoot)
require.NotNil(t, result.ExecutionRequests)
}
func TestExecutionPayloadEnvelopeFromConsensus_NilRequests(t *testing.T) {
env := testEnvelopeProto()
env.ExecutionRequests = nil
result, err := ExecutionPayloadEnvelopeFromConsensus(env)
require.NoError(t, err)
require.Equal(t, (*ExecutionRequests)(nil), result.ExecutionRequests)
}
func TestBlockContentsGloasFromConsensus(t *testing.T) {
block := util.NewBeaconBlockGloas().Block
env := testEnvelopeProto()
result, err := BlockContentsGloasFromConsensus(block, env)
require.NoError(t, err)
require.NotNil(t, result.Block)
require.NotNil(t, result.Block.Body)
require.NotNil(t, result.ExecutionPayloadEnvelope)
require.Equal(t, hexutil.Encode(env.BeaconBlockRoot), result.ExecutionPayloadEnvelope.BeaconBlockRoot)
require.Equal(t, 0, len(result.KzgProofs))
require.Equal(t, 0, len(result.Blobs))
}

View File

@@ -5,6 +5,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -488,7 +489,7 @@ func TestBeaconStateGloasFromConsensus(t *testing.T) {
state.GenesisTime = 123
state.GenesisValidatorsRoot = bytes.Repeat([]byte{0x10}, 32)
state.Slot = 5
state.ProposerLookahead = []uint64{1, 2}
state.ProposerLookahead = []primitives.ValidatorIndex{1, 2}
state.LatestExecutionPayloadBid = &eth.ExecutionPayloadBid{
ParentBlockHash: bytes.Repeat([]byte{0x11}, 32),
ParentBlockRoot: bytes.Repeat([]byte{0x12}, 32),

View File

@@ -53,8 +53,8 @@ type ChainReorgEvent struct {
Slot string `json:"slot"`
Depth string `json:"depth"`
OldHeadBlock string `json:"old_head_block"`
NewHeadBlock string `json:"old_head_state"`
OldHeadState string `json:"new_head_block"`
NewHeadBlock string `json:"new_head_block"`
OldHeadState string `json:"old_head_state"`
NewHeadState string `json:"new_head_state"`
Epoch string `json:"epoch"`
ExecutionOptimistic bool `json:"execution_optimistic"`

View File

@@ -95,6 +95,14 @@ type ProduceBlockV3Response struct {
Data json.RawMessage `json:"data"` // represents the block values based on the version
}
// ProduceBlockV4Response is a wrapper json object for the returned block from the ProduceBlockV4 endpoint
type ProduceBlockV4Response struct {
Version string `json:"version"`
ConsensusBlockValue string `json:"consensus_block_value"`
ExecutionPayloadIncluded bool `json:"execution_payload_included"`
Data json.RawMessage `json:"data"`
}
type GetLivenessResponse struct {
Data []*Liveness `json:"data"`
}
@@ -151,6 +159,11 @@ type ValidatorParticipation struct {
PreviousEpochHeadAttestingGwei string `json:"previous_epoch_head_attesting_gwei"`
}
type GetValidatorExecutionPayloadEnvelopeResponse struct {
Version string `json:"version"`
Data *ExecutionPayloadEnvelope `json:"data"`
}
type ActiveSetChanges struct {
Epoch string `json:"epoch"`
ActivatedPublicKeys []string `json:"activated_public_keys"`

View File

@@ -141,6 +141,7 @@ go_test(
"service_test.go",
"setup_forkchoice_test.go",
"setup_test.go",
"tracked_proposer_test.go",
"weak_subjectivity_checks_test.go",
],
embed = [":go_default_library"],
@@ -154,7 +155,6 @@ go_test(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",

View File

@@ -40,6 +40,7 @@ type ChainInfoFetcher interface {
// of locking forkchoice
type ForkchoiceFetcher interface {
Ancestor(context.Context, []byte, primitives.Slot) ([]byte, error)
BlockHash(root [32]byte) ([32]byte, error)
CachedHeadRoot() [32]byte
GetProposerHead() [32]byte
SetForkChoiceGenesisTime(time.Time)
@@ -47,6 +48,7 @@ type ForkchoiceFetcher interface {
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockRoot() [32]byte
HasFullNode([32]byte) bool
PayloadContentLookup([32]byte) ([32]byte, bool)
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, consensus_blocks.ROBlock) error
InsertPayload(interfaces.ROExecutionPayloadEnvelope) error
@@ -57,6 +59,7 @@ type ForkchoiceFetcher interface {
IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error)
DependentRoot(primitives.Epoch) ([32]byte, error)
CanonicalNodeAtSlot(primitives.Slot) ([32]byte, bool)
ShouldIgnoreData(parentRoot [32]byte, dataSlot primitives.Slot) bool
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.
@@ -596,3 +599,26 @@ func (s *Service) inRegularSync() bool {
func (s *Service) validating() bool {
return s.cfg.TrackedValidatorsCache.Validating()
}
// ShouldIgnoreData returns true if the data for the given parent root and slot should be ignored.
func (s *Service) ShouldIgnoreData(parentRoot [32]byte, dataSlot primitives.Slot) bool {
currentEpoch := slots.ToEpoch(s.CurrentSlot())
if slots.ToEpoch(dataSlot) < currentEpoch {
return false
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
parentSlot, err := s.cfg.ForkChoiceStore.Slot(parentRoot)
if err != nil {
// This should not happen. The caller should have already checked the parent is in forkchoice.
return false
}
j := s.cfg.ForkChoiceStore.JustifiedCheckpoint()
if j == nil {
return false
}
if slots.ToEpoch(parentSlot) >= j.Epoch {
return false
}
return s.cfg.ForkChoiceStore.IsCanonical(parentRoot)
}

View File

@@ -49,6 +49,13 @@ func (s *Service) HighestReceivedBlockRoot() [32]byte {
return s.cfg.ForkChoiceStore.HighestReceivedBlockRoot()
}
// BlockHash returns the execution payload block hash for the given beacon block root from forkchoice.
func (s *Service) BlockHash(root [32]byte) ([32]byte, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.BlockHash(root)
}
// HasFullNode returns the corresponding value from forkchoice
func (s *Service) HasFullNode(root [32]byte) bool {
s.cfg.ForkChoiceStore.RLock()
@@ -56,6 +63,13 @@ func (s *Service) HasFullNode(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasFullNode(root)
}
// PayloadContentLookup returns the preferred payload-content lookup key from forkchoice.
func (s *Service) PayloadContentLookup(root [32]byte) ([32]byte, bool) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.PayloadContentLookup(root)
}
// ReceivedBlocksLastEpoch returns the corresponding value from forkchoice
func (s *Service) ReceivedBlocksLastEpoch() (uint64, error) {
s.cfg.ForkChoiceStore.RLock()
@@ -143,6 +157,13 @@ func (s *Service) hashForGenesisBlock(ctx context.Context, root [32]byte) ([]byt
if st.Version() < version.Bellatrix {
return nil, nil
}
if st.Version() >= version.Gloas {
h, err := st.LatestBlockHash()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block hash")
}
return bytesutil.SafeCopyBytes(h[:]), nil
}
header, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, errors.Wrap(err, "could not get latest execution payload header")


@@ -20,6 +20,7 @@ import (
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -735,6 +736,73 @@ func TestParentPayloadReady(t *testing.T) {
})
}
func TestService_ShouldIgnoreData(t *testing.T) {
service, tr := minimalTestService(t)
ctx := t.Context()
fcs := tr.fcs
zeroHash := params.BeaconConfig().ZeroHash
currentSlot := service.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Build a chain in forkchoice:
// genesis (slot 0) -> nodeA (slot 1, epoch 0) -> nodeB (slot slotsPerEpoch, epoch 1) -> nodeC (slot 2*slotsPerEpoch, epoch 2)
nodeARoot := [32]byte{1}
nodeBRoot := [32]byte{2}
nodeCRoot := [32]byte{3}
nodeASlot := primitives.Slot(1)
nodeBSlot := primitives.Slot(slotsPerEpoch) // epoch 1
nodeCSlot := primitives.Slot(2 * slotsPerEpoch) // epoch 2
stA, robA, err := prepareForkchoiceState(ctx, nodeASlot, nodeARoot, zeroHash, [32]byte{10}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]})
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stA, robA))
stB, robB, err := prepareForkchoiceState(ctx, nodeBSlot, nodeBRoot, nodeARoot, [32]byte{11}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]})
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stB, robB))
stC, robC, err := prepareForkchoiceState(ctx, nodeCSlot, nodeCRoot, nodeBRoot, [32]byte{12}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]})
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stC, robC))
// Set justified checkpoint to nodeB (epoch 1).
fcs.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{Epoch: 1, Root: nodeBRoot}))
t.Run("past epoch data is not ignored", func(t *testing.T) {
pastSlot := primitives.Slot((currentEpoch - 1) * primitives.Epoch(slotsPerEpoch))
require.Equal(t, false, service.ShouldIgnoreData(nodeARoot, pastSlot))
})
t.Run("parent not in forkchoice", func(t *testing.T) {
unknownRoot := [32]byte{99}
require.Equal(t, false, service.ShouldIgnoreData(unknownRoot, currentSlot))
})
t.Run("parent epoch at or after justified", func(t *testing.T) {
// nodeB is at epoch 1, justified is epoch 1 => parentEpoch >= justified => false
require.Equal(t, false, service.ShouldIgnoreData(nodeBRoot, currentSlot))
})
t.Run("canonical parent before justified is ignored", func(t *testing.T) {
// nodeA is at epoch 0 < justified epoch 1, and is canonical => true
require.Equal(t, true, service.ShouldIgnoreData(nodeARoot, currentSlot))
})
t.Run("non-canonical parent before justified is not ignored", func(t *testing.T) {
// Insert a fork: nodeD at slot 2 (epoch 0) branching from nodeA, not on the canonical chain.
nodeDRoot := [32]byte{4}
stD, robD, err := prepareForkchoiceState(ctx, 2, nodeDRoot, nodeARoot, [32]byte{13}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]}, &ethpb.Checkpoint{Epoch: 0, Root: zeroHash[:]})
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stD, robD))
// nodeD is at epoch 0 < justified epoch 1, but not canonical => false
require.Equal(t, false, service.ShouldIgnoreData(nodeDRoot, currentSlot))
})
}
func Test_hashForGenesisRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
@@ -752,3 +820,23 @@ func Test_hashForGenesisRoot(t *testing.T) {
require.NoError(t, err)
require.Equal(t, [32]byte{}, [32]byte(genRoot))
}
func Test_hashForGenesisRoot_Gloas(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
c := setupBeaconChain(t, beaconDB)
expectedHash := [32]byte{1, 2, 3, 4, 5}
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: expectedHash[:],
})
require.NoError(t, err)
genesis.StoreDuringTest(t, genesis.GenesisData{State: st})
genesisRoot := [32]byte{0xaa}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
genHash, err := c.hashForGenesisBlock(ctx, genesisRoot)
require.NoError(t, err)
require.Equal(t, expectedHash, [32]byte(genHash))
}


@@ -271,7 +271,7 @@ func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header in
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests, blk.Block().Slot())
if err == nil {
newPayloadValidNodeCount.Inc()
return true, nil
@@ -321,7 +321,7 @@ func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, paren
// getPayloadAttribute returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) payloadattribute.Attributer {
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot, accessRoot []byte) payloadattribute.Attributer {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
// If it is an epoch boundary then process slots to get the right
@@ -343,7 +343,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
// right proposer index pre-Fulu, either way we need to copy the state to process it.
st = st.Copy()
var err error
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, slot)
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, accessRoot, slot)
if err != nil {
log.WithError(err).Error("Could not process slots to get payload attribute")
return emptyAttri
@@ -371,66 +371,91 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
}
v := st.Version()
if v >= version.Deneb {
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return emptyAttri
}
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
ParentBeaconBlockRoot: headRoot,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
return attr
switch {
case v >= version.Gloas:
return payloadAttributesGloas(st, uint64(t.Unix()), prevRando, val.FeeRecipient[:], headRoot)
case v >= version.Deneb:
return payloadAttributesDeneb(st, uint64(t.Unix()), prevRando, val.FeeRecipient[:], headRoot)
case v >= version.Capella:
return payloadAttributesCapella(st, uint64(t.Unix()), prevRando, val.FeeRecipient[:])
case v >= version.Bellatrix:
return payloadAttributesBellatrix(uint64(t.Unix()), prevRando, val.FeeRecipient[:])
default:
log.WithField("version", version.String(v)).Error("Could not get payload attribute due to unknown state version")
return payloadattribute.EmptyWithVersion(v)
}
}
if v >= version.Capella {
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return emptyAttri
}
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
return attr
func payloadAttributesGloas(st state.BeaconState, timestamp uint64, prevRandao, feeRecipient, parentBeaconBlockRoot []byte) payloadattribute.Attributer {
withdrawals, err := st.WithdrawalsForPayload()
if err != nil {
log.WithError(err).Error("Could not get payload withdrawals to get payload attribute")
return payloadattribute.EmptyWithVersion(st.Version())
}
if v >= version.Bellatrix {
attr, err := payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
return attr
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecipient,
Withdrawals: withdrawals,
ParentBeaconBlockRoot: parentBeaconBlockRoot,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return payloadattribute.EmptyWithVersion(st.Version())
}
return attr
}
log.WithField("version", version.String(st.Version())).Error("Could not get payload attribute due to unknown state version")
return emptyAttri
func payloadAttributesDeneb(st state.BeaconState, timestamp uint64, prevRandao, feeRecipient, parentBeaconBlockRoot []byte) payloadattribute.Attributer {
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return payloadattribute.EmptyWithVersion(st.Version())
}
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecipient,
Withdrawals: withdrawals,
ParentBeaconBlockRoot: parentBeaconBlockRoot,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return payloadattribute.EmptyWithVersion(st.Version())
}
return attr
}
func payloadAttributesCapella(st state.BeaconState, timestamp uint64, prevRandao, feeRecipient []byte) payloadattribute.Attributer {
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return payloadattribute.EmptyWithVersion(st.Version())
}
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecipient,
Withdrawals: withdrawals,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return payloadattribute.EmptyWithVersion(st.Version())
}
return attr
}
func payloadAttributesBellatrix(timestamp uint64, prevRandao, feeRecipient []byte) payloadattribute.Attributer {
attr, err := payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecipient,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return payloadattribute.EmptyWithVersion(version.Bellatrix)
}
return attr
}
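
The refactor replaces the cascading if-ladder with a single switch that dispatches on the state version, highest fork first, so each helper stays single-purpose. A reduced sketch of the same dispatch shape; the version constants and return strings below are placeholders, not the real Prysm types:

```go
// Minimal sketch of fork-versioned dispatch.
package sketch

const (
	bellatrix = iota + 1
	capella
	deneb
	gloas
)

// attrVersion picks the engine payload-attributes shape for a state version.
// Cases must run highest fork first: a Gloas state also satisfies
// v >= deneb, so case order encodes fork precedence.
func attrVersion(v int) string {
	switch {
	case v >= gloas:
		return "PayloadAttributesV3 (withdrawals from WithdrawalsForPayload)"
	case v >= deneb:
		return "PayloadAttributesV3 (withdrawals from ExpectedWithdrawals)"
	case v >= capella:
		return "PayloadAttributesV2"
	case v >= bellatrix:
		return "PayloadAttributes"
	default:
		return "unknown state version"
	}
}
```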
// removeInvalidBlockAndState removes the invalid block, blob and its corresponding state from the cache and DB.


@@ -717,14 +717,14 @@ func Test_GetPayloadAttribute(t *testing.T) {
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
attr := service.getPayloadAttribute(ctx, st, 0, []byte{}, []byte{})
require.Equal(t, true, attr.IsEmpty())
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:], params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
@@ -732,7 +732,7 @@ func Test_GetPayloadAttribute(t *testing.T) {
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:], params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
}
@@ -747,7 +747,7 @@ func Test_GetPayloadAttribute_PrepareAllPayloads(t *testing.T) {
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
attr := service.getPayloadAttribute(ctx, st, 0, []byte{}, []byte{})
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
}
@@ -757,14 +757,14 @@ func Test_GetPayloadAttributeV2(t *testing.T) {
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateCapella(t, 1)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
attr := service.getPayloadAttribute(ctx, st, 0, []byte{}, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
slot := primitives.Slot(1)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:], params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
a, err := attr.Withdrawals()
@@ -775,7 +775,7 @@ func Test_GetPayloadAttributeV2(t *testing.T) {
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:], params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
@@ -809,14 +809,14 @@ func Test_GetPayloadAttributeV3(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
attr := service.getPayloadAttribute(ctx, test.st, 0, []byte{})
attr := service.getPayloadAttribute(ctx, test.st, 0, []byte{}, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, test.st, slot, params.BeaconConfig().ZeroHash[:])
attr = service.getPayloadAttribute(ctx, test.st, slot, params.BeaconConfig().ZeroHash[:], params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
a, err := attr.Withdrawals()
@@ -827,7 +827,7 @@ func Test_GetPayloadAttributeV3(t *testing.T) {
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, test.st, slot, params.BeaconConfig().ZeroHash[:])
attr = service.getPayloadAttribute(ctx, test.st, slot, params.BeaconConfig().ZeroHash[:], params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()


@@ -18,19 +18,21 @@ import (
"github.com/sirupsen/logrus"
)
func (s *Service) isNewHead(r [32]byte) bool {
func (s *Service) isNewHead(r [32]byte, full bool) bool {
s.headLock.RLock()
defer s.headLock.RUnlock()
currentHeadRoot := s.originBlockRoot
currentFull := false
if s.head != nil {
currentHeadRoot = s.headRoot()
currentFull = s.head.full
}
return r != currentHeadRoot || r == [32]byte{}
return r != currentHeadRoot || full != currentFull || r == [32]byte{}
}
func (s *Service) getStateAndBlock(ctx context.Context, r [32]byte) (state.BeaconState, interfaces.ReadOnlySignedBeaconBlock, error) {
func (s *Service) getStateAndBlock(ctx context.Context, r, h [32]byte) (state.BeaconState, interfaces.ReadOnlySignedBeaconBlock, error) {
if !s.hasBlockInInitSyncOrDB(ctx, r) {
return nil, nil, errors.New("block does not exist")
}
@@ -38,7 +40,7 @@ func (s *Service) getStateAndBlock(ctx context.Context, r [32]byte) (state.Beaco
if err != nil {
return nil, nil, err
}
headState, err := s.cfg.StateGen.StateByRoot(ctx, r)
headState, err := s.cfg.StateGen.StateByRoot(ctx, h)
if err != nil {
return nil, nil, err
}
@@ -70,7 +72,7 @@ func (s *Service) sendFCU(cfg *postBlockProcessConfig) {
return
}
// If head has not been updated and attributes are nil, we can skip the FCU.
if !s.isNewHead(cfg.headRoot) && (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) {
if !s.isNewHead(cfg.headRoot, false) && (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) {
return
}
// If we are proposing and we aim to reorg the block, we have already sent FCU with attributes on lateBlockTasks
@@ -81,7 +83,7 @@ func (s *Service) sendFCU(cfg *postBlockProcessConfig) {
go s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
if s.isNewHead(fcuArgs.headRoot) {
if s.isNewHead(fcuArgs.headRoot, false) {
if err := s.saveHead(cfg.ctx, fcuArgs.headRoot, fcuArgs.headBlock, fcuArgs.headState); err != nil {
log.WithError(err).Error("Could not save head")
}


@@ -19,23 +19,42 @@ import (
func TestService_isNewHead(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
require.Equal(t, true, service.isNewHead([32]byte{}))
// Zero root is always a new head
require.Equal(t, true, service.isNewHead([32]byte{}, false))
require.Equal(t, true, service.isNewHead([32]byte{}, true))
// Different root is a new head
service.head = &head{root: [32]byte{1}}
require.Equal(t, true, service.isNewHead([32]byte{2}))
require.Equal(t, false, service.isNewHead([32]byte{1}))
require.Equal(t, true, service.isNewHead([32]byte{2}, false))
// Same root and same full status is not a new head
require.Equal(t, false, service.isNewHead([32]byte{1}, false))
// Same root but different full status is a new head
require.Equal(t, true, service.isNewHead([32]byte{1}, true))
// Same root and both full is not a new head
service.head = &head{root: [32]byte{1}, full: true}
require.Equal(t, false, service.isNewHead([32]byte{1}, true))
// Same root, head is full but incoming is not full, is a new head
require.Equal(t, true, service.isNewHead([32]byte{1}, false))
// Nil head should use origin root
service.head = nil
service.originBlockRoot = [32]byte{3}
require.Equal(t, true, service.isNewHead([32]byte{2}))
require.Equal(t, false, service.isNewHead([32]byte{3}))
require.Equal(t, true, service.isNewHead([32]byte{2}, false))
require.Equal(t, false, service.isNewHead([32]byte{3}, false))
// Nil head with full=true is always a new head (originBlockRoot has full=false)
require.Equal(t, true, service.isNewHead([32]byte{3}, true))
}
func TestService_getHeadStateAndBlock(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
_, _, err := service.getStateAndBlock(t.Context(), [32]byte{})
_, _, err := service.getStateAndBlock(t.Context(), [32]byte{}, [32]byte{})
require.ErrorContains(t, "block does not exist", err)
blk, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{Signature: []byte{1}}))


@@ -1,13 +1,42 @@
package blockchain
import (
"context"
"math"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
consensus_blocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
payloadattribute "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
func (s *Service) waitUntilEpoch(target primitives.Epoch, secondsPerSlot uint64) error {
if slots.ToEpoch(s.CurrentSlot()) >= target {
return nil
}
ticker := slots.NewSlotTicker(s.genesisTime, secondsPerSlot)
defer ticker.Done()
for {
select {
case slot := <-ticker.C():
if slots.ToEpoch(slot) >= target {
return nil
}
case <-s.ctx.Done():
return s.ctx.Err()
}
}
}
// getLookupParentRoot returns the root that serves as the key to generate the parent state for the passed beacon block.
// If the block is based on an empty payload, or is pre-Gloas, this is the block's parent root; if it is based on a full
// payload, it is the parent hash.
@@ -43,3 +72,142 @@ func (s *Service) getLookupParentRoot(b consensus_blocks.ROBlock) ([32]byte, err
}
return parentRoot, nil
}
func (s *Service) runLatePayloadTasks() {
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("Failed to wait for initial sync")
return
}
cfg := params.BeaconConfig()
if cfg.GloasForkEpoch == math.MaxUint64 {
return
}
if err := s.waitUntilEpoch(cfg.GloasForkEpoch, cfg.SecondsPerSlot); err != nil {
return
}
offset := cfg.SlotComponentDuration(cfg.PayloadAttestationDueBPS)
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, offset, cfg.SecondsPerSlot)
defer ticker.Done()
for {
select {
case <-ticker.C():
s.latePayloadTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting late payload tasks routine")
return
}
}
}
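
The routine above waits for the Gloas fork epoch and then ticks once per slot at the payload-attestation deadline. A hedged sketch of the offset arithmetic, assuming `PayloadAttestationDueBPS` expresses the intra-slot deadline in basis points (1/10000) of a slot; the real helper is `SlotComponentDuration`:

```go
// Illustrative only; not Prysm's actual config helper.
package sketch

import "time"

func slotComponentDuration(secondsPerSlot, bps uint64) time.Duration {
	slot := time.Duration(secondsPerSlot) * time.Second
	return slot * time.Duration(bps) / 10000
}

// Example: with 6-second slots and a 5000 BPS deadline, the late-payload
// ticker fires 3s into every slot.
func exampleOffset() time.Duration {
	return slotComponentDuration(6, 5000) // 3s
}
```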
func (s *Service) checkIfProposing(st state.ReadOnlyBeaconState, slot primitives.Slot) (cache.TrackedValidator, bool) {
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(st.Slot())
fuluAndNextEpoch := st.Version() >= version.Fulu && e == stateEpoch+1
if e == stateEpoch || fuluAndNextEpoch {
return s.trackedProposer(st, slot)
}
return cache.TrackedValidator{}, false
}
// getPayloadAttributeGloas is a Gloas version of getPayloadAttribute that avoids the clutter originally needed to resolve the proposer index.
// It is guaranteed to be called for the current slot + 1, with the head state already advanced to at least the current epoch.
func (s *Service) getPayloadAttributeGloas(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot, headRoot, accessRoot []byte) payloadattribute.Attributer {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
val, proposing := s.checkIfProposing(st, slot)
if !proposing {
return emptyAttri
}
st, err := transition.ProcessSlotsIfNeeded(ctx, st, accessRoot, slot)
if err != nil {
log.WithError(err).Error("Could not process slots to get payload attribute")
return emptyAttri
}
// Get previous randao.
prevRando, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
log.WithError(err).Error("Could not get randao mix to get payload attribute")
return emptyAttri
}
// Get timestamp.
t, err := slots.StartTime(s.genesisTime, slot)
if err != nil {
log.WithError(err).Error("Could not get timestamp to get payload attribute")
return emptyAttri
}
withdrawals, err := st.WithdrawalsForPayload()
if err != nil {
log.WithError(err).Error("Could not get payload withdrawals to get payload attribute")
return emptyAttri
}
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
ParentBeaconBlockRoot: headRoot,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
return attr
}
// latePayloadTasks updates the NSC and epoch boundary caches when there is no payload in the current slot (but there is a block).
// The case where the block was also missing has already been dealt with by lateBlockTasks.
// We call FCU only if we are proposing the next slot, since the execution head is assumed not to have changed.
func (s *Service) latePayloadTasks(ctx context.Context) {
currentSlot := s.CurrentSlot()
if currentSlot != s.HeadSlot() {
// We must have already sent an FCU and updated the caches in lateBlockTasks.
return
}
r, err := s.HeadRoot(ctx)
if err != nil {
log.WithError(err).Error("Failed to get head root")
return
}
hr := [32]byte(r)
if s.payloadBeingSynced.isSyncing(hr) {
return
}
if s.HasFullNode(hr) {
return
}
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
log.WithError(err).Error("Failed to get head state")
return
}
if !s.inRegularSync() {
return
}
attr := s.getPayloadAttributeGloas(ctx, st, currentSlot+1, r, r)
if attr == nil || attr.IsEmpty() {
return
}
beaconLatePayloadTaskTriggeredTotal.Inc()
// Head is the empty block.
bh, err := st.LatestBlockHash()
if err != nil {
log.WithError(err).Error("Could not get latest block hash to notify engine")
return
}
pid, err := s.notifyForkchoiceUpdateGloas(ctx, bh, attr)
if err != nil {
log.WithError(err).Error("Could not notify forkchoice update")
return
}
if pid == nil {
log.Warn("Received nil payload ID from forkchoice update.")
return
}
var pId [8]byte
copy(pId[:], pid[:])
s.cfg.PayloadIDCache.Set(currentSlot+1, hr, pId)
}


@@ -75,7 +75,7 @@ func prepareGloasForkchoiceState(
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
}
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
@@ -146,7 +146,7 @@ func testGloasState(t *testing.T, slot primitives.Slot, parentRoot [32]byte, blo
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
@@ -446,6 +446,37 @@ func TestPostPayloadHeadUpdate_NotHead(t *testing.T) {
require.NoError(t, s.postPayloadHeadUpdate(ctx, envelope, st, root, headRoot[:]))
}
func TestPostPayloadHeadUpdate_SetsHeadFull(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
root := bytesutil.ToBytes32([]byte("root1"))
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, blk := testGloasState(t, 1, params.BeaconConfig().ZeroHash, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
signed, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s.head = &head{root: root, block: signed, state: st, slot: 1}
require.Equal(t, false, s.head.full)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: root[:],
Payload: &enginev1.ExecutionPayloadDeneb{BlockHash: blockHash[:], ParentHash: make([]byte, 32)},
Slot: 1,
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
require.NoError(t, s.postPayloadHeadUpdate(ctx, envelope, st, root, root[:]))
s.headLock.RLock()
require.Equal(t, true, s.head.full)
s.headLock.RUnlock()
}
func TestGetLookupParentRoot_PreGloas(t *testing.T) {
service, _ := minimalTestService(t)
@@ -616,6 +647,71 @@ func TestGetLookupParentRoot_GloasParentPreForkEpoch(t *testing.T) {
require.Equal(t, parentRoot, got)
}
func TestLatePayloadTasks_ReturnsEarlyWhenBlockLate(t *testing.T) {
logHook := logTest.NewGlobal()
service, tr := setupGloasService(t, &mockExecution.EngineClient{})
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, params.BeaconConfig().ZeroHash, blockHash)
base.LatestBlockHash = blockHash[:]
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
headRoot := bytesutil.ToBytes32([]byte("headroot"))
service.head = &head{
root: headRoot,
state: st,
slot: 1,
}
// Set genesis time so CurrentSlot > HeadSlot.
service.SetGenesisTime(time.Now().Add(-2 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second))
service.latePayloadTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "Could not notify forkchoice update")
// No payload ID should have been cached.
_, has := service.cfg.PayloadIDCache.PayloadID(service.CurrentSlot()+1, headRoot)
require.Equal(t, false, has)
}
func TestLatePayloadTasks_SendsFCU(t *testing.T) {
logHook := logTest.NewGlobal()
resetCfg := features.InitWithReset(&features.Flags{
PrepareAllPayloads: true,
})
defer resetCfg()
pid := &enginev1.PayloadIDBytes{1, 2, 3, 4, 5, 6, 7, 8}
service, tr := setupGloasService(t, &mockExecution.EngineClient{PayloadIDBytes: pid})
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, blk := testGloasState(t, 1, params.BeaconConfig().ZeroHash, blockHash)
base.LatestBlockHash = blockHash[:]
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
signed, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
headRoot := bytesutil.ToBytes32([]byte("headroot"))
service.head = &head{
root: headRoot,
block: signed,
state: st,
slot: 1,
}
// CurrentSlot == HeadSlot == 1: place genesis 1.5 slots ago so we're solidly in slot 1.
service.SetGenesisTime(time.Now().Add(-3 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second / 2))
service.SetForkChoiceGenesisTime(service.genesisTime)
service.latePayloadTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "Could not notify forkchoice update")
require.LogsDoNotContain(t, logHook, "Could not get")
// Payload ID should have been cached.
cachedPid, has := service.cfg.PayloadIDCache.PayloadID(service.CurrentSlot()+1, headRoot)
require.Equal(t, true, has)
require.Equal(t, primitives.PayloadID(pid[:]), cachedPid)
}
func TestLateBlockTasks_GloasFCU(t *testing.T) {
logHook := logTest.NewGlobal()
resetCfg := features.InitWithReset(&features.Flags{
@@ -645,4 +741,217 @@ func TestLateBlockTasks_GloasFCU(t *testing.T) {
service.lateBlockTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "could not perform late block tasks")
// Payload ID should have been cached by the Gloas FCU path.
cachedPid, has := service.cfg.PayloadIDCache.PayloadID(service.CurrentSlot()+1, headRoot)
require.Equal(t, true, has)
require.Equal(t, primitives.PayloadID(pid[:]), cachedPid)
}
// TestSaveHead_GloasForkBoundary_PreforkBidForcesEmptyHead verifies that saveHead does not
// treat the head as "full" when the latest execution payload bid was issued in a pre-fork epoch.
// This guards against the Fulu->Gloas upgrade-seeded bid (bid.BlockHash == latestBlockHash,
// bid.Slot == 0) causing a spurious full=true head before any real Gloas bid has been processed.
func TestSaveHead_GloasForkBoundary_PreforkBidForcesEmptyHead(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 1
cfg.InitializeForkSchedule()
params.OverrideBeaconConfig(cfg)
service, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
// Create a Gloas state where IsParentBlockFull()==true (bid.BlockHash == LatestBlockHash)
// but bid.Slot is 0 (epoch 0, pre-fork). This mimics the upgrade-seeded state.
base, blk := testGloasState(t, 1, parentRoot, blockHash)
base.LatestBlockHash = blockHash[:]
// bid.Slot defaults to 0, which is before GloasForkEpoch=1.
// Set a valid initial head so saveHead's headBlock() call does not panic.
// We do NOT insert the old block into forkchoice because insertGloasBlock
// would claim the tree root slot; the target block (parentRoot=ZeroHash) must
// be the first node inserted so it can become the tree root.
oldBlk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{})
oldSigned, err2 := blocks.NewSignedBeaconBlock(oldBlk)
require.NoError(t, err2)
oldSt, err2 := state_native.InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
Slot: 0,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
LatestBlockHeader: &ethpb.BeaconBlockHeader{ParentRoot: make([]byte, 32), StateRoot: make([]byte, 32), BodyRoot: make([]byte, 32)},
Eth1Data: &ethpb.Eth1Data{DepositRoot: make([]byte, 32), BlockHash: make([]byte, 32)},
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{BlockHash: make([]byte, 32), ParentBlockHash: make([]byte, 32), ParentBlockRoot: make([]byte, 32), PrevRandao: make([]byte, 32), FeeRecipient: make([]byte, 20), BlobKzgCommitments: [][]byte{make([]byte, 48)}},
BuilderPendingPayments: func() []*ethpb.BuilderPendingPayment {
pp := make([]*ethpb.BuilderPendingPayment, 64)
for i := range pp {
pp[i] = &ethpb.BuilderPendingPayment{Withdrawal: &ethpb.BuilderPendingWithdrawal{FeeRecipient: make([]byte, 20)}}
}
return pp
}(),
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
})
require.NoError(t, err2)
oldRoot := bytesutil.ToBytes32([]byte("oldroot1"))
service.head = &head{root: oldRoot, block: oldSigned, state: oldSt, slot: 0}
insertGloasBlock(t, service, base, blk, blockRoot)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
// Verify precondition: IsParentBlockFull() is true.
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.Equal(t, true, full, "precondition: IsParentBlockFull must be true")
// Verify guard precondition: bid.Slot is pre-fork.
bid, err := st.LatestExecutionPayloadBid()
require.NoError(t, err)
isPrefork := slots.ToEpoch(bid.Slot()) < params.BeaconConfig().GloasForkEpoch
require.Equal(t, true, isPrefork, "precondition: bid.Slot must be pre-fork")
ssigned, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
// saveHead should NOT mark the head as full because bid.Slot < GloasForkEpoch.
require.NoError(t, service.saveHead(ctx, blockRoot, ssigned, st))
service.headLock.RLock()
headFull := service.head.full
service.headLock.RUnlock()
require.Equal(t, false, headFull, "head must not be full for upgrade-seeded bid")
}
// TestSaveHead_GloasForkBoundary_PostforkBidSetsFullHead verifies that saveHead correctly
// marks the head as full when the latest bid is from a post-fork epoch.
func TestSaveHead_GloasForkBoundary_PostforkBidSetsFullHead(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 1
cfg.InitializeForkSchedule()
params.OverrideBeaconConfig(cfg)
service, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
forkSlot, err := slots.EpochStart(params.BeaconConfig().GloasForkEpoch)
require.NoError(t, err)
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
// Set a valid initial head so saveHead's headBlock() call does not panic.
// Do NOT use insertGloasBlock for the old block — the target block must be
// the first node inserted so it can claim the tree root (parentRoot=ZeroHash).
oldBlk2 := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{})
oldSigned2, err2 := blocks.NewSignedBeaconBlock(oldBlk2)
require.NoError(t, err2)
oldSt2, err2 := state_native.InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
Slot: 0,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
LatestBlockHeader: &ethpb.BeaconBlockHeader{ParentRoot: make([]byte, 32), StateRoot: make([]byte, 32), BodyRoot: make([]byte, 32)},
Eth1Data: &ethpb.Eth1Data{DepositRoot: make([]byte, 32), BlockHash: make([]byte, 32)},
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{BlockHash: make([]byte, 32), ParentBlockHash: make([]byte, 32), ParentBlockRoot: make([]byte, 32), PrevRandao: make([]byte, 32), FeeRecipient: make([]byte, 20), BlobKzgCommitments: [][]byte{make([]byte, 48)}},
BuilderPendingPayments: func() []*ethpb.BuilderPendingPayment {
pp := make([]*ethpb.BuilderPendingPayment, 64)
for i := range pp {
pp[i] = &ethpb.BuilderPendingPayment{Withdrawal: &ethpb.BuilderPendingWithdrawal{FeeRecipient: make([]byte, 20)}}
}
return pp
}(),
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]primitives.ValidatorIndex, 64),
})
require.NoError(t, err2)
oldRoot2 := bytesutil.ToBytes32([]byte("oldroot2"))
service.head = &head{root: oldRoot2, block: oldSigned2, state: oldSt2, slot: 0}
base, blk := testGloasState(t, forkSlot+1, parentRoot, blockHash)
base.LatestBlockHash = blockHash[:]
// Set bid.Slot to a post-fork epoch slot.
base.LatestExecutionPayloadBid.Slot = forkSlot + 1
insertGloasBlock(t, service, base, blk, blockRoot)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
// Verify preconditions.
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.Equal(t, true, full, "precondition: IsParentBlockFull must be true")
bid, err := st.LatestExecutionPayloadBid()
require.NoError(t, err)
isPostfork := slots.ToEpoch(bid.Slot()) >= params.BeaconConfig().GloasForkEpoch
require.Equal(t, true, isPostfork, "precondition: bid.Slot must be post-fork")
ssigned, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
// saveHead SHOULD mark the head as full because bid.Slot >= GloasForkEpoch.
require.NoError(t, service.saveHead(ctx, blockRoot, ssigned, st))
service.headLock.RLock()
headFull := service.head.full
service.headLock.RUnlock()
require.Equal(t, true, headFull, "head must be full for real post-fork bid")
}
// TestLateBlockTasks_GloasForkBoundary_PreforkBidUsesHeadRoot verifies that lateBlockTasks
// uses headRoot (not LatestBlockHash) as the accessRoot when the bid is pre-fork epoch.
// Without this guard, the upgrade-seeded bid would cause lateBlockTasks to use the wrong
// access root for the next-slot cache.
func TestLateBlockTasks_GloasForkBoundary_PreforkBidUsesHeadRoot(t *testing.T) {
logHook := logTest.NewGlobal()
resetCfg := features.InitWithReset(&features.Flags{
PrepareAllPayloads: true,
})
defer resetCfg()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 1
cfg.InitializeForkSchedule()
params.OverrideBeaconConfig(cfg)
pid := &enginev1.PayloadIDBytes{1, 2, 3, 4, 5, 6, 7, 8}
service, tr := setupGloasService(t, &mockExecution.EngineClient{PayloadIDBytes: pid})
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, params.BeaconConfig().ZeroHash, blockHash)
// Make IsParentBlockFull() true: bid.BlockHash == LatestBlockHash.
base.LatestBlockHash = blockHash[:]
// bid.Slot is 0 (pre-fork epoch): the epoch guard should prevent using LatestBlockHash as accessRoot.
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
headRoot := bytesutil.ToBytes32([]byte("headroot"))
service.head = &head{
root: headRoot,
state: st,
slot: 1,
}
// Trigger late block logic: CurrentSlot > HeadSlot.
service.SetGenesisTime(time.Now().Add(-2 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second))
service.SetForkChoiceGenesisTime(service.genesisTime)
service.lateBlockTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "could not perform late block tasks")
}


@@ -50,6 +50,7 @@ type head struct {
block interfaces.ReadOnlySignedBeaconBlock // current head block.
state state.BeaconState // current head state.
slot primitives.Slot // the head block slot number
full bool // whether the head state includes the execution payload (full) or only the beacon block (empty), post-Gloas
optimistic bool // optimistic status when saved head
}
@@ -60,8 +61,24 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
ctx, span := trace.StartSpan(ctx, "blockChain.saveHead")
defer span.End()
// Pre-Gloas we use empty for the head because we still key states by block root
var full bool
var err error
if headState.Version() >= version.Gloas {
gloasFirstSlot, err := slots.EpochStart(params.BeaconConfig().GloasForkEpoch)
if err != nil {
return errors.Wrap(err, "could not compute gloas first slot")
}
if headState.Slot() > gloasFirstSlot {
full, err = headState.IsParentBlockFull()
if err != nil {
return errors.Wrap(err, "could not determine if head is full or not")
}
}
}
// Do nothing if head hasn't changed.
if !s.isNewHead(newHeadRoot) {
if !s.isNewHead(newHeadRoot, full) {
return nil
}
@@ -157,6 +174,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
state: headState,
optimistic: isOptimistic,
slot: headBlock.Block().Slot(),
full: full,
}
if err := s.setHead(newHead); err != nil {
return errors.Wrap(err, "could not set head")
@@ -217,6 +235,7 @@ func (s *Service) setHead(newHead *head) error {
root: newHead.root,
block: bCp,
state: newHead.state.Copy(),
full: newHead.full,
optimistic: newHead.optimistic,
slot: newHead.slot,
}
@@ -333,13 +352,16 @@ func (s *Service) notifyNewHeadEvent(
if currentDutyDependentRoot == [32]byte{} {
currentDutyDependentRoot = s.originBlockRoot
}
previousDutyDependentRoot := currentDutyDependentRoot
var previousDutyDependentRoot [32]byte
if currEpoch > 0 {
previousDutyDependentRoot, err = s.DependentRoot(currEpoch.Sub(1))
if err != nil {
return errors.Wrap(err, "could not get duty dependent root")
}
}
if previousDutyDependentRoot == [32]byte{} {
previousDutyDependentRoot = s.originBlockRoot
}
isOptimistic, err := s.IsOptimistic(ctx)
if err != nil {


@@ -213,7 +213,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: make([]byte, 32),
PreviousDutyDependentRoot: srv.originBlockRoot[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
@@ -243,11 +243,35 @@ func Test_notifyNewHeadEvent(t *testing.T) {
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
PreviousDutyDependentRoot: srv.originBlockRoot[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("previous dependent root zero hash falls back to origin", func(t *testing.T) {
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{0xab}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, []byte{2}, newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
require.Equal(t, true, ok)
// DependentRoot(0) returns zero hash since the forkchoice tree is sparse.
// The fix ensures it falls back to originBlockRoot instead of sending zeros.
assert.DeepEqual(t, srv.originBlockRoot[:], eventHead.PreviousDutyDependentRoot)
assert.DeepEqual(t, srv.originBlockRoot[:], eventHead.CurrentDutyDependentRoot)
})
}
func TestRetrieveHead_ReadOnly(t *testing.T) {


@@ -77,10 +77,10 @@ func VerifyBlobKZGProofBatch(blobs [][]byte, commitments [][]byte, proofs [][]by
return fmt.Errorf("blobs len (%d) differs from expected (%d)", len(blobs[i]), len(ckzg4844.Blob{}))
}
if len(commitments[i]) != len(ckzg4844.Bytes48{}) {
return fmt.Errorf("commitments len (%d) differs from expected (%d)", len(commitments[i]), len(ckzg4844.Blob{}))
return fmt.Errorf("commitments len (%d) differs from expected (%d)", len(commitments[i]), len(ckzg4844.Bytes48{}))
}
if len(proofs[i]) != len(ckzg4844.Bytes48{}) {
return fmt.Errorf("proofs len (%d) differs from expected (%d)", len(proofs[i]), len(ckzg4844.Blob{}))
return fmt.Errorf("proofs len (%d) differs from expected (%d)", len(proofs[i]), len(ckzg4844.Bytes48{}))
}
ckzgBlobs[i] = ckzg4844.Blob(blobs[i])
ckzgCommitments[i] = ckzg4844.Bytes48(commitments[i])


@@ -234,6 +234,25 @@ var (
Help: "The maximum number of blobs allowed in a block.",
},
)
beaconExecutionPayloadEnvelopeValidTotal = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_execution_payload_envelope_valid_total",
Help: "Count the number of execution payload envelopes that were processed successfully.",
})
beaconExecutionPayloadEnvelopeInvalidTotal = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_execution_payload_envelope_invalid_total",
Help: "Count the number of execution payload envelopes that failed processing.",
})
beaconExecutionPayloadEnvelopeProcessingDurationSeconds = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_execution_payload_envelope_processing_duration_seconds",
Help: "Captures end-to-end processing time for execution payload envelopes.",
Buckets: prometheus.DefBuckets,
},
)
beaconLatePayloadTaskTriggeredTotal = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_late_payload_task_triggered_total",
Help: "Count the number of times late payload tasks fired.",
})
)
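
The duration histogram pairs naturally with a prometheus timer at the top of the envelope-processing path. A sketch of that wiring, assuming this file's existing prometheus import; the wrapper and its `process` callback are hypothetical, while the three metric variables are the ones declared above and `prometheus.NewTimer` is the standard client_golang helper:

```go
// Hedged sketch; not the actual call site.
func processEnvelopeWithMetrics(process func() error) error {
	timer := prometheus.NewTimer(beaconExecutionPayloadEnvelopeProcessingDurationSeconds)
	defer timer.ObserveDuration() // records end-to-end processing time
	if err := process(); err != nil {
		beaconExecutionPayloadEnvelopeInvalidTotal.Inc()
		return err
	}
	beaconExecutionPayloadEnvelopeValidTotal.Inc()
	return nil
}
```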
// reportSlotMetrics reports slot related metrics.


@@ -96,6 +96,15 @@ func WithTrackedValidatorsCache(c *cache.TrackedValidatorsCache) Option {
}
}
// WithProposerPreferencesCache sets the proposer preferences cache used to
// look up fee recipient and gas limit from Gloas gossip preferences.
func WithProposerPreferencesCache(c *cache.ProposerPreferencesCache) Option {
return func(s *Service) error {
s.cfg.ProposerPreferencesCache = c
return nil
}
}
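
A sketch of supplying the cache at service construction, following the file's existing functional-option pattern. `NewService`'s exact signature and a `cache.NewProposerPreferencesCache` constructor are assumptions here, not verified APIs:

```go
// Illustrative wiring only.
func newChainServiceWithPrefs(ctx context.Context) (*Service, error) {
	// Assumed constructor; pass whatever cache instance the caller owns.
	prefs := cache.NewProposerPreferencesCache()
	return NewService(ctx,
		WithProposerPreferencesCache(prefs),
	)
}
```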
// WithAttestationCache for attestation lifecycle after chain inclusion.
func WithAttestationCache(c *cache.AttestationCache) Option {
return func(s *Service) error {


@@ -108,7 +108,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
}
if cfg.roblock.Version() < version.Gloas {
s.sendFCU(cfg)
} else if s.isNewHead(cfg.headRoot) {
} else if s.isNewHead(cfg.headRoot, false) { // We reach this only when the incoming block is head.
if err := s.saveHead(ctx, cfg.headRoot, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Could not save head")
}
@@ -135,7 +135,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
var err error
preStateVersion := st.Version()
switch preStateVersion {
case version.Phase0, version.Altair:
case version.Phase0, version.Altair, version.Gloas:
default:
preStateHeader, err = st.LatestExecutionPayloadHeader()
if err != nil {
@@ -145,7 +145,112 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityChecker) error {
// applyPayloadIfNeeded applies the parent block's execution payload envelope to
// preState when the current block's bid indicates it built on a full parent.
func (s *Service) applyPayloadIfNeeded(ctx context.Context, b interfaces.ReadOnlyBeaconBlock, parentRoot [32]byte, preState state.BeaconState) error {
if b.Version() < version.Gloas || parentRoot == [32]byte{} {
return nil
}
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return errors.Wrapf(err, "could not get parent block with root %#x", parentRoot)
}
if parentBlock.Version() < version.Gloas {
return nil
}
sb, err := b.Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get execution payload bid for block")
}
if sb == nil || sb.Message == nil {
return fmt.Errorf("missing execution payload bid for block at slot %d", b.Slot())
}
parentBid, err := parentBlock.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrapf(err, "could not get execution payload bid for parent block with root %#x", parentRoot)
}
if parentBid == nil || parentBid.Message == nil {
return fmt.Errorf("missing execution payload bid for parent block with root %#x", parentRoot)
}
if !bytes.Equal(sb.Message.ParentBlockHash, parentBid.Message.BlockHash) {
return nil
}
signedEnvelope, err := s.cfg.BeaconDB.ExecutionPayloadEnvelope(ctx, parentRoot)
if err != nil {
return errors.Wrapf(err, "could not get execution payload envelope for parent block with root %#x", parentRoot)
}
if signedEnvelope == nil || signedEnvelope.Message == nil {
return nil
}
envelope, err := consensusblocks.WrappedROBlindedExecutionPayloadEnvelope(signedEnvelope.Message)
if err != nil {
return errors.Wrapf(err, "could not wrap blinded execution payload envelope for parent block with root %#x", parentRoot)
}
return gloas.ProcessBlindedExecutionPayload(ctx, preState, parentBlock.Block().StateRoot(), envelope)
}
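
The key predicate above is "did this block build on a full parent": the child bid's parent block hash must match the parent bid's block hash, in which case the parent's envelope has to be applied to the pre-state first. A distilled sketch with simplified stand-in types:

```go
// Hedged sketch of the built-on-full-parent check; not the real bid types.
package sketch

import "bytes"

type bid struct {
	BlockHash       []byte
	ParentBlockHash []byte
}

// builtOnFullParent reports whether the child's bid commits to the parent's
// payload hash, i.e. the parent envelope must be applied before the child.
func builtOnFullParent(child, parent bid) bool {
	return bytes.Equal(child.ParentBlockHash, parent.BlockHash)
}
```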
// getBatchPrestate returns the pre-state to apply to the first beacon block in the batch, and reports whether it already applied the first envelope to it.
func (s *Service) getBatchPrestate(ctx context.Context, b consensusblocks.ROBlock, envelopes []interfaces.ROSignedExecutionPayloadEnvelope) (state.BeaconState, bool, error) {
if len(envelopes) == 0 || b.Version() < version.Gloas {
blockPreState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, b.Block().ParentRoot())
if err != nil {
return nil, false, errors.Wrap(err, "could not get block pre state")
}
return blockPreState, false, nil
}
parentRoot := b.Block().ParentRoot()
full, err := consensusblocks.BlockBuiltOnEnvelope(envelopes[0], b)
if err != nil {
return nil, false, errors.Wrap(err, "could not check if block builds on envelope")
}
blockPreState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get block pre state")
}
if !full {
return blockPreState, false, nil
}
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent block")
}
if s.cfg.BeaconDB.HasExecutionPayloadEnvelope(ctx, parentRoot) {
// The parent envelope was already saved by a previous batch but the
// replayed state may not include it (replay skips the last block's
// envelope). Load the blinded form from DB and apply it.
blindedEnv, err := s.cfg.BeaconDB.ExecutionPayloadEnvelope(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not load parent blinded envelope from DB")
}
wrappedEnv, err := consensusblocks.WrappedROBlindedExecutionPayloadEnvelope(blindedEnv.Message)
if err != nil {
return nil, false, errors.Wrap(err, "could not wrap blinded envelope")
}
if err := gloas.ProcessBlindedExecutionPayload(ctx, blockPreState, parentBlock.Block().StateRoot(), wrappedEnv); err != nil {
return nil, false, errors.Wrap(err, "could not apply parent blinded envelope from DB")
}
return blockPreState, true, nil
}
env, err := envelopes[0].Envelope()
if err != nil {
return nil, false, err
}
// notify the engine of the new envelope
if _, err := s.notifyNewEnvelope(ctx, blockPreState, env); err != nil {
return nil, false, err
}
if err := gloas.ProcessBlindedExecutionPayload(ctx, blockPreState, parentBlock.Block().StateRoot(), env); err != nil {
return nil, false, err
}
return blockPreState, true, nil
}
type versionAndHeader struct {
version int
header interfaces.ExecutionData
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, envelopes []interfaces.ROSignedExecutionPayloadEnvelope, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -159,16 +264,35 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
b := blks[0].Block()
// Retrieve incoming block's pre state.
if err := s.verifyBlkPreState(ctx, b.ParentRoot()); err != nil {
parentRoot := b.ParentRoot()
if err := s.verifyBlkPreState(ctx, parentRoot); err != nil {
return err
}
preState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, b.ParentRoot())
preState, applied, err := s.getBatchPrestate(ctx, blks[0], envelopes)
if err != nil {
return err
}
if preState == nil || preState.IsNil() {
return fmt.Errorf("nil pre state for slot %d", b.Slot())
}
var eidx int
var br [32]byte
sigSet := bls.NewSet()
if applied {
eidx = 1
envSigSet, err := gloas.ExecutionPayloadEnvelopeSignatureBatch(preState, envelopes[0])
if err != nil {
return err
}
sigSet.Join(envSigSet)
}
if eidx < len(envelopes) {
env, err := envelopes[eidx].Envelope()
if err != nil {
return err
}
br = env.BeaconBlockRoot()
}
// Fill in missing blocks
if err := s.fillInForkChoiceMissingBlocks(ctx, blks[0], preState.FinalizedCheckpoint(), preState.CurrentJustifiedCheckpoint()); err != nil {
@@ -177,11 +301,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
jCheckpoints := make([]*ethpb.Checkpoint, len(blks))
fCheckpoints := make([]*ethpb.Checkpoint, len(blks))
sigSet := bls.NewSet()
type versionAndHeader struct {
version int
header interfaces.ExecutionData
}
preVersionAndHeaders := make([]*versionAndHeader, len(blks))
postVersionAndHeaders := make([]*versionAndHeader, len(blks))
var set *bls.SignatureBatch
@@ -203,6 +322,23 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
if err != nil {
return invalidBlock{error: err}
}
if b.Root() == br && eidx < len(envelopes) {
envSigSet, err := gloas.ProcessExecutionPayloadWithDeferredSig(ctx, preState, b.Block().StateRoot(), envelopes[eidx])
if err != nil {
return err
}
sigSet.Join(envSigSet)
eidx++
if eidx < len(envelopes) {
nextEnv, err := envelopes[eidx].Envelope()
if err != nil {
return err
}
br = nextEnv.BeaconBlockRoot()
} else {
br = [32]byte{}
}
}
// Save potential boundary states.
if slots.IsEpochStart(preState.Slot()) {
boundaries[b.Root()] = preState.Copy()
@@ -234,56 +370,9 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return errors.New("batch block signature verification failed")
}
// blocks have been verified, save them and call the engine
pendingNodes, isValidPayload, err := s.notifyEngineAndSaveData(ctx, blks, envelopes, avs, preVersionAndHeaders, postVersionAndHeaders, jCheckpoints, fCheckpoints)
if err != nil {
return err
}
// Save boundary states that will be useful for forkchoice
for r, st := range boundaries {
@@ -298,6 +387,15 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
// Insert all nodes to forkchoice
if applied {
env, err := envelopes[0].Envelope()
if err != nil {
return err
}
if err := s.cfg.ForkChoiceStore.InsertPayload(env); err != nil {
return errors.Wrap(err, "could not insert first payload in batch to forkchoice")
}
}
if err := s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes); err != nil {
return errors.Wrap(err, "could not insert batch to forkchoice")
}
@@ -310,6 +408,102 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) notifyEngineAndSaveData(
ctx context.Context,
blks []consensusblocks.ROBlock,
envelopes []interfaces.ROSignedExecutionPayloadEnvelope,
avs das.AvailabilityChecker,
preVersionAndHeaders []*versionAndHeader,
postVersionAndHeaders []*versionAndHeader,
jCheckpoints []*ethpb.Checkpoint,
fCheckpoints []*ethpb.Checkpoint,
) ([]*forkchoicetypes.BlockAndCheckpoints, bool, error) {
span := trace.FromContext(ctx)
pendingNodes := make([]*forkchoicetypes.BlockAndCheckpoints, len(blks))
var isValidPayload bool
var err error
envMap := make(map[[32]byte]int, len(envelopes))
for i, e := range envelopes {
env, err := e.Envelope()
if err != nil {
return nil, false, err
}
envMap[env.BeaconBlockRoot()] = i
}
for i, b := range blks {
root := b.Root()
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
JustifiedCheckpoint: jCheckpoints[i],
FinalizedCheckpoint: fCheckpoints[i]}
if b.Version() < version.Gloas {
isValidPayload, err = s.notifyNewPayload(ctx,
postVersionAndHeaders[i].version,
postVersionAndHeaders[i].header, b)
if err != nil {
return nil, false, s.handleInvalidExecutionError(ctx, err, root, b.Block().ParentRoot(), [32]byte(postVersionAndHeaders[i].header.ParentHash()))
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preVersionAndHeaders[i].version,
preVersionAndHeaders[i].header, b); err != nil {
return nil, false, err
}
}
} else {
idx, ok := envMap[root]
if ok {
env, err := envelopes[idx].Envelope()
if err != nil {
return nil, false, err
}
isValidPayload, err = s.notifyNewEnvelopeFromBlock(ctx, b, env)
if err != nil {
return nil, false, errors.Wrap(err, "could not notify new envelope from block")
}
args.HasPayload = true
bh := env.BlockHash()
if err := s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: b.Block().Slot(),
Root: bh[:],
}); err != nil {
tracing.AnnotateError(span, err)
return nil, false, err
}
}
}
if err := s.areSidecarsAvailable(ctx, avs, b); err != nil {
return nil, false, errors.Wrapf(err, "could not validate sidecar availability for block %#x at slot %d", b.Root(), b.Block().Slot())
}
pendingNodes[i] = args
if err := s.saveInitSyncBlock(ctx, root, b); err != nil {
tracing.AnnotateError(span, err)
return nil, false, err
}
if err := s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: b.Block().Slot(),
Root: root[:],
}); err != nil {
tracing.AnnotateError(span, err)
return nil, false, err
}
if i > 0 && jCheckpoints[i].Epoch > jCheckpoints[i-1].Epoch {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, jCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return nil, false, err
}
}
if i > 0 && fCheckpoints[i].Epoch > fCheckpoints[i-1].Epoch {
if err := s.updateFinalized(ctx, fCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return nil, false, err
}
}
}
return pendingNodes, isValidPayload, nil
}
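Reviewer note: the envelope matching above is a one-shot index keyed by the beacon block root each envelope commits to, probed once per block in the batch. A minimal standalone sketch of that pattern, using local stand-in types rather than Prysm's real ROBlock and ROSignedExecutionPayloadEnvelope interfaces:

package main

import "fmt"

// Stand-in types for illustration only; the real code works on Prysm's
// block and envelope interfaces.
type block struct{ root [32]byte }
type envelope struct{ beaconBlockRoot [32]byte }

// matchEnvelopes mirrors the envMap pattern in notifyEngineAndSaveData:
// build a root->index map once, then probe it once per block.
func matchEnvelopes(blks []block, envs []envelope) map[int]int {
	envMap := make(map[[32]byte]int, len(envs))
	for i, e := range envs {
		envMap[e.beaconBlockRoot] = i
	}
	matched := make(map[int]int)
	for i, b := range blks {
		if j, ok := envMap[b.root]; ok {
			matched[i] = j // block i delivered the payload in envelope j
		}
	}
	return matched
}

func main() {
	blks := []block{{root: [32]byte{1}}, {root: [32]byte{2}}}
	envs := []envelope{{beaconBlockRoot: [32]byte{2}}}
	fmt.Println(matchEnvelopes(blks, envs)) // map[1:0]
}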
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityChecker, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
@@ -431,7 +625,7 @@ func (s *Service) updateCachesAndEpochBoundary(ctx context.Context, currentSlot
// Epoch boundary tasks: it copies the headState and updates the epoch boundary
// caches. The caller of this function must not hold a lock in forkchoice store.
func (s *Service) handleEpochBoundary(ctx context.Context, slot primitives.Slot, headState state.ReadOnlyBeaconState, blockRoot []byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
defer span.End()
// return early if we are advancing to a past epoch
@@ -492,7 +686,7 @@ func (s *Service) handleBlockPayloadAttestations(ctx context.Context, blk interf
if len(atts) == 0 {
return nil
}
committee, err := st.PayloadCommitteeReadOnly(blk.Slot() - 1)
if err != nil {
return err
}
@@ -1022,9 +1216,11 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
headRoot := s.headRoot()
headState := s.headState(ctx)
s.headLock.RUnlock()
var accessRoot [32]byte
isFull, err := headState.IsParentBlockFull()
gloasFirstSlot, _ := slots.EpochStart(params.BeaconConfig().GloasForkEpoch)
if err != nil || !isFull || headState.Slot() <= gloasFirstSlot {
accessRoot = headRoot
} else {
accessRoot, err = headState.LatestBlockHash()
@@ -1041,7 +1237,7 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
return
}
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:], accessRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
return
@@ -1053,32 +1249,35 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve latest block hash")
return
}
id, err := s.notifyForkchoiceUpdateGloas(ctx, bh, attribute)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
if id != nil {
s.cfg.PayloadIDCache.Set(s.CurrentSlot()+1, headRoot, [8]byte(*id))
}
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
}
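Reviewer note: the accessRoot selection added to lateBlockTasks keys the payload-attribute lookup either by the head beacon block root (pre-Gloas, empty parent, or at the fork boundary slot) or by the head state's latest execution block hash. A hedged sketch of just that branch; all inputs here are stand-ins for values the real code reads from headState and the configured Gloas fork epoch:

package main

import "fmt"

// pickAccessRoot sketches the branch in lateBlockTasks. The real code derives
// parentFull from headState.IsParentBlockFull(), slot from the head state, and
// gloasFirstSlot from params.BeaconConfig().GloasForkEpoch.
func pickAccessRoot(headRoot, latestBlockHash [32]byte, parentFull bool, slot, gloasFirstSlot uint64) [32]byte {
	if !parentFull || slot <= gloasFirstSlot {
		return headRoot
	}
	return latestBlockHash
}

func main() {
	head, hash := [32]byte{0xaa}, [32]byte{0xbb}
	a := pickAccessRoot(head, hash, true, 100, 64)
	b := pickAccessRoot(head, hash, false, 100, 64)
	fmt.Printf("%x %x\n", a[0], b[0]) // bb aa
}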

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"slices"
@@ -22,6 +23,7 @@ import (
mathutil "github.com/OffchainLabs/prysm/v7/math"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
@@ -44,7 +46,7 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig) (*fcuConfig, error) {
if err != nil {
return nil, err
}
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:], cfg.headRoot[:])
return fcuArgs, nil
}
@@ -89,7 +91,7 @@ func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]
// fcuArgsNonCanonicalBlock returns the arguments to the FCU call when the
// incoming block is non-canonical, that is, based on the head root.
func (s *Service) fcuArgsNonCanonicalBlock(cfg *postBlockProcessConfig) (*fcuConfig, error) {
headState, headBlock, err := s.getStateAndBlock(cfg.ctx, cfg.headRoot, cfg.headRoot)
if err != nil {
return nil, err
}
@@ -387,6 +389,7 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
return err
}
root := signed.Block().ParentRoot()
child := signed
// As long as parent node is not in fork choice store, and parent node is in DB.
for !s.cfg.ForkChoiceStore.HasNode(root) && s.cfg.BeaconDB.HasBlock(ctx, root) {
b, err := s.getBlock(ctx, root)
@@ -400,10 +403,33 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed inte
if err != nil {
return err
}
hasPayload := false
if roblock.Version() >= version.Gloas {
sbid, err := child.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrapf(err, "could not get execution payload bid for block at slot %d", child.Block().Slot())
}
if sbid == nil || sbid.Message == nil {
return fmt.Errorf("missing execution payload bid for block at slot %d", child.Block().Slot())
}
parentBid, err := b.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrapf(err, "could not get execution payload bid for block at slot %d", b.Block().Slot())
}
if parentBid == nil || parentBid.Message == nil {
return fmt.Errorf("missing execution payload bid for block at slot %d", b.Block().Slot())
}
if bytes.Equal(sbid.Message.ParentBlockHash, parentBid.Message.BlockHash) {
hasPayload = true
}
}
root = b.Block().ParentRoot()
child = b
args := &forkchoicetypes.BlockAndCheckpoints{Block: roblock,
JustifiedCheckpoint: jCheckpoint,
FinalizedCheckpoint: fCheckpoint,
HasPayload: hasPayload,
}
pendingNodes = append(pendingNodes, args)
}
if len(pendingNodes) == 0 {

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
@@ -164,7 +163,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
require.NoError(t, err)
blks = append(blks, rwsb)
}
err := service.onBlockBatch(ctx, blks, nil, &das.MockAvailabilityStore{})
require.NoError(t, err)
jcp := service.CurrentJustifiedCheckpt()
jroot := bytesutil.ToBytes32(jcp.Root)
@@ -194,7 +193,7 @@ func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.saveInitSyncBlock(ctx, rwsb.Root(), wsb))
blks = append(blks, rwsb)
}
require.NoError(t, service.onBlockBatch(ctx, blks, nil, &das.MockAvailabilityStore{}))
}
func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
@@ -2074,7 +2073,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
rwsb, err := consensusblocks.NewROBlock(wsb)
require.NoError(t, err)
// We use onBlockBatch here because the valid chain is missing in forkchoice
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}, nil, &das.MockAvailabilityStore{}))
// Check that the head is now VALID and the node is not optimistic
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.cfg.ForkChoiceStore.CachedHeadRoot()))
headRoot, err = service.HeadRoot(ctx)
@@ -3556,7 +3555,7 @@ func TestHandleBlockPayloadAttestations(t *testing.T) {
base, insertBlk := testGloasState(t, 1, parentRoot, blockHash)
insertGloasBlock(t, s, base, insertBlk, blockRoot)
ptc, err := headState.PayloadCommitteeReadOnly(1)
require.NoError(t, err)
require.NotEqual(t, 0, len(ptc))

View File

@@ -134,38 +134,64 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
start = time.Now()
// return early if we haven't changed head
newHeadRoot, newHeadBlockHash, full, err := s.cfg.ForkChoiceStore.FullHead(ctx)
if err != nil {
log.WithError(err).Error("Could not compute head from new attestations")
return
}
if !s.isNewHead(newHeadRoot, full) {
return
}
log.WithField("newHeadRoot", fmt.Sprintf("%#x", newHeadRoot)).Debug("Head changed due to attestations")
var accessRoot [32]byte
postGloas := slots.ToEpoch(proposingSlot) >= params.BeaconConfig().GloasForkEpoch
if full && postGloas {
accessRoot = newHeadBlockHash
} else {
accessRoot = newHeadRoot
}
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot, accessRoot)
if err != nil {
log.WithError(err).Error("Could not get head block")
log.WithError(err).Error("Could not get head block and state")
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
if s.inRegularSync() {
attr := s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:], accessRoot[:])
if attr != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
if postGloas {
go func() {
pid, err := s.notifyForkchoiceUpdateGloas(s.ctx, newHeadBlockHash, attr)
if err != nil {
log.WithError(err).Error("Could not update forkchoice with engine")
}
if pid == nil {
if attr != nil {
log.Warn("Engine did not return a payload ID for the fork choice update with attributes")
}
return
}
var pId [8]byte
copy(pId[:], pid[:])
s.cfg.PayloadIDCache.Set(proposingSlot, newHeadRoot, pId)
}()
} else {
fcuArgs := &fcuConfig{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock,
proposingSlot: proposingSlot,
attributes: attr,
}
go s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs)
}
}
if err := s.saveHead(s.ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("Could not save head")
}
s.pruneAttsFromPool(s.ctx, headState, headBlock)
}
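Reviewer note: on the post-Gloas path above, the engine-returned payload ID is copied into the fixed-size [8]byte key the payload ID cache expects before being stored under (proposingSlot, newHeadRoot). A trivial standalone sketch of that conversion; the stand-in below takes a byte slice, whereas the real value comes back from notifyForkchoiceUpdateGloas:

package main

import "fmt"

// toPayloadID copies an engine-returned payload ID into the fixed-size key
// type used by the payload ID cache; a stand-in for the copy(pId[:], pid[:])
// conversion above.
func toPayloadID(pid []byte) [8]byte {
	var out [8]byte
	copy(out[:], pid) // copies at most 8 bytes, zero-pads short input
	return out
}

func main() {
	fmt.Printf("%x\n", toPayloadID([]byte{1, 2, 3, 4, 5, 6, 7, 8})) // 0102030405060708
}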
// This processes fork choice attestations from the pool to account for validator votes and fork choice.

View File

@@ -41,7 +41,7 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, envelopes []interfaces.ROSignedExecutionPayloadEnvelope, avs das.AvailabilityChecker) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -373,7 +373,7 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, envelopes []interfaces.ROSignedExecutionPayloadEnvelope, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()
@@ -381,7 +381,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the incoming newly received block batches, one by one.
if err := s.onBlockBatch(ctx, blocks, envelopes, avs); err != nil {
err := errors.Wrap(err, "could not process block in batch")
tracing.AnnotateError(span, err)
return err
@@ -421,6 +421,15 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
for _, e := range envelopes {
protoEnv, ok := e.Proto().(*ethpb.SignedExecutionPayloadEnvelope)
if !ok {
return errors.New("could not type assert signed envelope to proto")
}
if err := s.cfg.BeaconDB.SaveExecutionPayloadEnvelope(ctx, protoEnv); err != nil {
return errors.Wrap(err, "could not save execution payload envelope")
}
}
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
if finalized == nil {
return errNilFinalizedInStore

View File

@@ -281,7 +281,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
require.NoError(t, err)
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb}, nil, &das.MockAvailabilityStore{})
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
@@ -11,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
payloadattribute "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -31,9 +33,18 @@ type ExecutionPayloadEnvelopeReceiver interface {
}
// ReceiveExecutionPayloadEnvelope processes a signed execution payload envelope for the Gloas fork.
func (s *Service) ReceiveExecutionPayloadEnvelope(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope) (err error) {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveExecutionPayloadEnvelope")
defer span.End()
start := time.Now()
defer func() {
beaconExecutionPayloadEnvelopeProcessingDurationSeconds.Observe(time.Since(start).Seconds())
if err != nil {
beaconExecutionPayloadEnvelopeInvalidTotal.Inc()
return
}
beaconExecutionPayloadEnvelopeValidTotal.Inc()
}()
envelope, err := signed.Envelope()
if err != nil {
@@ -139,6 +150,7 @@ func (s *Service) postPayloadHeadUpdate(ctx context.Context, envelope interfaces
s.headLock.Lock()
s.head.state = st
s.head.full = true
s.headLock.Unlock()
go func() {
@@ -152,7 +164,7 @@ func (s *Service) postPayloadHeadUpdate(ctx context.Context, envelope interfaces
}
}()
attr := s.getPayloadAttribute(ctx, st, envelope.Slot()+1, headRoot, blockHash[:])
if s.inRegularSync() {
go func() {
pid, err := s.notifyForkchoiceUpdateGloas(s.ctx, blockHash, attr)
@@ -191,6 +203,50 @@ func (s *Service) getPayloadEnvelopePrestate(ctx context.Context, envelope inter
return preState, nil
}
func (s *Service) callNewPayload(
ctx context.Context,
payload interfaces.ExecutionData,
versionedHashes []common.Hash,
parentRoot common.Hash,
requests *enginev1.ExecutionRequests,
slot primitives.Slot,
) (bool, error) {
_, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests, slot)
if err == nil {
return true, nil
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
log.WithFields(logrus.Fields{
"slot": slot,
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic envelope")
return false, nil
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
return false, invalidBlock{error: ErrInvalidPayload}
}
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
func (s *Service) notifyNewEnvelopeFromBlock(ctx context.Context, b blocks.ROBlock, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewEnvelopeFromBlock")
defer span.End()
payload, err := envelope.Execution()
if err != nil {
return false, errors.Wrap(err, "could not get execution payload from envelope")
}
sbid, err := b.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return false, errors.Wrap(err, "could not get signed execution payload bid from block")
}
versionedHashes := make([]common.Hash, len(sbid.Message.BlobKzgCommitments))
for i, c := range sbid.Message.BlobKzgCommitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(c)
}
return s.callNewPayload(ctx, payload, versionedHashes, common.Hash(b.Block().ParentRoot()), envelope.ExecutionRequests(), envelope.Slot())
}
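Reviewer note: the versioned hashes handed to the engine here are derived from the bid's KZG commitments via the standard EIP-4844 mapping, which primitives.ConvertKzgCommitmentToVersionedHash implements: SHA-256 of the commitment with the first byte overwritten by the 0x01 version prefix. A standalone sketch of that mapping:

package main

import (
	"crypto/sha256"
	"fmt"
)

// kzgToVersionedHash implements the EIP-4844 mapping:
// versioned_hash = 0x01 || sha256(commitment)[1:].
func kzgToVersionedHash(commitment []byte) [32]byte {
	h := sha256.Sum256(commitment)
	h[0] = 0x01 // VERSIONED_HASH_VERSION_KZG
	return h
}

func main() {
	c := make([]byte, 48) // KZG commitments are 48-byte compressed G1 points
	fmt.Printf("%x\n", kzgToVersionedHash(c))
}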
// The returned boolean indicates whether the payload was valid or merely accepted as syncing (optimistic).
func (s *Service) notifyNewEnvelope(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewEnvelope")
@@ -200,7 +256,6 @@ func (s *Service) notifyNewEnvelope(ctx context.Context, st state.BeaconState, e
if err != nil {
return false, errors.Wrap(err, "could not get execution payload from envelope")
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return false, errors.Wrap(err, "could not get latest execution payload bid")
@@ -210,25 +265,7 @@ func (s *Service) notifyNewEnvelope(ctx context.Context, st state.BeaconState, e
for i, c := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(c)
}
return s.callNewPayload(ctx, payload, versionedHashes, common.Hash(bytesutil.ToBytes32(st.LatestBlockHeader().ParentRoot)), envelope.ExecutionRequests(), envelope.Slot())
}
func (s *Service) validateExecutionOnEnvelope(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {

View File

@@ -3,7 +3,6 @@ package blockchain
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
mockExecution "github.com/OffchainLabs/prysm/v7/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -51,7 +50,7 @@ func TestReceivePayloadAttestationMessage_ValidatorNotInPTC(t *testing.T) {
require.NoError(t, err)
s.head = &head{root: blockRoot, block: wsb, state: headState, slot: 1}
ptc, err := headState.PayloadCommitteeReadOnly(1)
require.NoError(t, err)
// Pick a validator index not in the PTC.
@@ -100,7 +99,7 @@ func TestReceivePayloadAttestationMessage_OK(t *testing.T) {
require.NoError(t, err)
s.head = &head{root: blockRoot, block: wsb, state: headState, slot: 1}
ptc, err := headState.PayloadCommitteeReadOnly(1)
require.NoError(t, err)
require.NotEqual(t, 0, len(ptc))

View File

@@ -73,29 +73,30 @@ type Service struct {
// config options for the service.
type config struct {
BeaconBlockBuf int
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
ProposerPreferencesCache *cache.ProposerPreferencesCache
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2P p2p.Accessor
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
AttService *attestations.Service
StateGen *stategen.State
SlasherAttestationsFeed *event.Feed
WeakSubjectivityCheckpt *ethpb.Checkpoint
BlockFetcher execution.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
}
// Checker is an interface used to determine if a node is in initial sync
@@ -213,6 +214,7 @@ func (s *Service) Start() {
}
s.spawnProcessAttestationsRoutine()
go s.runLateBlockTasks()
go s.runLatePayloadTasks()
}
// Stop the blockchain service's main event loop and associated goroutines.
@@ -343,7 +345,7 @@ func (s *Service) initializeHead(ctx context.Context, st state.BeaconState) erro
return errors.Wrap(err, "could not get head state")
}
}
if err := s.setHead(&head{root, blk, st, blk.Block().Slot(), false, false}); err != nil {
return errors.Wrap(err, "could not set head")
}
log.WithFields(logrus.Fields{
@@ -432,6 +434,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
genesisState,
genesisBlk.Block().Slot(),
false,
false,
}); err != nil {
log.WithError(err).Fatal("Could not set head")
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
@@ -77,8 +78,12 @@ func (s *Service) setupForkchoiceTree(st state.BeaconState) error {
log.WithError(err).Error("Could not build forkchoice chain, starting with finalized block as head")
return nil
}
resolveChainPayloadStatus(chain)
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.markFinalizedRootFull(chain, fRoot); err != nil {
log.WithError(err).Error("Could not mark finalized root as full in forkchoice")
}
return s.cfg.ForkChoiceStore.InsertChain(s.ctx, chain)
}
@@ -145,6 +150,68 @@ func (s *Service) setupForkchoiceRoot(st state.BeaconState) error {
return nil
}
// resolveChainPayloadStatus determines which blocks in the chain had their
// execution payloads delivered by checking if consecutive blocks' bids indicate
// payload delivery. For each pair of blocks (chain[i], chain[i+1]), if the next
// block's bid parentBlockHash equals the current block's bid blockHash, the
// current block's payload was delivered.
func resolveChainPayloadStatus(chain []*forkchoicetypes.BlockAndCheckpoints) {
for i := 0; i < len(chain)-1; i++ {
curr := chain[i].Block.Block()
next := chain[i+1].Block.Block()
if curr.Version() < version.Gloas || next.Version() < version.Gloas {
continue
}
currBid, err := curr.Body().SignedExecutionPayloadBid()
if err != nil || currBid == nil || currBid.Message == nil {
continue
}
nextBid, err := next.Body().SignedExecutionPayloadBid()
if err != nil || nextBid == nil || nextBid.Message == nil {
continue
}
if bytes.Equal(nextBid.Message.ParentBlockHash, currBid.Message.BlockHash) {
chain[i].HasPayload = true
}
}
}
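Reviewer note: the delivery predicate in resolveChainPayloadStatus (and the equivalent hasPayload check in fillInForkChoiceMissingBlocks earlier) reduces to one hash comparison per parent/child pair. A self-contained sketch with stand-in bid structs, purely to make the rule concrete:

package main

import (
	"bytes"
	"fmt"
)

// bid is a stand-in for ExecutionPayloadBid; only the two hash fields that
// the delivery check inspects are modeled.
type bid struct {
	parentBlockHash []byte
	blockHash       []byte
}

// payloadDelivered mirrors the per-pair check in resolveChainPayloadStatus:
// curr's payload counts as delivered iff next's bid builds on curr's block hash.
func payloadDelivered(curr, next bid) bool {
	return bytes.Equal(next.parentBlockHash, curr.blockHash)
}

func main() {
	a := bid{parentBlockHash: []byte{0x00}, blockHash: []byte{0x01}}
	full := bid{parentBlockHash: []byte{0x01}, blockHash: []byte{0x02}}  // child built on a's payload
	empty := bid{parentBlockHash: []byte{0x00}, blockHash: []byte{0x03}} // child reused a's parent hash
	fmt.Println(payloadDelivered(a, full))  // true
	fmt.Println(payloadDelivered(a, empty)) // false
}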
// markFinalizedRootFull checks whether the finalized root block's execution
// payload was delivered by inspecting the first block in the chain. If the first
// block's bid parentBlockHash equals the finalized block's bid blockHash, the
// finalized block's payload was delivered and a full node must be created in
// forkchoice. The caller must hold the forkchoice lock.
func (s *Service) markFinalizedRootFull(chain []*forkchoicetypes.BlockAndCheckpoints, fRoot [32]byte) error {
if len(chain) == 0 {
return nil
}
firstBlock := chain[0].Block.Block()
if firstBlock.Version() < version.Gloas {
return nil
}
firstBid, err := firstBlock.Body().SignedExecutionPayloadBid()
if err != nil || firstBid == nil || firstBid.Message == nil {
return nil
}
fBlock, err := s.cfg.BeaconDB.Block(s.ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get finalized block")
}
if fBlock.Block().Version() < version.Gloas {
return nil
}
fBid, err := fBlock.Block().Body().SignedExecutionPayloadBid()
if err != nil || fBid == nil || fBid.Message == nil {
return nil
}
if !bytes.Equal(firstBid.Message.ParentBlockHash, fBid.Message.BlockHash) {
return nil
}
// The finalized block's payload was delivered. Create the full node.
s.cfg.ForkChoiceStore.MarkFullNode(fRoot)
return nil
}
func (s *Service) setupForkchoiceCheckpoints() error {
justified, err := s.cfg.BeaconDB.JustifiedCheckpoint(s.ctx)
if err != nil {

View File

@@ -94,6 +94,11 @@ func (mb *mockBroadcaster) BroadcastDataColumnSidecars(_ context.Context, _ []bl
return nil
}
func (mb *mockBroadcaster) BroadcastForEpoch(_ context.Context, _ proto.Message, _ primitives.Epoch) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}

View File

@@ -77,10 +77,14 @@ type ChainService struct {
DataColumns []blocks.VerifiedRODataColumn
TargetRoot [32]byte
MockHeadSlot *primitives.Slot
DependentRootCB func([32]byte, primitives.Epoch) ([32]byte, error)
MockCanonicalRoots map[primitives.Slot][32]byte
MockCanonicalFull map[primitives.Slot]bool
MockPayloadContentLookup map[[32]byte][32]byte
MockPayloadContentIsFull map[[32]byte]bool
ParentPayloadReadyVal *bool
ForkchoiceRoots map[[32]byte]bool
ForkchoiceBlockHashes map[[32]byte][32]byte
}
func (s *ChainService) Ancestor(ctx context.Context, root []byte, slot primitives.Slot) ([]byte, error) {
@@ -278,7 +282,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ []interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityChecker) error {
if s.State == nil {
return ErrNilState
}
@@ -590,6 +594,16 @@ func (s *ChainService) InForkchoice(root [32]byte) bool {
return !s.NotFinalized
}
// BlockHash mocks the execution payload block hash lookup for a beacon block root.
func (s *ChainService) BlockHash(root [32]byte) ([32]byte, error) {
if s.ForkchoiceBlockHashes != nil {
if blockHash, ok := s.ForkchoiceBlockHashes[root]; ok {
return blockHash, nil
}
}
return [32]byte{}, errors.New("block hash not found")
}
// IsOptimisticForRoot mocks the same method in the chain service.
func (s *ChainService) IsOptimisticForRoot(_ context.Context, root [32]byte) (bool, error) {
s.OptimisticCheckRootReceived = root
@@ -744,6 +758,24 @@ func (s *ChainService) HasFullNode(root [32]byte) bool {
return false
}
// ShouldIgnoreData returns true if the data for the given parent root and slot should be ignored.
func (s *ChainService) ShouldIgnoreData(_ [32]byte, _ primitives.Slot) bool {
return false
}
// PayloadContentLookup mocks the same method in the chain service.
func (s *ChainService) PayloadContentLookup(root [32]byte) ([32]byte, bool) {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.PayloadContentLookup(root)
}
if s.MockPayloadContentLookup != nil {
if value, ok := s.MockPayloadContentLookup[root]; ok {
return value, s.MockPayloadContentIsFull[root]
}
}
return root, false
}
// InsertNode mocks the same method in the chain service
func (s *ChainService) InsertNode(ctx context.Context, st state.BeaconState, block blocks.ROBlock) error {
if s.ForkChoiceStore != nil {
@@ -836,7 +868,10 @@ func (s *ChainService) ParentPayloadReady(_ interfaces.ReadOnlyBeaconBlock) bool
}
// DependentRootForEpoch mocks the same method in the chain service
func (c *ChainService) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
if c.DependentRootCB != nil {
return c.DependentRootCB(root, epoch)
}
return c.TargetRoot, nil
}

View File

@@ -8,11 +8,35 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// proposerPreference returns a TrackedValidator from the ProposerPreferencesCache
// if a preference exists for the given slot.
func (s *Service) proposerPreference(slot primitives.Slot) (cache.TrackedValidator, bool) {
if s.cfg.ProposerPreferencesCache == nil {
return cache.TrackedValidator{}, false
}
pref, ok := s.cfg.ProposerPreferencesCache.Get(slot)
if !ok {
return cache.TrackedValidator{}, false
}
var feeRecipient primitives.ExecutionAddress
copy(feeRecipient[:], pref.FeeRecipient)
return cache.TrackedValidator{Active: true, FeeRecipient: feeRecipient, GasLimit: pref.GasLimit}, true
}
// trackedProposer returns whether the beacon node was informed, via the
// validators/prepare_proposer endpoint, of the proposer at the given slot.
// It only returns true if the tracked proposer is present and active.
//
// When PrepareAllPayloads is enabled, the node prepares payloads for every
// slot. After the Gloas fork, proposers broadcast their preferences (fee
// recipient, gas limit) via gossip into the ProposerPreferencesCache. When
// available, these preferences supply the fee recipient; otherwise the
// default (burn address) is used.
func (s *Service) trackedProposer(st state.ReadOnlyBeaconState, slot primitives.Slot) (cache.TrackedValidator, bool) {
if features.Get().PrepareAllPayloads {
if val, ok := s.proposerPreference(slot); ok {
return val, true
}
return cache.TrackedValidator{Active: true}, true
}
id, err := helpers.BeaconProposerIndexAtSlot(s.ctx, st, slot)
@@ -23,5 +47,8 @@ func (s *Service) trackedProposer(st state.ReadOnlyBeaconState, slot primitives.
if !ok {
return cache.TrackedValidator{}, false
}
if pref, ok := s.proposerPreference(slot); ok {
return pref, true
}
return val, val.Active
}

View File

@@ -0,0 +1,83 @@
package blockchain
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/ethereum/go-ethereum/common"
)
func TestTrackedProposer_NotTracked(t *testing.T) {
service, _ := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
_, ok := service.trackedProposer(st, 0)
require.Equal(t, false, ok)
}
func TestTrackedProposer_Tracked(t *testing.T) {
service, _ := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
addr := common.HexToAddress("0x1234")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(addr), Index: 0})
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
require.Equal(t, primitives.ExecutionAddress(addr), val.FeeRecipient)
}
func TestTrackedProposer_PrepareAllPayloads_Default(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{PrepareAllPayloads: true})
defer resetCfg()
service, _ := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
require.Equal(t, true, val.Active)
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(val.FeeRecipient[:]).String())
}
func TestTrackedProposer_PrepareAllPayloads_WithProposerPreference(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{PrepareAllPayloads: true})
defer resetCfg()
prefCache := cache.NewProposerPreferencesCache()
service, _ := minimalTestService(t,
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithProposerPreferencesCache(prefCache),
)
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
addr := common.HexToAddress("0xabcd")
prefCache.Add(0, addr.Bytes(), 42_000_000)
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
require.Equal(t, true, val.Active)
require.Equal(t, primitives.ExecutionAddress(addr), val.FeeRecipient)
require.Equal(t, uint64(42_000_000), val.GasLimit)
}
func TestTrackedProposer_TrackedWithProposerPreferenceOverride(t *testing.T) {
prefCache := cache.NewProposerPreferencesCache()
service, _ := minimalTestService(t,
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithProposerPreferencesCache(prefCache),
)
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
trackedAddr := common.HexToAddress("0x1111")
prefAddr := common.HexToAddress("0x2222")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(trackedAddr), Index: 0})
prefCache.Add(0, prefAddr.Bytes(), 50_000_000)
val, ok := service.trackedProposer(st, 0)
require.Equal(t, true, ok)
// Proposer preference overrides tracked validator.
require.Equal(t, primitives.ExecutionAddress(prefAddr), val.FeeRecipient)
require.Equal(t, uint64(50_000_000), val.GasLimit)
}

View File

@@ -15,6 +15,7 @@ go_library(
"common.go",
"doc.go",
"error.go",
"highest_execution_payload_bid.go",
"interfaces.go",
"log.go",
"payload_attestation.go",
@@ -22,6 +23,7 @@ go_library(
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
"proposer_indices_type.go",
"proposer_preferences.go",
"registration.go",
"skip_slot_cache.go",
"subnet_ids.go",
@@ -78,10 +80,12 @@ go_test(
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"highest_execution_payload_bid_test.go",
"payload_attestation_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",
"proposer_preferences_test.go",
"registration_test.go",
"skip_slot_cache_test.go",
"subnet_ids_test.go",

View File

@@ -0,0 +1,76 @@
package cache
import (
"sync"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
type executionPayloadBidKey struct {
slot primitives.Slot
parentHash [32]byte
parentRoot [32]byte
}
// HighestExecutionPayloadBidCache stores the highest bid for each
// (slot, parent_block_hash, parent_block_root) tuple.
type HighestExecutionPayloadBidCache struct {
bids map[executionPayloadBidKey]*ethpb.SignedExecutionPayloadBid
lock sync.RWMutex
}
// NewHighestExecutionPayloadBidCache initializes a highest-bid cache.
func NewHighestExecutionPayloadBidCache() *HighestExecutionPayloadBidCache {
return &HighestExecutionPayloadBidCache{
bids: make(map[executionPayloadBidKey]*ethpb.SignedExecutionPayloadBid),
}
}
// Get returns the highest cached bid for the given tuple.
func (c *HighestExecutionPayloadBidCache) Get(
slot primitives.Slot,
parentHash [32]byte,
parentRoot [32]byte,
) (*ethpb.SignedExecutionPayloadBid, bool) {
c.lock.RLock()
defer c.lock.RUnlock()
bid, ok := c.bids[executionPayloadBidKey{
slot: slot,
parentHash: parentHash,
parentRoot: parentRoot,
}]
return bid, ok
}
// SetIfHigher inserts the bid if absent, or replaces the cached bid only if
// the incoming value is strictly greater.
func (c *HighestExecutionPayloadBidCache) SetIfHigher(bid *ethpb.SignedExecutionPayloadBid) bool {
c.lock.Lock()
defer c.lock.Unlock()
key := executionPayloadBidKey{
slot: bid.Message.Slot,
parentHash: [32]byte(bid.Message.ParentBlockHash),
parentRoot: [32]byte(bid.Message.ParentBlockRoot),
}
cached, ok := c.bids[key]
if !ok || bid.Message.Value > cached.Message.Value {
c.bids[key] = bid
return true
}
return false
}
// PruneBefore removes all cached bids for slots before the provided slot.
func (c *HighestExecutionPayloadBidCache) PruneBefore(slot primitives.Slot) {
c.lock.Lock()
defer c.lock.Unlock()
for key := range c.bids {
if key.slot < slot {
delete(c.bids, key)
}
}
}
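Reviewer note: SetIfHigher gives this cache keep-the-incumbent-on-ties semantics, which the tests in the next file exercise. A standalone sketch of just the comparison rule, with a plain map standing in for the cache and uint64 values standing in for bid.Message.Value:

package main

import "fmt"

// key mirrors the (slot, parent_block_hash, parent_block_root) tuple the
// cache keys on.
type key struct {
	slot       uint64
	parentHash [32]byte
	parentRoot [32]byte
}

type highestBids map[key]uint64

// setIfHigher mirrors SetIfHigher: insert when absent, replace only on a
// strictly greater value, so ties keep the incumbent bid.
func (h highestBids) setIfHigher(k key, value uint64) bool {
	cached, ok := h[k]
	if !ok || value > cached {
		h[k] = value
		return true
	}
	return false
}

func main() {
	h := highestBids{}
	k := key{slot: 10}
	fmt.Println(h.setIfHigher(k, 100)) // true: first bid for the tuple
	fmt.Println(h.setIfHigher(k, 100)) // false: equal value is rejected
	fmt.Println(h.setIfHigher(k, 101)) // true: strictly higher replaces
}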

View File

@@ -0,0 +1,105 @@
package cache
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestHighestExecutionPayloadBidCache_GetSetIfHigher(t *testing.T) {
c := NewHighestExecutionPayloadBidCache()
bid := testSignedExecutionPayloadBid(10, [32]byte{0x01}, [32]byte{0x02}, 100)
inserted := c.SetIfHigher(bid)
require.Equal(t, true, inserted)
got, ok := c.Get(10, [32]byte{0x01}, [32]byte{0x02})
require.Equal(t, true, ok)
require.DeepEqual(t, bid, got)
}
func TestHighestExecutionPayloadBidCache_SetIfHigher_ReplacesOnlyOnHigherValue(t *testing.T) {
c := NewHighestExecutionPayloadBidCache()
low := testSignedExecutionPayloadBid(10, [32]byte{0x01}, [32]byte{0x02}, 100)
same := testSignedExecutionPayloadBid(10, [32]byte{0x01}, [32]byte{0x02}, 100)
high := testSignedExecutionPayloadBid(10, [32]byte{0x01}, [32]byte{0x02}, 101)
require.Equal(t, true, c.SetIfHigher(low))
require.Equal(t, false, c.SetIfHigher(same))
got, ok := c.Get(10, [32]byte{0x01}, [32]byte{0x02})
require.Equal(t, true, ok)
require.DeepEqual(t, low, got)
require.Equal(t, true, c.SetIfHigher(high))
got, ok = c.Get(10, [32]byte{0x01}, [32]byte{0x02})
require.Equal(t, true, ok)
require.DeepEqual(t, high, got)
}
func TestHighestExecutionPayloadBidCache_SetIfHigher_KeepsDistinctTuples(t *testing.T) {
c := NewHighestExecutionPayloadBidCache()
first := testSignedExecutionPayloadBid(10, [32]byte{0x01}, [32]byte{0x02}, 100)
second := testSignedExecutionPayloadBid(10, [32]byte{0x03}, [32]byte{0x02}, 50)
third := testSignedExecutionPayloadBid(10, [32]byte{0x01}, [32]byte{0x04}, 75)
require.Equal(t, true, c.SetIfHigher(first))
require.Equal(t, true, c.SetIfHigher(second))
require.Equal(t, true, c.SetIfHigher(third))
got, ok := c.Get(10, [32]byte{0x01}, [32]byte{0x02})
require.Equal(t, true, ok)
require.DeepEqual(t, first, got)
got, ok = c.Get(10, [32]byte{0x03}, [32]byte{0x02})
require.Equal(t, true, ok)
require.DeepEqual(t, second, got)
got, ok = c.Get(10, [32]byte{0x01}, [32]byte{0x04})
require.Equal(t, true, ok)
require.DeepEqual(t, third, got)
}
func TestHighestExecutionPayloadBidCache_PruneBefore(t *testing.T) {
c := NewHighestExecutionPayloadBidCache()
oldBid := testSignedExecutionPayloadBid(9, [32]byte{0x01}, [32]byte{0x02}, 100)
currentBid := testSignedExecutionPayloadBid(10, [32]byte{0x03}, [32]byte{0x04}, 101)
require.Equal(t, true, c.SetIfHigher(oldBid))
require.Equal(t, true, c.SetIfHigher(currentBid))
c.PruneBefore(10)
_, ok := c.Get(9, [32]byte{0x01}, [32]byte{0x02})
require.Equal(t, false, ok)
got, ok := c.Get(10, [32]byte{0x03}, [32]byte{0x04})
require.Equal(t, true, ok)
require.DeepEqual(t, currentBid, got)
}
func testSignedExecutionPayloadBid(
slot primitives.Slot,
parentHash [32]byte,
parentRoot [32]byte,
value uint64,
) *ethpb.SignedExecutionPayloadBid {
return &ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
Slot: slot,
ParentBlockHash: bytes.Clone(parentHash[:]),
ParentBlockRoot: bytes.Clone(parentRoot[:]),
BlockHash: bytes.Repeat([]byte{0x03}, 32),
PrevRandao: bytes.Repeat([]byte{0x04}, 32),
FeeRecipient: bytes.Repeat([]byte{0x05}, 20),
GasLimit: 30_000_000,
BuilderIndex: 1,
Value: primitives.Gwei(value),
ExecutionPayment: 10,
},
Signature: bytes.Repeat([]byte{0x06}, 96),
}
}

View File

@@ -0,0 +1,87 @@
package cache
import (
"sync"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// ProposerPreference stores the proposer fee recipient and gas limit for a slot.
type ProposerPreference struct {
FeeRecipient []byte
GasLimit uint64
}
// ProposerPreferencesCache stores proposer preferences by slot.
type ProposerPreferencesCache struct {
slotToPreferences map[primitives.Slot]ProposerPreference
lock sync.RWMutex
}
// NewProposerPreferencesCache initializes a proposer preferences cache.
func NewProposerPreferencesCache() *ProposerPreferencesCache {
return &ProposerPreferencesCache{
slotToPreferences: make(map[primitives.Slot]ProposerPreference),
}
}
// Add stores proposer preferences for a slot. If the slot already exists, the
// existing value is kept and false is returned.
func (c *ProposerPreferencesCache) Add(slot primitives.Slot, feeRecipient []byte, gasLimit uint64) bool {
c.lock.Lock()
defer c.lock.Unlock()
if _, ok := c.slotToPreferences[slot]; ok {
return false
}
// FeeRecipient comes from validated SSZ-decoded proposer preferences, so
// retaining the slice reference here is intentional.
c.slotToPreferences[slot] = ProposerPreference{
FeeRecipient: feeRecipient,
GasLimit: gasLimit,
}
return true
}
// Get returns proposer preferences for a slot.
func (c *ProposerPreferencesCache) Get(slot primitives.Slot) (ProposerPreference, bool) {
c.lock.RLock()
defer c.lock.RUnlock()
pref, ok := c.slotToPreferences[slot]
if !ok {
return ProposerPreference{}, false
}
return pref, true
}
// Has returns true if proposer preferences for the slot already exist.
func (c *ProposerPreferencesCache) Has(slot primitives.Slot) bool {
c.lock.RLock()
defer c.lock.RUnlock()
_, ok := c.slotToPreferences[slot]
return ok
}
// PruneBefore removes all proposer preferences for slots before the provided slot.
func (c *ProposerPreferencesCache) PruneBefore(slot primitives.Slot) {
c.lock.Lock()
defer c.lock.Unlock()
for cachedSlot := range c.slotToPreferences {
if cachedSlot < slot {
delete(c.slotToPreferences, cachedSlot)
}
}
}
// Clear removes all cached proposer preferences.
func (c *ProposerPreferencesCache) Clear() {
c.lock.Lock()
defer c.lock.Unlock()
c.slotToPreferences = make(map[primitives.Slot]ProposerPreference)
}

View File

@@ -0,0 +1,63 @@
package cache
import (
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestProposerPreferencesCache_AddGetHas(t *testing.T) {
c := NewProposerPreferencesCache()
slot := primitives.Slot(123)
feeRecipient := []byte{1, 2, 3, 4}
require.Equal(t, false, c.Has(slot))
added := c.Add(slot, feeRecipient, 42)
require.Equal(t, true, added)
require.Equal(t, true, c.Has(slot))
pref, ok := c.Get(slot)
require.Equal(t, true, ok)
require.DeepEqual(t, feeRecipient, pref.FeeRecipient)
require.Equal(t, uint64(42), pref.GasLimit)
}
func TestProposerPreferencesCache_AddDuplicateSlot(t *testing.T) {
c := NewProposerPreferencesCache()
slot := primitives.Slot(456)
require.Equal(t, true, c.Add(slot, []byte{1}, 10))
require.Equal(t, false, c.Add(slot, []byte{2}, 20))
pref, ok := c.Get(slot)
require.Equal(t, true, ok)
require.DeepEqual(t, []byte{1}, pref.FeeRecipient)
require.Equal(t, uint64(10), pref.GasLimit)
}
func TestProposerPreferencesCache_Clear(t *testing.T) {
c := NewProposerPreferencesCache()
slot := primitives.Slot(789)
require.Equal(t, true, c.Add(slot, []byte{1}, 10))
c.Clear()
require.Equal(t, false, c.Has(slot))
_, ok := c.Get(slot)
require.Equal(t, false, ok)
}
func TestProposerPreferencesCache_PruneBefore(t *testing.T) {
c := NewProposerPreferencesCache()
require.Equal(t, true, c.Add(10, []byte{1}, 10))
require.Equal(t, true, c.Add(11, []byte{2}, 11))
require.Equal(t, true, c.Add(12, []byte{3}, 12))
c.PruneBefore(11)
require.Equal(t, false, c.Has(10))
require.Equal(t, true, c.Has(11))
require.Equal(t, true, c.Has(12))
}

View File

@@ -172,7 +172,7 @@ func (s *SyncCommitteeCache) idxPositionInCommittee(
// UpdatePositionsInCommittee updates the cached validator positions in the sync committee for the
// current and next epochs. It should be called whenever `current_sync_committee` and `next_sync_committee`
// change, which happens every `EPOCHS_PER_SYNC_COMMITTEE_PERIOD`.
func (s *SyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoot [32]byte, st state.ReadOnlyBeaconState) error {
// since we call UpdatePositionsInCommittee asynchronously, keep track of the cache value
// seen at the beginning of the routine and compare at the end before updating. If the underlying value has been
// cycled (new address), don't update it.

View File

@@ -32,7 +32,7 @@ func (s *FakeSyncCommitteeCache) NextPeriodIndexPosition(root [32]byte, valIdx p
}
// UpdatePositionsInCommittee -- fake.
func (s *FakeSyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoot [32]byte, state state.ReadOnlyBeaconState) error {
return nil
}

View File

@@ -21,6 +21,7 @@ type (
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
GasLimit uint64
}
TrackedValidatorsCache struct {

View File

@@ -60,6 +60,7 @@ go_test(
"block_operations_fuzz_test.go",
"block_regression_test.go",
"eth1_data_test.go",
"exit_builder_test.go",
"exit_test.go",
"exports_test.go",
"genesis_test.go",

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
v "github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
@@ -62,6 +63,16 @@ func ProcessVoluntaryExits(
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
}
// [New in Gloas:EIP7732] Builder exits are identified by the builder index flag.
if beaconState.Version() >= version.Gloas && exit.Exit.ValidatorIndex.IsBuilderIndex() {
if err := verifyBuilderExitAndSignature(beaconState, exit); err != nil {
return nil, errors.Wrapf(err, "could not verify builder exit %d", idx)
}
if err := gloas.InitiateBuilderExit(beaconState, exit.Exit.ValidatorIndex.ToBuilderIndex()); err != nil {
return nil, err
}
continue
}
val, err := beaconState.ValidatorAtIndexReadOnly(exit.Exit.ValidatorIndex)
if err != nil {
return nil, err
@@ -102,19 +113,24 @@ func ProcessVoluntaryExits(
// initiate_validator_exit(state, voluntary_exit.validator_index)
func VerifyExitAndSignature(
validator state.ReadOnlyValidator,
st state.ReadOnlyBeaconState,
signed *ethpb.SignedVoluntaryExit,
) error {
if signed == nil || signed.Exit == nil {
return errors.New("nil exit")
}
// [New in Gloas:EIP7732] Builder exits are verified separately.
if st.Version() >= version.Gloas && signed.Exit.ValidatorIndex.IsBuilderIndex() {
return verifyBuilderExitAndSignature(st, signed)
}
fork := st.Fork()
genesisRoot := st.GenesisValidatorsRoot()
// EIP-7044: Beginning in Deneb, fix the fork version to Capella.
// This allows for signed validator exits to be valid forever.
if st.Version() >= version.Deneb {
fork = &ethpb.Fork{
PreviousVersion: params.BeaconConfig().CapellaForkVersion,
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
@@ -123,7 +139,7 @@ func VerifyExitAndSignature(
}
exit := signed.Exit
if err := verifyExitConditions(st, validator, exit); err != nil {
return err
}
domain, err := signing.Domain(fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit, genesisRoot)
@@ -198,3 +214,57 @@ func verifyExitConditions(st state.ReadOnlyBeaconState, validator state.ReadOnly
return nil
}
// verifyBuilderExitAndSignature validates a builder voluntary exit.
// [New in Gloas:EIP7732]
func verifyBuilderExitAndSignature(st state.ReadOnlyBeaconState, signed *ethpb.SignedVoluntaryExit) error {
if signed == nil || signed.Exit == nil {
return errors.New("nil exit")
}
exit := signed.Exit
builderIndex := exit.ValidatorIndex.ToBuilderIndex()
// Exits must specify an epoch when they become valid; they are not valid before then.
currentEpoch := slots.ToEpoch(st.Slot())
if currentEpoch < exit.Epoch {
return fmt.Errorf("expected current epoch >= exit epoch, received %d < %d", currentEpoch, exit.Epoch)
}
// Verify the builder is active.
active, err := st.IsActiveBuilder(builderIndex)
if err != nil {
return errors.Wrap(err, "could not check if builder is active")
}
if !active {
return fmt.Errorf("builder %d is not active", builderIndex)
}
// Only exit builder if it has no pending balance to withdraw.
pendingBalance, err := st.BuilderPendingBalanceToWithdraw(builderIndex)
if err != nil {
return errors.Wrap(err, "could not get builder pending balance to withdraw")
}
if pendingBalance != 0 {
return fmt.Errorf("builder %d has pending balance to withdraw: %d", builderIndex, pendingBalance)
}
// Verify signature using builder pubkey with Capella fork version (EIP-7044).
pubkey, err := st.BuilderPubkey(builderIndex)
if err != nil {
return errors.Wrap(err, "could not get builder pubkey")
}
fork := &ethpb.Fork{
PreviousVersion: params.BeaconConfig().CapellaForkVersion,
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
Epoch: params.BeaconConfig().CapellaForkEpoch,
}
genesisRoot := st.GenesisValidatorsRoot()
domain, err := signing.Domain(fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit, genesisRoot)
if err != nil {
return err
}
if err := signing.VerifySigningRoot(exit, pubkey[:], signed.Signature, domain); err != nil {
return signing.ErrSigFailedToVerify
}
return nil
}
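Reviewer note: builder exits ride in the same SignedVoluntaryExit container as validator exits and are distinguished only by a flag packed into the validator index (IsBuilderIndex / ToBuilderIndex). The sketch below uses one plausible encoding, a high-bit flag, purely for illustration; the real encoding lives in Prysm's primitives package and may differ:

package main

import "fmt"

// Illustrative flag encoding only: the top bit of a 64-bit index marks a
// builder. Prysm's actual IsBuilderIndex and ToBuilderIndex may encode this
// differently.
const builderFlag = uint64(1) << 63

func isBuilderIndex(idx uint64) bool   { return idx&builderFlag != 0 }
func toBuilderIndex(idx uint64) uint64 { return idx &^ builderFlag }

func main() {
	validatorExit := uint64(42)
	builderExit := uint64(7) | builderFlag
	fmt.Println(isBuilderIndex(validatorExit)) // false: normal validator exit path
	fmt.Println(isBuilderIndex(builderExit))   // true: dispatch to the builder path
	fmt.Println(toBuilderIndex(builderExit))   // 7
}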

View File

@@ -0,0 +1,307 @@
package blocks_test
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
// setGloasTestConfig sets fork epochs so Gloas is active at epoch 5.
func setGloasTestConfig(t *testing.T) {
t.Helper()
cfg := params.BeaconConfig().Copy()
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 4
cfg.GloasForkEpoch = 5
params.SetActiveTestCleanup(t, cfg)
}
// newGloasStateWithBuilder creates a minimal Gloas beacon state with one active builder
// and returns the state along with the builder's BLS private key.
func newGloasStateWithBuilder(t *testing.T, builderIndex primitives.BuilderIndex, epoch primitives.Epoch) (state.BeaconState, bls.SecretKey) {
t.Helper()
priv, err := bls.RandKey()
require.NoError(t, err)
cfg := params.BeaconConfig()
builder := &ethpb.Builder{
Pubkey: priv.PublicKey().Marshal(),
WithdrawableEpoch: cfg.FarFutureEpoch,
DepositEpoch: 0,
Balance: 32_000_000_000,
ExecutionAddress: make([]byte, 20),
}
builders := make([]*ethpb.Builder, int(builderIndex)+1)
for i := range builders {
if primitives.BuilderIndex(i) == builderIndex {
builders[i] = builder
} else {
builders[i] = &ethpb.Builder{
Pubkey: make([]byte, 48),
WithdrawableEpoch: cfg.FarFutureEpoch,
DepositEpoch: 0,
ExecutionAddress: make([]byte, 20),
}
}
}
stProto := &ethpb.BeaconStateGloas{
Slot: cfg.SlotsPerEpoch * primitives.Slot(epoch),
Fork: &ethpb.Fork{
PreviousVersion: cfg.FuluForkVersion,
CurrentVersion: cfg.GloasForkVersion,
Epoch: cfg.GloasForkEpoch,
},
GenesisValidatorsRoot: make([]byte, 32),
FinalizedCheckpoint: &ethpb.Checkpoint{
Epoch: epoch - 1,
Root: make([]byte, 32),
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
Builders: builders,
Validators: []*ethpb.Validator{
{
ExitEpoch: cfg.FarFutureEpoch,
ActivationEpoch: 0,
PublicKey: make([]byte, 48),
},
},
Balances: []uint64{32_000_000_000},
BlockRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
StateRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
RandaoMixes: make([][]byte, cfg.EpochsPerHistoricalVector),
Slashings: make([]uint64, cfg.EpochsPerSlashingsVector),
ExecutionPayloadAvailability: make([]byte, cfg.SlotsPerHistoricalRoot/8),
}
for i := range stProto.BlockRoots {
stProto.BlockRoots[i] = make([]byte, 32)
}
for i := range stProto.StateRoots {
stProto.StateRoots[i] = make([]byte, 32)
}
for i := range stProto.RandaoMixes {
stProto.RandaoMixes[i] = make([]byte, 32)
}
st, err := state_native.InitializeFromProtoUnsafeGloas(stProto)
require.NoError(t, err)
return st, priv
}
func signBuilderExit(t *testing.T, st state.ReadOnlyBeaconState, exit *ethpb.VoluntaryExit, priv bls.SecretKey) *ethpb.SignedVoluntaryExit {
t.Helper()
sb, err := signing.ComputeDomainAndSign(st, exit.Epoch, exit, params.BeaconConfig().DomainVoluntaryExit, priv)
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
return &ethpb.SignedVoluntaryExit{
Exit: exit,
Signature: sig.Marshal(),
}
}
func TestVerifyExitAndSignature_BuilderExit_HappyPath(t *testing.T) {
setGloasTestConfig(t)
builderIndex := primitives.BuilderIndex(0)
epoch := primitives.Epoch(10)
st, priv := newGloasStateWithBuilder(t, builderIndex, epoch)
exit := &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch,
}
signed := signBuilderExit(t, st, exit, priv)
err := blocks.VerifyExitAndSignature(nil, st, signed)
require.NoError(t, err)
}
func TestVerifyExitAndSignature_BuilderNotActive(t *testing.T) {
setGloasTestConfig(t)
builderIndex := primitives.BuilderIndex(0)
epoch := primitives.Epoch(10)
st, priv := newGloasStateWithBuilder(t, builderIndex, epoch)
// Make builder not active by setting withdrawable epoch (already initiated exit).
builder, err := st.Builder(builderIndex)
require.NoError(t, err)
builder.WithdrawableEpoch = 5
require.NoError(t, st.UpdateBuilderAtIndex(builderIndex, builder))
exit := &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch,
}
signed := signBuilderExit(t, st, exit, priv)
err = blocks.VerifyExitAndSignature(nil, st, signed)
assert.ErrorContains(t, "is not active", err)
}
func TestVerifyExitAndSignature_BuilderPendingWithdrawal(t *testing.T) {
setGloasTestConfig(t)
builderIndex := primitives.BuilderIndex(0)
epoch := primitives.Epoch(10)
st, priv := newGloasStateWithBuilder(t, builderIndex, epoch)
// Give the builder a pending withdrawal.
require.NoError(t, st.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{
{
BuilderIndex: builderIndex,
Amount: 1000,
FeeRecipient: make([]byte, 20),
},
}))
exit := &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch,
}
signed := signBuilderExit(t, st, exit, priv)
err := blocks.VerifyExitAndSignature(nil, st, signed)
assert.ErrorContains(t, "pending balance to withdraw", err)
}
func TestVerifyExitAndSignature_BuilderBadSignature(t *testing.T) {
setGloasTestConfig(t)
builderIndex := primitives.BuilderIndex(0)
epoch := primitives.Epoch(10)
st, _ := newGloasStateWithBuilder(t, builderIndex, epoch)
wrongKey, err := bls.RandKey()
require.NoError(t, err)
exit := &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch,
}
signed := signBuilderExit(t, st, exit, wrongKey)
err = blocks.VerifyExitAndSignature(nil, st, signed)
assert.ErrorContains(t, "signature did not verify", err)
}
func TestVerifyExitAndSignature_BuilderExitInFuture(t *testing.T) {
setGloasTestConfig(t)
builderIndex := primitives.BuilderIndex(0)
epoch := primitives.Epoch(10)
st, priv := newGloasStateWithBuilder(t, builderIndex, epoch)
exit := &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch + 1, // Future epoch.
}
signed := signBuilderExit(t, st, exit, priv)
err := blocks.VerifyExitAndSignature(nil, st, signed)
assert.ErrorContains(t, "expected current epoch >= exit epoch", err)
}
func TestProcessVoluntaryExits_BuilderExit(t *testing.T) {
setGloasTestConfig(t)
builderIndex := primitives.BuilderIndex(0)
epoch := primitives.Epoch(10)
st, priv := newGloasStateWithBuilder(t, builderIndex, epoch)
exit := &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch,
}
signed := signBuilderExit(t, st, exit, priv)
newState, err := blocks.ProcessVoluntaryExits(t.Context(), st, []*ethpb.SignedVoluntaryExit{signed}, validators.ExitInformation(st))
require.NoError(t, err)
// Verify builder's withdrawable epoch was set.
builder, err := newState.Builder(builderIndex)
require.NoError(t, err)
cfg := params.BeaconConfig()
expectedWithdrawableEpoch := epoch + cfg.MinBuilderWithdrawabilityDelay
assert.Equal(t, expectedWithdrawableEpoch, builder.WithdrawableEpoch)
}
func TestProcessVoluntaryExits_BuilderExitPreGloas(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 4
cfg.GloasForkEpoch = 100 // Gloas not yet active.
params.SetActiveTestCleanup(t, cfg)
epoch := primitives.Epoch(10)
builderIndex := primitives.BuilderIndex(0)
stProto := &ethpb.BeaconStateFulu{
Slot: cfg.SlotsPerEpoch * primitives.Slot(epoch),
Fork: &ethpb.Fork{
PreviousVersion: cfg.DenebForkVersion,
CurrentVersion: cfg.FuluForkVersion,
Epoch: cfg.FuluForkEpoch,
},
GenesisValidatorsRoot: make([]byte, 32),
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
Validators: []*ethpb.Validator{
{ExitEpoch: cfg.FarFutureEpoch, ActivationEpoch: 0, PublicKey: make([]byte, 48)},
},
Balances: []uint64{32_000_000_000},
BlockRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
StateRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
RandaoMixes: make([][]byte, cfg.EpochsPerHistoricalVector),
Slashings: make([]uint64, cfg.EpochsPerSlashingsVector),
}
for i := range stProto.BlockRoots {
stProto.BlockRoots[i] = make([]byte, 32)
}
for i := range stProto.StateRoots {
stProto.StateRoots[i] = make([]byte, 32)
}
for i := range stProto.RandaoMixes {
stProto.RandaoMixes[i] = make([]byte, 32)
}
st, err := state_native.InitializeFromProtoUnsafeFulu(stProto)
require.NoError(t, err)
signed := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: builderIndex.ToValidatorIndex(),
Epoch: epoch,
},
Signature: make([]byte, 96),
}
// On pre-Gloas state, builder-flagged exits are not routed to the builder path.
// ProcessVoluntaryExits treats the builder-flagged index as a regular validator index,
// which fails because no such validator exists.
_, err = blocks.ProcessVoluntaryExits(t.Context(), st, []*ethpb.SignedVoluntaryExit{signed}, validators.ExitInformation(st))
require.ErrorContains(t, "out of bounds", err)
}

View File

@@ -2,4 +2,5 @@ package blocks
var ProcessBLSToExecutionChange = processBLSToExecutionChange
var ErrInvalidBLSPrefix = errInvalidBLSPrefix
var ErrInvalidWithdrawalCredentials = errInvalidWithdrawalCredentials
var VerifyBlobCommitmentCount = verifyBlobCommitmentCount

View File

@@ -192,11 +192,47 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateGloas:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{
Block: gloasGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}
}
func gloasGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockGloas {
return &ethpb.BeaconBlockGloas{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyGloas{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
SignedExecutionPayloadBid: &ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: make([][]byte, 0),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
PayloadAttestations: make([]*ethpb.PayloadAttestation, 0),
},
}
}
func electraGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockElectra {
return &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],

View File

@@ -147,7 +147,7 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
_, err = blocks.ValidateBLSToExecutionChange(st, signed)
// The state should return an empty validator, even when the validator object in the registry is
// nil. This error should return when the withdrawal credentials are invalid or too short.
require.ErrorIs(t, err, blocks.ErrInvalidBLSPrefix)
require.ErrorIs(t, err, blocks.ErrInvalidWithdrawalCredentials)
})
t.Run("non-existent validator", func(t *testing.T) {
priv, err := bls.RandKey()

View File

@@ -20,37 +20,46 @@ import (
)
func TestProcessPendingDepositsMultiplesSameDeposits(t *testing.T) {
st := stateWithActiveBalanceETH(t, 1000)
deps := make([]*eth.PendingDeposit, 2) // Make same deposit twice
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
for i := 0; i < len(deps); i += 1 {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, 32, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetPendingDeposits(deps))
const (
depositCount = uint64(2)
amountETH = uint64(32)
slot = 0
activeBalanceGwei = 10_000
)
err = electra.ProcessPendingDeposits(context.TODO(), st, 10000)
state := stateWithActiveBalanceETH(t, 0)
secretKey, err := bls.RandKey()
require.NoError(t, err)
val := st.Validators()
seenPubkeys := make(map[string]struct{})
for i := 0; i < len(val); i += 1 {
if len(val[i].PublicKey) == 0 {
continue
}
_, ok := seenPubkeys[string(val[i].PublicKey)]
if ok {
t.Fatalf("duplicated pubkeys")
} else {
seenPubkeys[string(val[i].PublicKey)] = struct{}{}
}
withdrawalCredentialsBytes := make([]byte, 32)
withdrawalCredentialsBytes[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
withdrawalCredentials := bytesutil.ToBytes32(withdrawalCredentialsBytes)
validators := state.Validators()
require.Equal(t, 0, len(validators))
deposits := make([]*eth.PendingDeposit, 0, depositCount)
for range depositCount {
deposit := stateTesting.GeneratePendingDeposit(t, secretKey, amountETH, withdrawalCredentials, slot)
deposits = append(deposits, deposit)
}
err = state.SetPendingDeposits(deposits)
require.NoError(t, err)
err = electra.ProcessPendingDeposits(t.Context(), state, activeBalanceGwei)
require.NoError(t, err)
// The first deposit should create a new validator,
// and the second deposit should top up the same validator
// We should have 1 validator with balance of 64 ETH.
validators = state.Validators()
require.Equal(t, 1, len(validators))
balance, err := state.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, depositCount*amountETH, balance)
}
func TestProcessPendingDeposits(t *testing.T) {

View File

@@ -39,9 +39,6 @@ func ProcessEffectiveBalanceUpdates(st state.BeaconState) error {
// Update effective balances with hysteresis.
validatorFunc := func(idx int, val state.ReadOnlyValidator) (newVal *ethpb.Validator, err error) {
if val.IsNil() {
return nil, fmt.Errorf("validator %d is nil in state", idx)
}
if idx >= len(bals) {
return nil, fmt.Errorf("validator index exceeds validator length in state %d >= %d", idx, len(st.Balances()))
}
@@ -54,8 +51,10 @@ func ProcessEffectiveBalanceUpdates(st state.BeaconState) error {
if balance+downwardThreshold < val.EffectiveBalance() || val.EffectiveBalance()+upwardThreshold < balance {
effectiveBal := min(balance-balance%effBalanceInc, effectiveBalanceLimit)
newVal = val.Copy()
newVal.EffectiveBalance = effectiveBal
if effectiveBal != val.EffectiveBalance() {
newVal = val.Copy()
newVal.EffectiveBalance = effectiveBal
}
}
return newVal, nil
}

View File

@@ -16,9 +16,6 @@ import (
func TestSwitchToCompoundingValidator(t *testing.T) {
s, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
Validators: []*eth.Validator{
{
WithdrawalCredentials: []byte{}, // No withdrawal credentials
},
{
WithdrawalCredentials: []byte{0x01, 0xFF}, // Has withdrawal credentials
},
@@ -27,22 +24,19 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
},
},
Balances: []uint64{
params.BeaconConfig().MinActivationBalance,
params.BeaconConfig().MinActivationBalance,
params.BeaconConfig().MinActivationBalance + 100_000, // Has excess balance
},
})
// Test that a validator with no withdrawal credentials cannot be switched to compounding.
require.NoError(t, err)
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(s, 0))
// Test that a validator with withdrawal credentials can be switched to compounding.
require.NoError(t, electra.SwitchToCompoundingValidator(s, 1))
v, err := s.ValidatorAtIndex(1)
require.NoError(t, electra.SwitchToCompoundingValidator(s, 0))
v, err := s.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, true, bytes.HasPrefix(v.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}), "withdrawal credentials were not updated")
// val_1 Balance is not changed
b, err := s.BalanceAtIndex(1)
// val_0 Balance is not changed
b, err := s.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was changed")
pbd, err := s.PendingDeposits()
@@ -50,8 +44,8 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
require.Equal(t, 0, len(pbd), "pending balance deposits should be empty")
// Test that a validator with excess balance can be switched to compounding, excess balance is queued.
require.NoError(t, electra.SwitchToCompoundingValidator(s, 2))
b, err = s.BalanceAtIndex(2)
require.NoError(t, electra.SwitchToCompoundingValidator(s, 1))
b, err = s.BalanceAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was not changed")
pbd, err = s.PendingDeposits()

View File

@@ -46,6 +46,9 @@ const (
// DataColumnReceived is sent after a data column has been seen after gossip validation rules.
DataColumnReceived = 12
// PayloadAttestationMessageReceived is sent after a payload attestation message is received from gossip or rpc.
PayloadAttestationMessageReceived = 13
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -114,3 +117,8 @@ type DataColumnReceivedData struct {
BlockRoot [32]byte
KzgCommitments [][]byte
}
// PayloadAttestationMessageReceivedData is the data sent with PayloadAttestationMessageReceived events.
type PayloadAttestationMessageReceivedData struct {
Message *ethpb.PayloadAttestationMessage
}

View File

@@ -5,8 +5,10 @@ go_library(
srcs = [
"attestation.go",
"bid.go",
"builder_exit.go",
"deposit_request.go",
"log.go",
"metrics.go",
"payload.go",
"payload_attestation.go",
"pending_payment.go",
@@ -33,11 +35,14 @@ go_library(
"//crypto/bls/common:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -110,7 +110,7 @@ func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyB
return fmt.Errorf("builder %d cannot cover bid amount %d", builderIndex, amount)
}
if err := validatePayloadBidSignature(st, wrappedBid); err != nil {
if err := ValidatePayloadBidSignature(st, wrappedBid); err != nil {
return errors.Wrap(err, "bid signature validation failed")
}
}
@@ -179,10 +179,10 @@ func validateBidConsistency(st state.BeaconState, bid interfaces.ROExecutionPayl
return nil
}
// validatePayloadBidSignature verifies the BLS signature on a signed execution payload bid.
// ValidatePayloadBidSignature verifies the BLS signature on a signed execution payload bid.
// It validates that the signature was created by the builder specified in the bid
// using the appropriate domain for the beacon builder.
func validatePayloadBidSignature(st state.ReadOnlyBeaconState, signedBid interfaces.ROSignedExecutionPayloadBid) error {
func ValidatePayloadBidSignature(st state.ReadOnlyBeaconState, signedBid interfaces.ROSignedExecutionPayloadBid) error {
bid, err := signedBid.Bid()
if err != nil {
return errors.Wrap(err, "failed to get bid")

View File

@@ -0,0 +1,37 @@
package gloas
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// InitiateBuilderExit initiates the exit of a builder by setting its withdrawable epoch.
//
// <spec fn="initiate_builder_exit" fork="gloas" hash="3da938d5">
// def initiate_builder_exit(state: BeaconState, builder_index: BuilderIndex) -> None:
// """
// Initiate the exit of the builder with index ``index``.
// """
// # Return if builder already initiated exit
// builder = state.builders[builder_index]
// if builder.withdrawable_epoch != FAR_FUTURE_EPOCH:
// return
//
// # Set builder exit epoch
// builder.withdrawable_epoch = get_current_epoch(state) + MIN_BUILDER_WITHDRAWABILITY_DELAY
// </spec>
func InitiateBuilderExit(s state.BeaconState, builderIndex primitives.BuilderIndex) error {
builder, err := s.Builder(builderIndex)
if err != nil {
return err
}
// Return if builder already initiated exit.
if builder.WithdrawableEpoch != params.BeaconConfig().FarFutureEpoch {
return nil
}
currentEpoch := slots.ToEpoch(s.Slot())
builder.WithdrawableEpoch = currentEpoch + params.BeaconConfig().MinBuilderWithdrawabilityDelay
return s.UpdateBuilderAtIndex(builderIndex, builder)
}

View File

@@ -29,7 +29,7 @@ func processDepositRequests(ctx context.Context, beaconState state.BeaconState,
// processDepositRequest processes the specific deposit request
//
// <spec fn="process_deposit_request" fork="gloas" hash="3c6b0310">
// <spec fn="process_deposit_request" fork="gloas" hash="0e8b94ab">
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
// # [New in Gloas:EIP7732]
// builder_pubkeys = [b.pubkey for b in state.builders]
@@ -40,8 +40,11 @@ func processDepositRequests(ctx context.Context, beaconState state.BeaconState,
// # already exists with this pubkey, apply the deposit to their balance
// is_builder = deposit_request.pubkey in builder_pubkeys
// is_validator = deposit_request.pubkey in validator_pubkeys
// is_builder_prefix = is_builder_withdrawal_credential(deposit_request.withdrawal_credentials)
// if is_builder or (is_builder_prefix and not is_validator):
// if is_builder or (
// is_builder_withdrawal_credential(deposit_request.withdrawal_credentials)
// and not is_validator
// and not is_pending_validator(state, deposit_request.pubkey)
// ):
// # Apply builder deposits immediately
// apply_deposit_for_builder(
// state,
@@ -74,6 +77,7 @@ func processDepositRequest(beaconState state.BeaconState, request *enginev1.Depo
return errors.Wrap(err, "could not apply builder deposit")
}
if applied {
builderDepositsProcessedTotal.Inc()
return nil
}
@@ -118,13 +122,7 @@ func applyBuilderDepositRequest(beaconState state.BeaconState, request *enginev1
}
pubkey := bytesutil.ToBytes48(request.Pubkey)
_, isValidator := beaconState.ValidatorIndexByPubkey(pubkey)
idx, isBuilder := beaconState.BuilderIndexByPubkey(pubkey)
isBuilderPrefix := helpers.IsBuilderWithdrawalCredential(request.WithdrawalCredentials)
if !isBuilder && (!isBuilderPrefix || isValidator) {
return false, nil
}
if isBuilder {
if err := beaconState.IncreaseBuilderBalance(idx, request.Amount); err != nil {
return false, err
@@ -132,6 +130,20 @@ func applyBuilderDepositRequest(beaconState state.BeaconState, request *enginev1
return true, nil
}
isBuilderPrefix := helpers.IsBuilderWithdrawalCredential(request.WithdrawalCredentials)
_, isValidator := beaconState.ValidatorIndexByPubkey(pubkey)
if !isBuilderPrefix || isValidator {
return false, nil
}
isPending, err := beaconState.IsPendingValidator(request.Pubkey)
if err != nil {
return false, err
}
if isPending {
return false, nil
}
if err := applyDepositForNewBuilder(
beaconState,
request.Pubkey,

View File

@@ -91,6 +91,33 @@ func TestProcessDepositRequest_ExistingBuilderIncreasesBalance(t *testing.T) {
require.Equal(t, 0, len(pending))
}
func TestProcessDepositRequest_BuilderDepositWithExistingPendingDepositStaysPending(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
validatorCred := validatorWithdrawalCredentials()
builderCred := builderWithdrawalCredentials()
existingPending := stateTesting.GeneratePendingDeposit(t, sk, 1234, validatorCred, 0)
req := depositRequestFromPending(stateTesting.GeneratePendingDeposit(t, sk, 200, builderCred, 1), 9)
st := newGloasState(t, nil, nil)
require.NoError(t, st.SetPendingDeposits([]*ethpb.PendingDeposit{existingPending}))
err = processDepositRequest(st, req)
require.NoError(t, err)
_, ok := st.BuilderIndexByPubkey(toBytes48(req.Pubkey))
require.Equal(t, false, ok)
pending, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 2, len(pending))
require.DeepEqual(t, existingPending.PublicKey, pending[0].PublicKey)
require.DeepEqual(t, req.Pubkey, pending[1].PublicKey)
require.DeepEqual(t, req.WithdrawalCredentials, pending[1].WithdrawalCredentials)
require.Equal(t, req.Amount, pending[1].Amount)
}
func TestApplyDepositForBuilder_InvalidSignatureIgnoresDeposit(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)

View File

@@ -0,0 +1,27 @@
package gloas
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
builderPendingPaymentsProcessedTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "builder_pending_payments_processed_total",
Help: "The number of builder pending payments promoted into the builder pending withdrawal queue.",
},
)
builderDepositsProcessedTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "builder_deposits_processed_total",
Help: "The number of builder-related deposit requests processed.",
},
)
builderExitsProcessedTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "builder_exits_processed_total",
Help: "The number of processed builder exits.",
},
)
)

View File

@@ -17,7 +17,8 @@ import (
"github.com/pkg/errors"
)
// ProcessExecutionPayload processes the signed execution payload envelope for the Gloas fork.
// ProcessExecutionPayload is the gossip entry point: verify signature, validate
// consistency, apply state mutations, and verify the post-payload state root.
//
// <spec fn="process_execution_payload" fork="gloas" hash="36bd3af3">
// def process_execution_payload(
@@ -108,7 +109,7 @@ func ProcessExecutionPayload(
st state.BeaconState,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) error {
if err := VerifyExecutionPayloadEnvelopeSignature(st, signedEnvelope); err != nil {
if err := verifyExecutionPayloadEnvelopeSignature(st, signedEnvelope); err != nil {
return errors.Wrap(err, "signature verification failed")
}
@@ -117,29 +118,132 @@ func ProcessExecutionPayload(
return errors.Wrap(err, "could not get envelope from signed envelope")
}
if err := ApplyExecutionPayload(ctx, st, envelope); err != nil {
if err := cacheLatestBlockHeaderStateRoot(ctx, st); err != nil {
return err
}
r, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get hash tree root")
if err := validatePayloadConsistency(st, envelope); err != nil {
return err
}
if r != envelope.StateRoot() {
return fmt.Errorf("state root mismatch: expected %#x, got %#x", envelope.StateRoot(), r)
if err := applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelope.BlockHash()); err != nil {
return err
}
return nil
return verifyPostStateRoot(ctx, st, envelope)
}
// ApplyExecutionPayload applies the execution payload envelope to the state and performs the same
// consistency checks as the full processing path. This keeps the post-payload state root computation
// on a shared code path, even though some bid/payload checks are not strictly required for the root itself.
// ProcessExecutionPayloadWithDeferredSig is the init-sync entry point: extract the
// signature for deferred verification, validate consistency, apply state
// mutations, and verify the post-payload state root. The caller provides the
// previousStateRoot to avoid recomputing it.
func ProcessExecutionPayloadWithDeferredSig(
ctx context.Context,
st state.BeaconState,
previousStateRoot [32]byte,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) (*bls.SignatureBatch, error) {
sigBatch, err := ExecutionPayloadEnvelopeSignatureBatch(st, signedEnvelope)
if err != nil {
return nil, errors.Wrap(err, "could not extract envelope signature batch")
}
envelope, err := signedEnvelope.Envelope()
if err != nil {
return nil, errors.Wrap(err, "could not get envelope from signed envelope")
}
if err := setLatestBlockHeaderStateRoot(st, previousStateRoot); err != nil {
return nil, errors.Wrap(err, "could not set latest block header state root")
}
if err := validatePayloadConsistency(st, envelope); err != nil {
return nil, err
}
if err := applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelope.BlockHash()); err != nil {
return nil, err
}
if err := verifyPostStateRoot(ctx, st, envelope); err != nil {
return nil, err
}
return sigBatch, nil
}
// ProcessBlindedExecutionPayload is the replay/stategen entry
// point: patch the block header, do minimal bid consistency checks, and apply
// state mutations. No payload data is available — only the blinded envelope.
// A nil envelope is a no-op (the payload was not delivered for that slot).
func ProcessBlindedExecutionPayload(
ctx context.Context,
st state.BeaconState,
previousStateRoot [32]byte,
envelope interfaces.ROBlindedExecutionPayloadEnvelope,
) error {
if envelope == nil {
return nil
}
if err := setLatestBlockHeaderStateRoot(st, previousStateRoot); err != nil {
return errors.Wrap(err, "could not set latest block header state root")
}
if envelope.Slot() != st.Slot() {
return errors.Errorf("blinded envelope slot does not match state slot: envelope=%d, state=%d", envelope.Slot(), st.Slot())
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get latest execution payload bid")
}
if latestBid == nil {
return errors.New("latest execution payload bid is nil")
}
if envelope.BuilderIndex() != latestBid.BuilderIndex() {
return errors.Errorf(
"blinded envelope builder index does not match committed bid builder index: envelope=%d, bid=%d",
envelope.BuilderIndex(),
latestBid.BuilderIndex(),
)
}
bidBlockHash := latestBid.BlockHash()
envelopeBlockHash := envelope.BlockHash()
if bidBlockHash != envelopeBlockHash {
return errors.Errorf(
"blinded envelope block hash does not match committed bid block hash: envelope=%#x, bid=%#x",
envelopeBlockHash,
bidBlockHash,
)
}
return applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelopeBlockHash)
}
// ApplyExecutionPayload patches the block header state root, validates
// consistency, and applies state mutations. No signature or post-state-root
// verification is performed. Used by the proposer path to compute the
// post-payload state root for the envelope.
func ApplyExecutionPayload(
ctx context.Context,
st state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
if err := cacheLatestBlockHeaderStateRoot(ctx, st); err != nil {
return err
}
if err := validatePayloadConsistency(st, envelope); err != nil {
return err
}
return applyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), envelope.BlockHash())
}
func setLatestBlockHeaderStateRoot(st state.BeaconState, root [32]byte) error {
latestHeader := st.LatestBlockHeader()
latestHeader.StateRoot = root[:]
return st.SetLatestBlockHeader(latestHeader)
}
// cacheLatestBlockHeaderStateRoot fills in the state root on the latest block
// header if it hasn't been set yet (the spec's "cache latest block header
// state root" step).
func cacheLatestBlockHeaderStateRoot(ctx context.Context, st state.BeaconState) error {
latestHeader := st.LatestBlockHeader()
if len(latestHeader.StateRoot) == 0 || bytes.Equal(latestHeader.StateRoot, make([]byte, 32)) {
previousStateRoot, err := st.HashTreeRoot(ctx)
@@ -151,7 +255,13 @@ func ApplyExecutionPayload(
return errors.Wrap(err, "could not set latest block header")
}
}
return nil
}
// validatePayloadConsistency checks that the envelope and payload are consistent
// with the beacon block header, the committed bid, and the current state.
func validatePayloadConsistency(st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) error {
latestHeader := st.LatestBlockHeader()
blockHeaderRoot, err := latestHeader.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute block header root")
@@ -190,7 +300,6 @@ func ApplyExecutionPayload(
if err != nil {
return errors.Wrap(err, "could not get withdrawals from payload")
}
ok, err := st.WithdrawalsMatchPayloadExpected(withdrawals)
if err != nil {
return errors.Wrap(err, "could not validate payload withdrawals")
@@ -225,14 +334,26 @@ func ApplyExecutionPayload(
return errors.Errorf("payload timestamp does not match expected timestamp: payload=%d, expected=%d", payload.Timestamp(), uint64(t.Unix()))
}
if err := ApplyExecutionPayloadStateMutations(ctx, st, envelope.ExecutionRequests(), [32]byte(payload.BlockHash())); err != nil {
return err
}
return nil
}
func ApplyExecutionPayloadStateMutations(
// verifyPostStateRoot checks that the post-payload state root matches the
// envelope's declared state root.
func verifyPostStateRoot(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) error {
r, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute post-envelope state root")
}
if r != envelope.StateRoot() {
return fmt.Errorf("state root mismatch: expected %#x, got %#x", envelope.StateRoot(), r)
}
return nil
}
// applyExecutionPayloadStateMutations applies the state-changing operations
// from an execution payload: process execution requests, queue builder payment,
// set execution payload availability, and update the latest block hash.
func applyExecutionPayloadStateMutations(
ctx context.Context,
st state.BeaconState,
executionRequests *enginev1.ExecutionRequests,
@@ -257,6 +378,107 @@ func ApplyExecutionPayloadStateMutations(
return nil
}
// ExecutionPayloadEnvelopeSignatureBatch extracts the BLS signature from a signed execution payload
// envelope as a SignatureBatch for deferred batch verification.
func ExecutionPayloadEnvelopeSignatureBatch(
st state.BeaconState,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) (*bls.SignatureBatch, error) {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return nil, fmt.Errorf("failed to get envelope: %w", err)
}
builderIdx := envelope.BuilderIndex()
publicKey, err := envelopePublicKey(st, builderIdx)
if err != nil {
return nil, err
}
currentEpoch := slots.ToEpoch(envelope.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return nil, fmt.Errorf("failed to compute signing domain: %w", err)
}
signingRoot, err := signedEnvelope.SigningRoot(domain)
if err != nil {
return nil, fmt.Errorf("failed to compute signing root: %w", err)
}
signatureBytes := signedEnvelope.Signature()
return &bls.SignatureBatch{
Signatures: [][]byte{signatureBytes[:]},
PublicKeys: []bls.PublicKey{publicKey},
Messages: [][32]byte{signingRoot},
Descriptions: []string{"execution payload envelope signature"},
}, nil
}
// verifyExecutionPayloadEnvelopeSignature verifies the BLS signature on a signed execution payload envelope.
//
// <spec fn="verify_execution_payload_envelope_signature" fork="gloas" style="full" hash="49483ae2">
// def verify_execution_payload_envelope_signature(
// state: BeaconState, signed_envelope: SignedExecutionPayloadEnvelope
// ) -> bool:
// builder_index = signed_envelope.message.builder_index
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// validator_index = state.latest_block_header.proposer_index
// pubkey = state.validators[validator_index].pubkey
// else:
// pubkey = state.builders[builder_index].pubkey
//
// signing_root = compute_signing_root(
// signed_envelope.message, get_domain(state, DOMAIN_BEACON_BUILDER)
// )
// return bls.Verify(pubkey, signing_root, signed_envelope.signature)
// </spec>
func verifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope) error {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return fmt.Errorf("failed to get envelope: %w", err)
}
builderIdx := envelope.BuilderIndex()
publicKey, err := envelopePublicKey(st, builderIdx)
if err != nil {
return err
}
signatureBytes := signedEnvelope.Signature()
signature, err := bls.SignatureFromBytes(signatureBytes[:])
if err != nil {
return fmt.Errorf("invalid signature format: %w", err)
}
currentEpoch := slots.ToEpoch(envelope.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return fmt.Errorf("failed to compute signing domain: %w", err)
}
signingRoot, err := signedEnvelope.SigningRoot(domain)
if err != nil {
return fmt.Errorf("failed to compute signing root: %w", err)
}
if !signature.Verify(publicKey, signingRoot[:]) {
return fmt.Errorf("signature verification failed: %w", signing.ErrSigFailedToVerify)
}
return nil
}
func envelopePublicKey(st state.BeaconState, builderIdx primitives.BuilderIndex) (bls.PublicKey, error) {
if builderIdx == params.BeaconConfig().BuilderIndexSelfBuild {
return proposerPublicKey(st)
@@ -293,10 +515,6 @@ func builderPublicKey(st state.BeaconState, builderIdx primitives.BuilderIndex)
}
// processExecutionRequests processes deposits, withdrawals, and consolidations from execution requests.
// Spec v1.7.0-alpha.0 (pseudocode):
// for op in requests.deposits: process_deposit_request(state, op)
// for op in requests.withdrawals: process_withdrawal_request(state, op)
// for op in requests.consolidations: process_consolidation_request(state, op)
func processExecutionRequests(ctx context.Context, st state.BeaconState, rqs *enginev1.ExecutionRequests) error {
if err := processDepositRequests(ctx, st, rqs.Deposits); err != nil {
return errors.Wrap(err, "could not process deposit requests")
@@ -313,65 +531,3 @@ func processExecutionRequests(ctx context.Context, st state.BeaconState, rqs *en
}
return nil
}
// VerifyExecutionPayloadEnvelopeSignature verifies the BLS signature on a signed execution payload envelope.
// <spec fn="verify_execution_payload_envelope_signature" fork="gloas" style="full" hash="49483ae2">
// def verify_execution_payload_envelope_signature(
//
// state: BeaconState, signed_envelope: SignedExecutionPayloadEnvelope
//
// ) -> bool:
//
// builder_index = signed_envelope.message.builder_index
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// validator_index = state.latest_block_header.proposer_index
// pubkey = state.validators[validator_index].pubkey
// else:
// pubkey = state.builders[builder_index].pubkey
//
// signing_root = compute_signing_root(
// signed_envelope.message, get_domain(state, DOMAIN_BEACON_BUILDER)
// )
// return bls.Verify(pubkey, signing_root, signed_envelope.signature)
//
// </spec>
func VerifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope) error {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return fmt.Errorf("failed to get envelope: %w", err)
}
builderIdx := envelope.BuilderIndex()
publicKey, err := envelopePublicKey(st, builderIdx)
if err != nil {
return err
}
signatureBytes := signedEnvelope.Signature()
signature, err := bls.SignatureFromBytes(signatureBytes[:])
if err != nil {
return fmt.Errorf("invalid signature format: %w", err)
}
currentEpoch := slots.ToEpoch(envelope.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return fmt.Errorf("failed to compute signing domain: %w", err)
}
signingRoot, err := signedEnvelope.SigningRoot(domain)
if err != nil {
return fmt.Errorf("failed to compute signing root: %w", err)
}
if !signature.Verify(publicKey, signingRoot[:]) {
return fmt.Errorf("signature verification failed: %w", signing.ErrSigFailedToVerify)
}
return nil
}

View File

@@ -19,6 +19,7 @@ import (
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
@@ -80,7 +81,7 @@ func ProcessPayloadAttestations(ctx context.Context, st state.BeaconState, body
// indexedPayloadAttestation converts a payload attestation into its indexed form.
func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState, att *eth.PayloadAttestation) (*consensus_types.IndexedPayloadAttestation, error) {
committee, err := PayloadCommittee(ctx, st, att.Data.Slot)
committee, err := st.PayloadCommitteeReadOnly(att.Data.Slot)
if err != nil {
return nil, err
}
@@ -99,10 +100,10 @@ func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState
}, nil
}
// PayloadCommittee returns the payload timeliness committee for a given slot for the state.
// computePTC computes the payload timeliness committee for a given slot.
//
// <spec fn="get_ptc" fork="gloas" hash="ae15f761">
// def get_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// <spec fn="compute_ptc" fork="gloas" hash="0f323552">
// def compute_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// """
// Get the payload timeliness committee for the given ``slot``.
// """
@@ -118,7 +119,7 @@ func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState
// state, indices, seed, size=PTC_SIZE, shuffle_indices=False
// )
// </spec>
func PayloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
func computePTC(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
epoch := slots.ToEpoch(slot)
seed, err := ptcSeed(st, epoch, slot)
if err != nil {
@@ -166,7 +167,7 @@ func PayloadCommitteeIndex(
slot primitives.Slot,
validatorIndex primitives.ValidatorIndex,
) (uint64, error) {
ptc, err := PayloadCommittee(ctx, st, slot)
ptc, err := st.PayloadCommitteeReadOnly(slot)
if err != nil {
return 0, err
}
@@ -275,12 +276,12 @@ func acceptByBalance(st state.ReadOnlyBeaconState, idx primitives.ValidatorIndex
offset := (round % 16) * 2
randomValue := uint64(binary.LittleEndian.Uint16(random[offset : offset+2])) // 16-bit draw per spec
val, err := st.ValidatorAtIndex(idx)
val, err := st.ValidatorAtIndexReadOnly(idx)
if err != nil {
return false, errors.Wrapf(err, "validator %d", idx)
}
return val.EffectiveBalance*fieldparams.MaxRandomValueElectra >= maxBalance*randomValue, nil
return val.EffectiveBalance()*fieldparams.MaxRandomValueElectra >= maxBalance*randomValue, nil
}
// validIndexedPayloadAttestation verifies the signature of an indexed payload attestation.
@@ -342,3 +343,43 @@ func validIndexedPayloadAttestation(st state.ReadOnlyBeaconState, att *consensus
}
return nil
}
// ProcessPTCWindow rotates the cached PTC window at epoch boundaries by computing
// PTC assignments for the new lookahead epoch and shifting the window.
//
// <spec fn="process_ptc_window" fork="gloas" hash="7be3d509">
// def process_ptc_window(state: BeaconState) -> None:
// """
// Update the cached PTC window.
// """
// # Shift all epochs forward by one
// state.ptc_window[: len(state.ptc_window) - SLOTS_PER_EPOCH] = state.ptc_window[SLOTS_PER_EPOCH:]
// # Fill in the last epoch
// next_epoch = Epoch(get_current_epoch(state) + MIN_SEED_LOOKAHEAD + 1)
// start_slot = compute_start_slot_at_epoch(next_epoch)
// state.ptc_window[len(state.ptc_window) - SLOTS_PER_EPOCH :] = [
// compute_ptc(state, Slot(slot)) for slot in range(start_slot, start_slot + SLOTS_PER_EPOCH)
// ]
// </spec>
func ProcessPTCWindow(ctx context.Context, st state.BeaconState) error {
_, span := trace.StartSpan(ctx, "gloas.ProcessPTCWindow")
defer span.End()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
lastEpoch := slots.ToEpoch(st.Slot()) + params.BeaconConfig().MinSeedLookahead + 1
startSlot, err := slots.EpochStart(lastEpoch)
if err != nil {
return err
}
newSlots := make([]*eth.PTCs, slotsPerEpoch)
for i := range slotsPerEpoch {
ptc, err := computePTC(ctx, st, startSlot+primitives.Slot(i))
if err != nil {
return err
}
newSlots[i] = &eth.PTCs{ValidatorIndices: ptc}
}
return st.RotatePTCWindow(newSlots)
}

View File

@@ -2,13 +2,14 @@ package gloas_test
import (
"bytes"
"slices"
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -119,7 +120,6 @@ func TestProcessPayloadAttestations_EmptyAggregationBits(t *testing.T) {
}
func TestProcessPayloadAttestations_HappyPath(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
sk1, pk1 := newKey(t)
@@ -150,7 +150,6 @@ func TestProcessPayloadAttestations_HappyPath(t *testing.T) {
}
func TestProcessPayloadAttestations_MultipleAttestations(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
sk1, pk1 := newKey(t)
@@ -216,6 +215,25 @@ func TestProcessPayloadAttestations_IndexedVerificationError(t *testing.T) {
}
func newTestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) state.BeaconState {
t.Helper()
st, err := testutil.NewBeaconStateGloas(func(seed *eth.BeaconStateGloas) error {
seed.Slot = slot
seed.Validators = vals
seed.Balances = make([]uint64, len(vals))
for i, v := range vals {
seed.Balances[i] = v.EffectiveBalance
}
seed.PtcWindow = deterministicPTCWindow(len(vals))
return nil
})
require.NoError(t, err)
return st
}
func newPhase0TestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) state.BeaconState {
t.Helper()
st, err := testutil.NewBeaconState()
require.NoError(t, err)
for _, v := range vals {
@@ -223,10 +241,25 @@ func newTestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) sta
require.NoError(t, st.AppendBalance(v.EffectiveBalance))
}
require.NoError(t, st.SetSlot(slot))
require.NoError(t, helpers.UpdateCommitteeCache(t.Context(), st, slots.ToEpoch(slot)))
return st
}
func deterministicPTCWindow(validatorCount int) []*eth.PTCs {
window := make([]*eth.PTCs, 3*params.BeaconConfig().SlotsPerEpoch)
indices := make([]primitives.ValidatorIndex, fieldparams.PTCSize)
if validatorCount > 0 {
for i := range indices {
indices[i] = primitives.ValidatorIndex(i % validatorCount)
}
}
for i := range window {
window[i] = &eth.PTCs{
ValidatorIndices: slices.Clone(indices),
}
}
return window
}
func setupTestConfig(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
@@ -291,6 +324,50 @@ func signAttestation(t *testing.T, st state.ReadOnlyBeaconState, data *eth.Paylo
return agg.Marshal()
}
func TestProcessPTCWindow(t *testing.T) {
fuluSt, _ := testutil.DeterministicGenesisStateFulu(t, 256)
st, err := gloas.UpgradeToGloas(fuluSt)
require.NoError(t, err)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Get original window.
origWindow, err := st.PTCWindow()
require.NoError(t, err)
windowSize := int(slotsPerEpoch.Mul(uint64(2 + params.BeaconConfig().MinSeedLookahead)))
require.Equal(t, windowSize, len(origWindow))
// Advance state to next epoch boundary so process_ptc_window sees a new epoch.
require.NoError(t, st.SetSlot(slotsPerEpoch))
// Process PTC window — should rotate.
require.NoError(t, gloas.ProcessPTCWindow(t.Context(), st))
newWindow, err := st.PTCWindow()
require.NoError(t, err)
require.Equal(t, windowSize, len(newWindow))
// The first two epochs should be the old epochs 1 and 2 (shifted left by one epoch).
for i := range 2 * slotsPerEpoch {
require.DeepEqual(t, origWindow[slotsPerEpoch+i], newWindow[i])
}
// The last epoch should be freshly computed — not all zeros.
lastStart := 2 * slotsPerEpoch
for i := range slotsPerEpoch {
ptcSlot := newWindow[lastStart+i]
require.NotNil(t, ptcSlot)
nonZero := false
for _, idx := range ptcSlot.ValidatorIndices {
if idx != 0 {
nonZero = true
break
}
}
require.Equal(t, true, nonZero, "last epoch slot %d should have non-zero validator indices", i)
}
}
type validatorLookupErrState struct {
state.BeaconState
errIndex primitives.ValidatorIndex

View File

@@ -242,13 +242,74 @@ func TestProcessExecutionPayload_Success(t *testing.T) {
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestProcessExecutionPayloadWithDeferredSig_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
header := fixture.state.LatestBlockHeader()
var previousStateRoot [32]byte
copy(previousStateRoot[:], header.StateRoot)
sigBatch, err := ProcessExecutionPayloadWithDeferredSig(t.Context(), fixture.state, previousStateRoot, fixture.signed)
require.NoError(t, err)
require.NotNil(t, sigBatch)
require.Equal(t, 1, len(sigBatch.Signatures))
require.Equal(t, 1, len(sigBatch.PublicKeys))
require.Equal(t, 1, len(sigBatch.Messages))
require.Equal(t, 1, len(sigBatch.Descriptions))
require.Equal(t, "execution payload envelope signature", sigBatch.Descriptions[0])
valid, err := sigBatch.Verify()
require.NoError(t, err)
require.Equal(t, true, valid)
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
var expectedHash [32]byte
copy(expectedHash[:], fixture.payload.BlockHash)
require.Equal(t, expectedHash, latestHash)
available, err := fixture.state.ExecutionPayloadAvailability(fixture.slot)
require.NoError(t, err)
require.Equal(t, uint64(1), available)
updatedHeader := fixture.state.LatestBlockHeader()
require.DeepEqual(t, previousStateRoot[:], updatedHeader.StateRoot)
}
func TestProcessExecutionPayloadWithDeferredSig_PreviousStateRootMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
previousStateRoot := [32]byte{0x42}
_, err := ProcessExecutionPayloadWithDeferredSig(t.Context(), fixture.state, previousStateRoot, fixture.signed)
require.ErrorContains(t, "envelope beacon block root does not match state latest block header root", err)
}
func TestApplyExecutionPayload_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
envelope, err := fixture.signed.Envelope()
require.NoError(t, err)
require.NoError(t, ApplyExecutionPayload(t.Context(), fixture.state, envelope))
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
var expectedHash [32]byte
copy(expectedHash[:], fixture.payload.BlockHash)
require.Equal(t, expectedHash, latestHash)
available, err := fixture.state.ExecutionPayloadAvailability(fixture.slot)
require.NoError(t, err)
require.Equal(t, uint64(1), available)
}
func TestApplyExecutionPayloadStateMutations_UpdatesAvailabilityAndLatestHash(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
newHash := [32]byte{}
newHash[0] = 0x99
require.NoError(t, ApplyExecutionPayloadStateMutations(t.Context(), fixture.state, fixture.envelope.ExecutionRequests, newHash))
require.NoError(t, applyExecutionPayloadStateMutations(t.Context(), fixture.state, fixture.envelope.ExecutionRequests, newHash))
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
@@ -282,6 +343,95 @@ func TestQueueBuilderPayment_ZeroAmountClearsSlot(t *testing.T) {
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestProcessBlindedExecutionPayload_NilEnvelope(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
require.NoError(t, ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, nil))
}
func TestProcessBlindedExecutionPayload_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
st := fixture.state
blockHash := [32]byte(fixture.payload.BlockHash)
stateRoot := [32]byte{0xAA}
envelope := &ethpb.SignedBlindedExecutionPayloadEnvelope{
Message: &ethpb.BlindedExecutionPayloadEnvelope{
Slot: fixture.slot,
BuilderIndex: fixture.envelope.BuilderIndex,
BlockHash: blockHash[:],
BeaconBlockRoot: make([]byte, 32),
ExecutionRequests: fixture.envelope.ExecutionRequests,
},
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
require.NoError(t, ProcessBlindedExecutionPayload(t.Context(), st, stateRoot, wrappedEnv))
latestHash, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, blockHash, latestHash)
available, err := st.ExecutionPayloadAvailability(fixture.slot)
require.NoError(t, err)
require.Equal(t, uint64(1), available)
header := st.LatestBlockHeader()
require.DeepEqual(t, stateRoot[:], header.StateRoot)
}
func TestProcessBlindedExecutionPayload_SlotMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
envelope := &ethpb.SignedBlindedExecutionPayloadEnvelope{
Message: &ethpb.BlindedExecutionPayloadEnvelope{
Slot: fixture.slot + 1,
BlockHash: make([]byte, 32),
BeaconBlockRoot: make([]byte, 32),
},
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
err = ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
require.ErrorContains(t, "blinded envelope slot does not match state slot", err)
}
func TestProcessBlindedExecutionPayload_BuilderIndexMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
blockHash := [32]byte(fixture.payload.BlockHash)
envelope := &ethpb.SignedBlindedExecutionPayloadEnvelope{
Message: &ethpb.BlindedExecutionPayloadEnvelope{
Slot: fixture.slot,
BuilderIndex: 999,
BlockHash: blockHash[:],
BeaconBlockRoot: make([]byte, 32),
},
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
err = ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
require.ErrorContains(t, "builder index does not match", err)
}
func TestProcessBlindedExecutionPayload_BlockHashMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
wrongHash := bytes.Repeat([]byte{0xFF}, 32)
envelope := &ethpb.SignedBlindedExecutionPayloadEnvelope{
Message: &ethpb.BlindedExecutionPayloadEnvelope{
Slot: fixture.slot,
BuilderIndex: fixture.envelope.BuilderIndex,
BlockHash: wrongHash,
BeaconBlockRoot: make([]byte, 32),
},
}
wrappedEnv, err := blocks.WrappedROBlindedExecutionPayloadEnvelope(envelope.Message)
require.NoError(t, err)
err = ProcessBlindedExecutionPayload(t.Context(), fixture.state, [32]byte{}, wrappedEnv)
require.ErrorContains(t, "block hash does not match", err)
}
func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
@@ -314,14 +464,14 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
require.NoError(t, VerifyExecutionPayloadEnvelopeSignature(st, signed))
require.NoError(t, verifyExecutionPayloadEnvelopeSignature(st, signed))
})
t.Run("builder", func(t *testing.T) {
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(fixture.signedProto)
require.NoError(t, err)
require.NoError(t, VerifyExecutionPayloadEnvelopeSignature(fixture.state, signed))
require.NoError(t, verifyExecutionPayloadEnvelopeSignature(fixture.state, signed))
})
t.Run("invalid signature", func(t *testing.T) {
@@ -347,7 +497,7 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = VerifyExecutionPayloadEnvelopeSignature(st, badSigned)
err = verifyExecutionPayloadEnvelopeSignature(st, badSigned)
require.ErrorContains(t, "invalid signature format", err)
})
@@ -359,7 +509,7 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = VerifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
err = verifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
require.ErrorContains(t, "invalid signature format", err)
})
})

View File

@@ -52,6 +52,7 @@ func ProcessBuilderPendingPayments(state state.BeaconState) error {
if err := state.RotateBuilderPendingPayments(); err != nil {
return errors.Wrap(err, "could not rotate builder pending payments")
}
builderPendingPaymentsProcessedTotal.Add(float64(len(withdrawals)))
return nil
}

View File

@@ -1,6 +1,8 @@
package gloas
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
@@ -9,12 +11,13 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// UpgradeToGloas takes a generic pre-fork state and returns the upgraded Gloas state.
//
// <spec fn="upgrade_to_gloas" fork="gloas" hash="6e66df25">
// <spec fn="upgrade_to_gloas" fork="gloas" hash="8f67112c">
// def upgrade_to_gloas(pre: fulu.BeaconState) -> BeaconState:
// epoch = fulu.get_current_epoch(pre)
//
@@ -81,6 +84,8 @@ import (
// latest_block_hash=pre.latest_execution_payload_header.block_hash,
// # [New in Gloas:EIP7732]
// payload_expected_withdrawals=[],
// # [New in Gloas:EIP7732]
// ptc_window=initialize_ptc_window(pre),
// )
//
// # [New in Gloas:EIP7732]
@@ -143,12 +148,73 @@ func UpgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, errors.Wrap(err, "could not convert to gloas")
}
ptcWindow, err := initializePTCWindow(context.Background(), s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize ptc window")
}
if err := s.SetPTCWindow(ptcWindow); err != nil {
return nil, errors.Wrap(err, "failed to set ptc window")
}
if err := s.OnboardBuildersFromPendingDeposits(); err != nil {
return nil, errors.Wrap(err, "failed to onboard builders from pending deposits")
}
return s, nil
}
// initializePTCWindow builds the initial PTC window for the Gloas fork upgrade.
//
// <spec fn="initialize_ptc_window" fork="gloas" hash="3764b7f5">
// def initialize_ptc_window(
// state: BeaconState,
// ) -> Vector[Vector[ValidatorIndex, PTC_SIZE], (2 + MIN_SEED_LOOKAHEAD) * SLOTS_PER_EPOCH]:
// """
// Return the cached PTC window starting from the current epoch.
// Used to initialize the ``ptc_window`` field in the beacon state at genesis and after forks.
// """
// empty_previous_epoch = [
// Vector[ValidatorIndex, PTC_SIZE]([ValidatorIndex(0) for _ in range(PTC_SIZE)])
// for _ in range(SLOTS_PER_EPOCH)
// ]
//
// ptcs = []
// current_epoch = get_current_epoch(state)
// for e in range(1 + MIN_SEED_LOOKAHEAD):
// epoch = Epoch(current_epoch + e)
// start_slot = compute_start_slot_at_epoch(epoch)
// ptcs += [compute_ptc(state, Slot(start_slot + i)) for i in range(SLOTS_PER_EPOCH)]
//
// return empty_previous_epoch + ptcs
// </spec>
func initializePTCWindow(ctx context.Context, st state.ReadOnlyBeaconState) ([]*ethpb.PTCs, error) {
currentEpoch := slots.ToEpoch(st.Slot())
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
windowSize := slotsPerEpoch.Mul(uint64(2 + params.BeaconConfig().MinSeedLookahead))
window := make([]*ethpb.PTCs, 0, windowSize)
// The previous epoch has no cached data at fork time; fill it with empty slots.
for range slotsPerEpoch {
window = append(window, &ethpb.PTCs{
ValidatorIndices: make([]primitives.ValidatorIndex, fieldparams.PTCSize),
})
}
// Compute PTCs for the current epoch through the lookahead epochs.
startSlot, err := slots.EpochStart(currentEpoch)
if err != nil {
return nil, err
}
totalSlots := slotsPerEpoch.Mul(uint64(1 + params.BeaconConfig().MinSeedLookahead))
for i := range totalSlots {
ptc, err := computePTC(ctx, st, startSlot+i)
if err != nil {
return nil, err
}
window = append(window, &ethpb.PTCs{ValidatorIndices: ptc})
}
return window, nil
}
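
With mainnet parameters (assuming SLOTS_PER_EPOCH = 32 and MIN_SEED_LOOKAHEAD = 1), the window spans (2 + 1) * 32 = 96 slots: one zero-filled previous epoch plus two computed epochs. A quick check of that arithmetic:

```go
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32   // mainnet SLOTS_PER_EPOCH (assumed)
	const minSeedLookahead = 1 // mainnet MIN_SEED_LOOKAHEAD (assumed)

	emptyPrevious := slotsPerEpoch                     // zero-filled previous epoch
	computed := (1 + minSeedLookahead) * slotsPerEpoch // current epoch + lookahead
	fmt.Println(emptyPrevious + computed)              // 96 == (2 + minSeedLookahead) * slotsPerEpoch
}
```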
func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
@@ -226,10 +292,6 @@ func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
proposerLookaheadU64 := make([]uint64, len(proposerLookahead))
for i, v := range proposerLookahead {
proposerLookaheadU64[i] = uint64(v)
}
executionPayloadAvailability := make([]byte, int((params.BeaconConfig().SlotsPerHistoricalRoot+7)/8))
for i := range executionPayloadAvailability {
@@ -293,7 +355,7 @@ func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookaheadU64,
ProposerLookahead: proposerLookahead,
Builders: []*ethpb.Builder{},
NextWithdrawalBuilderIndex: primitives.BuilderIndex(0),
ExecutionPayloadAvailability: executionPayloadAvailability,

View File

@@ -103,7 +103,7 @@ func TestUpgradeToGloas_Basic(t *testing.T) {
}
func TestUpgradeToGloas_OnboardsBuilderDeposit(t *testing.T) {
st, _ := util.DeterministicGenesisStateFulu(t, 4)
st, _ := util.DeterministicGenesisStateFulu(t, params.BeaconConfig().MaxValidatorsPerCommittee)
sk, err := bls.RandKey()
require.NoError(t, err)

View File

@@ -658,8 +658,8 @@ func ComputeCommittee(
}
// InitializeProposerLookahead computes the list of proposer indices for the next MIN_SEED_LOOKAHEAD + 1 epochs.
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]uint64, error) {
lookAhead := make([]uint64, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]primitives.ValidatorIndex, error) {
lookAhead := make([]primitives.ValidatorIndex, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
for i := range params.BeaconConfig().MinSeedLookahead + 1 {
indices, err := ActiveValidatorIndices(ctx, state, epoch+i)
if err != nil {
@@ -669,9 +669,7 @@ func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeacon
if err != nil {
return nil, errors.Wrap(err, "could not compute proposer indices")
}
for _, proposerIndex := range proposerIndices {
lookAhead = append(lookAhead, uint64(proposerIndex))
}
lookAhead = append(lookAhead, proposerIndices...)
}
return lookAhead, nil
}
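
Under the same assumed mainnet parameters, the returned slice holds (MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH = 64 proposer indices, which is why the Fulu test fixture further down allocates a 64-element lookahead:

```go
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32   // mainnet SLOTS_PER_EPOCH (assumed)
	const minSeedLookahead = 1 // mainnet MIN_SEED_LOOKAHEAD (assumed)
	fmt.Println((minSeedLookahead + 1) * slotsPerEpoch) // 64 lookahead entries
}
```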

View File

@@ -945,13 +945,8 @@ func TestInitializeProposerLookahead_RegressionTest(t *testing.T) {
endIdx := startIdx + slotsPerEpoch
actualProposers := proposerLookahead[startIdx:endIdx]
expectedUint64 := make([]uint64, len(expectedProposers))
for i, proposer := range expectedProposers {
expectedUint64[i] = uint64(proposer)
}
// This assertion would fail with the original bug:
for i, expected := range expectedUint64 {
for i, expected := range expectedProposers {
require.Equal(t, expected, actualProposers[i],
"Proposer index mismatch at slot %d in epoch %d", i, targetEpoch)
}

View File

@@ -121,7 +121,7 @@ func IsNextPeriodSyncCommittee(
// CurrentPeriodSyncSubcommitteeIndices returns the subcommittee indices of the
// current period sync committee for input validator.
func CurrentPeriodSyncSubcommitteeIndices(
st state.BeaconState, valIdx primitives.ValidatorIndex,
st state.ReadOnlyBeaconState, valIdx primitives.ValidatorIndex,
) ([]primitives.CommitteeIndex, error) {
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {

View File

@@ -1165,7 +1165,7 @@ func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
lookahead := make([]uint64, 64)
lookahead := make([]primitives.ValidatorIndex, 64)
lookahead[0] = 15
lookahead[1] = 16
lookahead[34] = 42

View File

@@ -26,6 +26,7 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",

View File

@@ -33,24 +33,31 @@ func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCou
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
if sidecar.Index >= fieldparams.NumberOfColumns {
index := sidecar.Index()
if index >= fieldparams.NumberOfColumns {
return ErrIndexTooLarge
}
// A sidecar for zero blobs is invalid.
if len(sidecar.KzgCommitments) == 0 {
kzgCommitments, err := sidecar.KzgCommitments()
if err != nil {
return errors.Wrap(err, "kzg commitments")
}
if len(kzgCommitments) == 0 {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
if len(kzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
column := sidecar.Column()
kzgProofs := sidecar.KzgProofs()
if len(column) != len(kzgCommitments) || len(column) != len(kzgProofs) {
return ErrMismatchLength
}
@@ -63,12 +70,39 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// while we are verifying all the KZG proofs from multiple sidecars in a batch.
// This is done to improve performance since the internal KZG library is way more
// efficient when verifying in batch.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
// https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/p2p-interface.md#modified-verify_data_column_sidecar_kzg_proofs
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
commitmentsBySidecar := make([][][]byte, len(sidecars))
for i := range sidecars {
c, err := sidecars[i].KzgCommitments()
if err != nil {
return errors.Wrapf(err, "sidecar %d kzg commitments", i)
}
commitmentsBySidecar[i] = c
}
return verifyDataColumnsSidecarKZGProofs(sidecars, commitmentsBySidecar)
}
// VerifyDataColumnsSidecarKZGProofsWithCommitments verifies KZG proofs using
// explicitly provided commitments instead of the sidecar's own. This is used
// by Gloas, which validates against bid.blob_kzg_commitments.
func VerifyDataColumnsSidecarKZGProofsWithCommitments(sidecars []blocks.RODataColumn, commitmentsBySidecar [][][]byte) error {
return verifyDataColumnsSidecarKZGProofs(sidecars, commitmentsBySidecar)
}
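
A hedged usage sketch: in the Gloas flow the caller supplies the same bid commitments for every column of a block, since Gloas sidecars no longer carry commitments themselves (the helper name and wiring are illustrative):

```go
// Illustrative only: bidCommitments would come from the block's
// signed execution payload bid (bid.blob_kzg_commitments).
func verifyColumnsAgainstBid(sidecars []blocks.RODataColumn, bidCommitments [][]byte) error {
	commitmentsBySidecar := make([][][]byte, len(sidecars))
	for i := range sidecars {
		commitmentsBySidecar[i] = bidCommitments
	}
	return VerifyDataColumnsSidecarKZGProofsWithCommitments(sidecars, commitmentsBySidecar)
}
```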
func verifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn, commitmentsBySidecar [][][]byte) error {
if len(sidecars) != len(commitmentsBySidecar) {
return ErrMismatchLength
}
// Compute the total count.
count := 0
for _, sidecar := range sidecars {
count += len(sidecar.Column)
for i, sidecar := range sidecars {
column := sidecar.Column()
if len(column) != len(commitmentsBySidecar[i]) {
return ErrMismatchLength
}
count += len(column)
}
commitments := make([]kzg.Bytes48, 0, count)
@@ -76,17 +110,20 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
cells := make([]kzg.Cell, 0, count)
proofs := make([]kzg.Bytes48, 0, count)
for _, sidecar := range sidecars {
for i := range sidecar.Column {
for sidecarIndex, sidecar := range sidecars {
column := sidecar.Column()
kzgProofs := sidecar.KzgProofs()
index := sidecar.Index()
for i := range column {
var (
commitment kzg.Bytes48
cell kzg.Cell
proof kzg.Bytes48
)
commitmentBytes := sidecar.KzgCommitments[i]
cellBytes := sidecar.Column[i]
proofBytes := sidecar.KzgProofs[i]
commitmentBytes := commitmentsBySidecar[sidecarIndex][i]
cellBytes := column[i]
proofBytes := kzgProofs[i]
if len(commitmentBytes) != len(commitment) ||
len(cellBytes) != len(cell) ||
@@ -99,7 +136,7 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
copy(proof[:], proofBytes)
commitments = append(commitments, commitment)
indices = append(indices, sidecar.Index)
indices = append(indices, index)
cells = append(cells, cell)
proofs = append(proofs, proof)
}
@@ -121,16 +158,27 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
// VerifyDataColumnSidecarInclusionProof verifies that the given KZG commitments are included in the given beacon block.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
if sidecar.IsGloas() {
return nil
}
signedBlockHeader, err := sidecar.SignedBlockHeader()
if err != nil {
return errors.Wrap(err, "signed block header")
}
if signedBlockHeader == nil || signedBlockHeader.Header == nil {
return ErrNilBlockHeader
}
root := sidecar.SignedBlockHeader.Header.BodyRoot
root := signedBlockHeader.Header.BodyRoot
if len(root) != fieldparams.RootLength {
return ErrBadRootLength
}
leaves := blocks.LeavesFromCommitments(sidecar.KzgCommitments)
kzgCommitments, err := sidecar.KzgCommitments()
if err != nil {
return errors.Wrap(err, "kzg commitments")
}
leaves := blocks.LeavesFromCommitments(kzgCommitments)
sparse, err := trie.GenerateTrieFromItems(leaves, fieldparams.LogMaxBlobCommitments)
if err != nil {
@@ -142,7 +190,11 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
return errors.Wrap(err, "hash tree root")
}
verified := trie.VerifyMerkleProof(root, hashTreeRoot[:], kzgPosition, sidecar.KzgCommitmentsInclusionProof)
kzgInclusionProof, err := sidecar.KzgCommitmentsInclusionProof()
if err != nil {
return errors.Wrap(err, "kzg commitments inclusion proof")
}
verified := trie.VerifyMerkleProof(root, hashTreeRoot[:], kzgPosition, kzgInclusionProof)
if !verified {
return ErrInvalidInclusionProof
}

View File

@@ -70,7 +70,8 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("size mismatch", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
column := sidecars[0].Column()
column[0] = column[0][:len(column[0])-1] // Remove one byte to create size mismatch
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
@@ -78,7 +79,7 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow
sidecars[0].Column()[0][0]++ // It is OK to overflow
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrInvalidKZGProof)
@@ -89,6 +90,12 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
err := peerdas.VerifyDataColumnsSidecarKZGProofsWithCommitments(sidecars, sidecarCommitments(t, sidecars))
require.NoError(t, err)
})
}
func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
@@ -196,7 +203,7 @@ func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
roDataColumn := blocks.RODataColumn{DataColumnSidecar: tc.dataColumnSidecar}
roDataColumn := blocks.NewRODataColumnNoVerify(tc.dataColumnSidecar)
err = peerdas.VerifyDataColumnSidecarInclusionProof(roDataColumn)
if tc.expectedError == nil {
require.NoError(t, err)
@@ -208,6 +215,13 @@ func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
}
}
func TestVerifyDataColumnSidecarInclusionProof_SkipsGloas(t *testing.T) {
dc := &ethpb.DataColumnSidecarGloas{Index: 0, Column: [][]byte{{0x01}}, KzgProofs: [][]byte{make([]byte, 48)}}
roCol, err := blocks.NewRODataColumnGloas(dc)
require.NoError(t, err)
require.NoError(t, peerdas.VerifyDataColumnSidecarInclusionProof(roCol))
}
func TestComputeSubnetForDataColumnSidecar(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
@@ -348,6 +362,16 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing
}
}
func sidecarCommitments(t *testing.T, sidecars []blocks.RODataColumn) [][][]byte {
commitmentsBySidecar := make([][][]byte, len(sidecars))
for i := range sidecars {
var err error
commitmentsBySidecar[i], err = sidecars[i].KzgCommitments()
require.NoError(t, err)
}
return commitmentsBySidecar
}
func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgProofs [][]byte) blocks.RODataColumn {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)

View File

@@ -79,9 +79,9 @@ func recoverCellsForBlobs(verifiedRoSidecars []blocks.VerifiedRODataColumn, blob
cells := make([]kzg.Cell, 0, sidecarCount)
for _, sidecar := range verifiedRoSidecars {
cell := sidecar.Column[blobIndex]
cell := sidecar.Column()[blobIndex]
cells = append(cells, kzg.Cell(cell))
cellsIndices = append(cellsIndices, sidecar.Index)
cellsIndices = append(cellsIndices, sidecar.Index())
}
recoveredCells, err := kzg.RecoverCells(cellsIndices, cells)
@@ -116,9 +116,9 @@ func recoverCellsAndProofsForBlobs(verifiedRoSidecars []blocks.VerifiedRODataCol
cells := make([]kzg.Cell, 0, sidecarCount)
for _, sidecar := range verifiedRoSidecars {
cell := sidecar.Column[blobIndex]
cell := sidecar.Column()[blobIndex]
cells = append(cells, kzg.Cell(cell))
cellsIndices = append(cellsIndices, sidecar.Index)
cellsIndices = append(cellsIndices, sidecar.Index())
}
recoveredCells, recoveredProofs, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
@@ -151,10 +151,10 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
referenceSidecar := verifiedRoSidecars[0]
// Check if all columns have the same length and are committed to the same block.
blobCount := len(referenceSidecar.Column)
blobCount := len(referenceSidecar.Column())
blockRoot := referenceSidecar.BlockRoot()
for _, sidecar := range verifiedRoSidecars[1:] {
if len(sidecar.Column) != blobCount {
if len(sidecar.Column()) != blobCount {
return nil, ErrColumnLengthsDiffer
}
@@ -171,7 +171,7 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
// Sort the input sidecars by index.
sort.Slice(verifiedRoSidecars, func(i, j int) bool {
return verifiedRoSidecars[i].Index < verifiedRoSidecars[j].Index
return verifiedRoSidecars[i].Index() < verifiedRoSidecars[j].Index()
})
// Recover cells and compute proofs in parallel.
@@ -209,9 +209,9 @@ func reconstructIfNeeded(verifiedDataColumnSidecars []blocks.VerifiedRODataColum
}
// Check if the sidecars are sorted by index and do not contain duplicates.
previousColumnIndex := verifiedDataColumnSidecars[0].Index
previousColumnIndex := verifiedDataColumnSidecars[0].Index()
for _, dataColumnSidecar := range verifiedDataColumnSidecars[1:] {
columnIndex := dataColumnSidecar.Index
columnIndex := dataColumnSidecar.Index()
if columnIndex <= previousColumnIndex {
return nil, ErrDataColumnSidecarsNotSortedByIndex
}
@@ -226,7 +226,7 @@ func reconstructIfNeeded(verifiedDataColumnSidecars []blocks.VerifiedRODataColum
}
// If all column sidecars corresponding to (non-extended) blobs are present, no reconstruction is needed:
// since the sidecars are sorted by index without duplicates, the entry at position cellsPerBlob-1
// holding index cellsPerBlob-1 implies that indices 0..cellsPerBlob-1 are all present.
if verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1) {
if verifiedDataColumnSidecars[cellsPerBlob-1].Index() == uint64(cellsPerBlob-1) {
return verifiedDataColumnSidecars, nil
}
@@ -415,9 +415,9 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
}
// Check if the sidecars are sorted by index and do not contain duplicates.
previousColumnIndex := verifiedDataColumnSidecars[0].Index
previousColumnIndex := verifiedDataColumnSidecars[0].Index()
for _, dataColumnSidecar := range verifiedDataColumnSidecars[1:] {
columnIndex := dataColumnSidecar.Index
columnIndex := dataColumnSidecar.Index()
if columnIndex <= previousColumnIndex {
return nil, ErrDataColumnSidecarsNotSortedByIndex
}
@@ -433,7 +433,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
// Verify that the actual blob count from the first sidecar matches the expected count
referenceSidecar := verifiedDataColumnSidecars[0]
actualBlobCount := len(referenceSidecar.Column)
actualBlobCount := len(referenceSidecar.Column())
if actualBlobCount != blobCount {
return nil, errors.Errorf("blob count mismatch: expected %d, got %d", blobCount, actualBlobCount)
}
@@ -448,7 +448,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
// Check if all columns have the same length and are committed to the same block.
blockRoot := referenceSidecar.BlockRoot()
for _, sidecar := range verifiedDataColumnSidecars[1:] {
if len(sidecar.Column) != blobCount {
if len(sidecar.Column()) != blobCount {
return nil, ErrColumnLengthsDiffer
}
@@ -458,7 +458,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
}
// Check if we have all non-extended columns (0..63) - if so, no reconstruction needed.
hasAllNonExtendedColumns := verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1)
hasAllNonExtendedColumns := verifiedDataColumnSidecars[cellsPerBlob-1].Index() == uint64(cellsPerBlob-1)
var reconstructedCells map[int][]kzg.Cell
if !hasAllNonExtendedColumns {
@@ -480,7 +480,7 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
var cell []byte
if hasAllNonExtendedColumns {
// Use existing cells from sidecars
cell = verifiedDataColumnSidecars[columnIndex].Column[blobIndex]
cell = verifiedDataColumnSidecars[columnIndex].Column()[blobIndex]
} else {
// Use reconstructed cells
cell = reconstructedCells[blobIndex][columnIndex][:]
@@ -501,8 +501,14 @@ func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn,
func blobSidecarsFromDataColumnSidecars(roBlock blocks.ROBlock, dataColumnSidecars []blocks.VerifiedRODataColumn, indices []int) ([]*blocks.VerifiedROBlob, error) {
referenceSidecar := dataColumnSidecars[0]
kzgCommitments := referenceSidecar.KzgCommitments
signedBlockHeader := referenceSidecar.SignedBlockHeader
kzgCommitments, err := referenceSidecar.KzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "kzg commitments")
}
signedBlockHeader, err := referenceSidecar.SignedBlockHeader()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, len(indices))
for _, blobIndex := range indices {
@@ -511,7 +517,7 @@ func blobSidecarsFromDataColumnSidecars(roBlock blocks.ROBlock, dataColumnSideca
// Compute the content of the blob.
for columnIndex := range fieldparams.CellsPerBlob {
dataColumnSidecar := dataColumnSidecars[columnIndex]
cell := dataColumnSidecar.Column[blobIndex]
cell := dataColumnSidecar.Column()[blobIndex]
if copy(blob[kzg.BytesPerCell*columnIndex:], cell) != kzg.BytesPerCell {
return nil, errors.New("wrong cell size - should never happen")
}
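
The copy above stitches each blob back together from its first CELLS_PER_BLOB cells. With the usual PeerDAS sizing (64 non-extended cells of 2048 bytes each, both assumed here), a blob comes out to 131072 bytes:

```go
package main

import "fmt"

func main() {
	const cellsPerBlob = 64   // non-extended cells per blob (assumed)
	const bytesPerCell = 2048 // assumed kzg.BytesPerCell
	fmt.Println(cellsPerBlob * bytesPerCell) // 131072 bytes = 4096 field elements * 32 bytes
}
```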

View File

@@ -36,7 +36,7 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
// Arbitrarily alter the column with index 3
verifiedRoSidecars[3].Column = verifiedRoSidecars[3].Column[1:]
verifiedRoSidecars[3].DataColumnSidecar().Column = verifiedRoSidecars[3].DataColumnSidecar().Column[1:]
_, err := peerdas.ReconstructDataColumnSidecars(verifiedRoSidecars)
require.ErrorIs(t, err, peerdas.ErrColumnLengthsDiffer)
@@ -88,7 +88,10 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Verify that the reconstructed sidecars are equal to the original ones.
require.DeepSSZEqual(t, inputVerifiedRoSidecars, reconstructedVerifiedRoSidecars)
require.Equal(t, len(inputVerifiedRoSidecars), len(reconstructedVerifiedRoSidecars))
for i := range inputVerifiedRoSidecars {
require.DeepSSZEqual(t, inputVerifiedRoSidecars[i].DataColumnSidecar(), reconstructedVerifiedRoSidecars[i].DataColumnSidecar())
}
})
}

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
@@ -23,11 +24,13 @@ var (
var (
_ ConstructionPopulator = (*BlockReconstructionSource)(nil)
_ ConstructionPopulator = (*SidecarReconstructionSource)(nil)
_ ConstructionPopulator = (*BidReconstructionSource)(nil)
)
const (
BlockType = "BeaconBlock"
SidecarType = "DataColumnSidecar"
BidType = "ExecutionPayloadBid"
)
type (
@@ -37,7 +40,7 @@ type (
ConstructionPopulator interface {
Slot() primitives.Slot
Root() [fieldparams.RootLength]byte
ProposerIndex() primitives.ValidatorIndex
ProposerIndex() (primitives.ValidatorIndex, error)
Commitments() ([][]byte, error)
Type() string
@@ -49,11 +52,17 @@ type (
blocks.ROBlock
}
// DataColumnSidecar is a ConstructionPopulator that uses a data column sidecar as the source of data
// SidecarReconstructionSource is a ConstructionPopulator that uses a data column sidecar as the source of data
SidecarReconstructionSource struct {
blocks.VerifiedRODataColumn
}
// BidReconstructionSource is a ConstructionPopulator that uses the execution payload bid
// from a Gloas beacon block to extract KZG commitments for data column sidecar construction.
BidReconstructionSource struct {
blocks.ROBlock
}
blockInfo struct {
signedBlockHeader *ethpb.SignedBeaconBlockHeader
kzgCommitments [][]byte
@@ -71,6 +80,14 @@ func PopulateFromSidecar(sidecar blocks.VerifiedRODataColumn) *SidecarReconstruc
return &SidecarReconstructionSource{VerifiedRODataColumn: sidecar}
}
// PopulateFromBid creates a BidReconstructionSource from a Gloas beacon block.
// In Gloas (ePBS), the execution payload is delivered separately via the payload envelope,
// but the KZG commitments are available in the bid embedded in the block, allowing
// data column sidecars to be constructed from the EL as soon as the block arrives.
func PopulateFromBid(block blocks.ROBlock) *BidReconstructionSource {
return &BidReconstructionSource{ROBlock: block}
}
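
A usage sketch under assumed caller context, mirroring TestPopulateFromBid_DataColumnSidecars further down: once a Gloas block arrives, sidecars can be constructed from the bid commitments without waiting for the payload envelope:

```go
// Illustrative caller: cells and proofs come from the execution layer's blobs.
func sidecarsFromGloasBlock(roBlock blocks.ROBlock, cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof) ([]blocks.RODataColumn, error) {
	src := peerdas.PopulateFromBid(roBlock)
	return peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, src)
}
```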
// ValidatorsCustodyRequirement returns the number of custody groups regarding the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(st beaconState.ReadOnlyBalances, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
@@ -111,33 +128,93 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
isGloas := slots.ToEpoch(src.Slot()) >= params.BeaconConfig().GloasForkEpoch
root := src.Root()
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
KzgCommitments: info.kzgCommitments,
KzgProofs: proofs[idx],
SignedBlockHeader: info.signedBlockHeader,
KzgCommitmentsInclusionProof: info.kzgInclusionProof,
if isGloas {
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecarGloas{
Index: idx,
Column: cells[idx],
KzgProofs: proofs[idx],
Slot: src.Slot(),
BeaconBlockRoot: root[:],
}
if len(sidecar.Column) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnGloasWithRoot(sidecar, root)
if err != nil {
return nil, errors.Wrap(err, "new ro data column gloas")
}
roSidecars = append(roSidecars, roSidecar)
}
} else {
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
KzgCommitments: info.kzgCommitments,
KzgProofs: proofs[idx],
SignedBlockHeader: info.signedBlockHeader,
KzgCommitmentsInclusionProof: info.kzgInclusionProof,
}
if len(sidecar.KzgCommitments) != len(sidecar.Column) || len(sidecar.KzgCommitments) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, root)
if err != nil {
return nil, errors.Wrap(err, "new ro data column")
}
roSidecars = append(roSidecars, roSidecar)
}
}
if len(sidecar.KzgCommitments) != len(sidecar.Column) || len(sidecar.KzgCommitments) != len(sidecar.KzgProofs) {
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return roSidecars, nil
}
// DataColumnSidecarsGloas constructs Gloas-format data column sidecars directly from cells, proofs,
// slot, and block root. Used by the proposer when building sidecars outside the ConstructionPopulator flow.
func DataColumnSidecarsGloas(
cellsPerBlob [][]kzg.Cell,
proofsPerBlob [][]kzg.Proof,
slot primitives.Slot,
beaconBlockRoot [32]byte,
) ([]blocks.RODataColumn, error) {
const numberOfColumns = uint64(fieldparams.NumberOfColumns)
if len(cellsPerBlob) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecarGloas{
Index: idx,
Column: cells[idx],
KzgProofs: proofs[idx],
Slot: slot,
BeaconBlockRoot: beaconBlockRoot[:],
}
if len(sidecar.Column) != len(sidecar.KzgProofs) {
return nil, ErrSizeMismatch
}
roSidecar, err := blocks.NewRODataColumnWithRoot(sidecar, src.Root())
roSidecar, err := blocks.NewRODataColumnGloasWithRoot(sidecar, beaconBlockRoot)
if err != nil {
return nil, errors.Wrap(err, "new ro data column")
return nil, errors.Wrap(err, "new ro data column gloas")
}
roSidecars = append(roSidecars, roSidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return roSidecars, nil
}
@@ -148,8 +225,8 @@ func (s *BlockReconstructionSource) Slot() primitives.Slot {
}
// ProposerIndex returns the proposer index of the source
func (s *BlockReconstructionSource) ProposerIndex() primitives.ValidatorIndex {
return s.Block().ProposerIndex()
func (s *BlockReconstructionSource) ProposerIndex() (primitives.ValidatorIndex, error) {
return s.Block().ProposerIndex(), nil
}
// Commitments returns the blob KZG commitments of the source
@@ -168,32 +245,24 @@ func (s *BlockReconstructionSource) Type() string {
return BlockType
}
// extract extracts the block information from the source
func (b *BlockReconstructionSource) extract() (*blockInfo, error) {
block := b.Block()
header, err := b.Header()
if err != nil {
return nil, errors.Wrap(err, "header")
}
commitments, err := block.Body().BlobKzgCommitments()
commitments, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "commitments")
}
inclusionProof, err := blocks.MerkleProofKZGCommitments(block.Body())
inclusionProof, err := blocks.MerkleProofKZGCommitments(b.Block().Body())
if err != nil {
return nil, errors.Wrap(err, "merkle proof kzg commitments")
}
info := &blockInfo{
return &blockInfo{
signedBlockHeader: header,
kzgCommitments: commitments,
kzgInclusionProof: inclusionProof,
}
return info, nil
}, nil
}
// rotateRowsToCols takes a 2D slice of cells and proofs, where the x is rows (blobs) and y is columns,
@@ -235,7 +304,7 @@ func (s *SidecarReconstructionSource) Root() [fieldparams.RootLength]byte {
// Commitments returns the blob KZG commitments of the source
func (s *SidecarReconstructionSource) Commitments() ([][]byte, error) {
return s.KzgCommitments, nil
return s.KzgCommitments()
}
// Type returns the type of the source
@@ -243,13 +312,61 @@ func (s *SidecarReconstructionSource) Type() string {
return SidecarType
}
// extract extracts the block information from the source
func (s *SidecarReconstructionSource) extract() (*blockInfo, error) {
info := &blockInfo{
signedBlockHeader: s.SignedBlockHeader,
kzgCommitments: s.KzgCommitments,
kzgInclusionProof: s.KzgCommitmentsInclusionProof,
sbh, err := s.SignedBlockHeader()
if err != nil {
return nil, err
}
return info, nil
comms, err := s.KzgCommitments()
if err != nil {
return nil, err
}
incProof, err := s.KzgCommitmentsInclusionProof()
if err != nil {
return nil, err
}
return &blockInfo{
signedBlockHeader: sbh,
kzgCommitments: comms,
kzgInclusionProof: incProof,
}, nil
}
// Slot returns the slot of the source
func (s *BidReconstructionSource) Slot() primitives.Slot {
return s.Block().Slot()
}
// ProposerIndex returns the proposer index of the source
func (s *BidReconstructionSource) ProposerIndex() (primitives.ValidatorIndex, error) {
return s.Block().ProposerIndex(), nil
}
// Commitments returns the blob KZG commitments from the execution payload bid
func (s *BidReconstructionSource) Commitments() ([][]byte, error) {
bid, err := s.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return nil, errors.Wrap(err, "signed execution payload bid")
}
return bid.Message.BlobKzgCommitments, nil
}
// Type returns the type of the source
func (s *BidReconstructionSource) Type() string {
return BidType
}
func (s *BidReconstructionSource) extract() (*blockInfo, error) {
commitments, err := s.Commitments()
if err != nil {
return nil, err
}
header, err := s.Header()
if err != nil {
return nil, errors.Wrap(err, "header")
}
return &blockInfo{
signedBlockHeader: header,
kzgCommitments: commitments,
}, nil
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -176,22 +177,24 @@ func TestDataColumnSidecars(t *testing.T) {
// Verify each sidecar has the expected structure
for i, sidecar := range sidecars {
require.Equal(t, uint64(i), sidecar.Index)
require.Equal(t, 2, len(sidecar.Column))
require.Equal(t, 2, len(sidecar.KzgCommitments))
require.Equal(t, 2, len(sidecar.KzgProofs))
require.Equal(t, uint64(i), sidecar.Index())
require.Equal(t, 2, len(sidecar.Column()))
comms, err := sidecar.KzgCommitments()
require.NoError(t, err)
require.Equal(t, 2, len(comms))
require.Equal(t, 2, len(sidecar.KzgProofs()))
// Verify commitments match what we set
require.DeepEqual(t, commitment1, sidecar.KzgCommitments[0])
require.DeepEqual(t, commitment2, sidecar.KzgCommitments[1])
require.DeepEqual(t, commitment1, comms[0])
require.DeepEqual(t, commitment2, comms[1])
// Verify column data comes from the correct cells
require.Equal(t, byte(i), sidecar.Column[0][0])
require.Equal(t, byte(i+128), sidecar.Column[1][0])
require.Equal(t, byte(i), sidecar.Column()[0][0])
require.Equal(t, byte(i+128), sidecar.Column()[1][0])
// Verify proofs come from the correct proofs
require.Equal(t, byte(i), sidecar.KzgProofs[0][0])
require.Equal(t, byte(i+128), sidecar.KzgProofs[1][0])
require.Equal(t, byte(i), sidecar.KzgProofs()[0][0])
require.Equal(t, byte(i+128), sidecar.KzgProofs()[1][0])
}
})
}
@@ -241,7 +244,9 @@ func TestReconstructionSource(t *testing.T) {
src := peerdas.PopulateFromBlock(rob)
require.Equal(t, rob.Block().Slot(), src.Slot())
require.Equal(t, rob.Root(), src.Root())
require.Equal(t, rob.Block().ProposerIndex(), src.ProposerIndex())
srcPI, err := src.ProposerIndex()
require.NoError(t, err)
require.Equal(t, rob.Block().ProposerIndex(), srcPI)
commitments, err := src.Commitments()
require.NoError(t, err)
@@ -257,7 +262,11 @@ func TestReconstructionSource(t *testing.T) {
src := peerdas.PopulateFromSidecar(referenceSidecar)
require.Equal(t, referenceSidecar.Slot(), src.Slot())
require.Equal(t, referenceSidecar.BlockRoot(), src.Root())
require.Equal(t, referenceSidecar.ProposerIndex(), src.ProposerIndex())
refPI, err := referenceSidecar.ProposerIndex()
require.NoError(t, err)
srcPI, err := src.ProposerIndex()
require.NoError(t, err)
require.Equal(t, refPI, srcPI)
commitments, err := src.Commitments()
require.NoError(t, err)
@@ -267,4 +276,87 @@ func TestReconstructionSource(t *testing.T) {
require.Equal(t, peerdas.SidecarType, src.Type())
})
t.Run("from bid", func(t *testing.T) {
bidCommitment1 := make([]byte, 48)
bidCommitment2 := make([]byte, 48)
bidCommitment1[0] = 0xAA
bidCommitment2[0] = 0xBB
gloasBlockPb := util.NewBeaconBlockGloas()
gloasBlockPb.Block.Body.SignedExecutionPayloadBid.Message.BlobKzgCommitments = [][]byte{bidCommitment1, bidCommitment2}
gloasBlockPb.Block.Slot = 42
gloasBlockPb.Block.ProposerIndex = 7
signedGloasBlock, err := blocks.NewSignedBeaconBlock(gloasBlockPb)
require.NoError(t, err)
gloasRob, err := blocks.NewROBlock(signedGloasBlock)
require.NoError(t, err)
src := peerdas.PopulateFromBid(gloasRob)
require.Equal(t, primitives.Slot(42), src.Slot())
require.Equal(t, gloasRob.Root(), src.Root())
bidPI, err := src.ProposerIndex()
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(7), bidPI)
commitments, err := src.Commitments()
require.NoError(t, err)
require.Equal(t, 2, len(commitments))
require.DeepEqual(t, bidCommitment1, commitments[0])
require.DeepEqual(t, bidCommitment2, commitments[1])
require.Equal(t, peerdas.BidType, src.Type())
})
}
func TestPopulateFromBid_DataColumnSidecars(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
bidCommitment1 := make([]byte, 48)
bidCommitment2 := make([]byte, 48)
bidCommitment1[0] = 0xAA
bidCommitment2[0] = 0xBB
gloasBlockPb := util.NewBeaconBlockGloas()
gloasBlockPb.Block.Body.SignedExecutionPayloadBid.Message.BlobKzgCommitments = [][]byte{bidCommitment1, bidCommitment2}
signedGloasBlock, err := blocks.NewSignedBeaconBlock(gloasBlockPb)
require.NoError(t, err)
gloasRob, err := blocks.NewROBlock(signedGloasBlock)
require.NoError(t, err)
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),
}
proofsPerBlob := [][]kzg.Proof{
make([]kzg.Proof, numberOfColumns),
make([]kzg.Proof, numberOfColumns),
}
for i := range numberOfColumns {
cellsPerBlob[0][i][0] = byte(i)
proofsPerBlob[0][i][0] = byte(i)
cellsPerBlob[1][i][0] = byte(i + 128)
proofsPerBlob[1][i][0] = byte(i + 128)
}
sidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBid(gloasRob))
require.NoError(t, err)
require.Equal(t, int(numberOfColumns), len(sidecars))
for i, sidecar := range sidecars {
require.Equal(t, true, sidecar.IsGloas())
require.Equal(t, uint64(i), sidecar.Index())
require.Equal(t, 2, len(sidecar.Column()))
require.Equal(t, 2, len(sidecar.KzgProofs()))
}
}

View File

@@ -46,16 +46,21 @@ func DataColumnsAlignWithBlock(block blocks.ROBlock, dataColumns []blocks.ROData
return ErrRootMismatch
}
dcKzgCommitments, err := dataColumn.KzgCommitments()
if err != nil {
return errors.Wrap(err, "kzg commitments")
}
// Check if the content length of the data column sidecar matches the block.
if len(dataColumn.Column) != blockCommitmentCount ||
len(dataColumn.KzgCommitments) != blockCommitmentCount ||
len(dataColumn.KzgProofs) != blockCommitmentCount {
if len(dataColumn.Column()) != blockCommitmentCount ||
len(dcKzgCommitments) != blockCommitmentCount ||
len(dataColumn.KzgProofs()) != blockCommitmentCount {
return ErrBlockColumnSizeMismatch
}
// Check if the commitments of the data column sidecar match the block.
for i := range blockCommitments {
if !bytes.Equal(blockCommitments[i], dataColumn.KzgCommitments[i]) {
if !bytes.Equal(blockCommitments[i], dcKzgCommitments[i]) {
return ErrCommitmentMismatch
}
}

View File

@@ -41,21 +41,21 @@ func TestDataColumnsAlignWithBlock(t *testing.T) {
t.Run("column size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].Column = [][]byte{}
sidecars[0].DataColumnSidecar().Column = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG commitments size mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgCommitments = [][]byte{}
sidecars[0].DataColumnSidecar().KzgCommitments = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
t.Run("KZG proofs mismatch", func(t *testing.T) {
block, sidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
sidecars[0].KzgProofs = [][]byte{}
sidecars[0].DataColumnSidecar().KzgProofs = [][]byte{}
err := peerdas.DataColumnsAlignWithBlock(block, sidecars)
require.ErrorIs(t, err, peerdas.ErrBlockColumnSizeMismatch)
})
@@ -63,7 +63,7 @@ func TestDataColumnsAlignWithBlock(t *testing.T) {
t.Run("commitment mismatch", func(t *testing.T) {
block, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
_, alteredSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, 2, util.WithSlot(fs))
alteredSidecars[1].KzgCommitments[0][0]++ // Overflow is OK
alteredSidecars[1].DataColumnSidecar().KzgCommitments[0][0]++ // Overflow is OK
err := peerdas.DataColumnsAlignWithBlock(block, alteredSidecars)
require.ErrorIs(t, err, peerdas.ErrCommitmentMismatch)
})

View File

@@ -130,7 +130,7 @@ func gloasOperations(ctx context.Context, st state.BeaconState, block interfaces
//
// Spec definition:
//
// <spec fn="process_epoch" fork="gloas" hash="393b69ef">
// <spec fn="process_epoch" fork="gloas" hash="bf3575a9">
// def process_epoch(state: BeaconState) -> None:
// process_justification_and_finalization(state)
// process_inactivity_updates(state)
@@ -149,6 +149,8 @@ func gloasOperations(ctx context.Context, st state.BeaconState, block interfaces
// process_participation_flag_updates(state)
// process_sync_committee_updates(state)
// process_proposer_lookahead(state)
// # [New in Gloas:EIP7732]
// process_ptc_window(state)
// </spec>
func processEpochGloas(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "gloas.ProcessEpoch")
@@ -222,5 +224,5 @@ func processEpochGloas(ctx context.Context, state state.BeaconState) error {
if err := fulu.ProcessProposerLookahead(ctx, state); err != nil {
return err
}
return nil
return gloas.ProcessPTCWindow(ctx, state)
}

View File

@@ -120,7 +120,8 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
slashings := make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector)
genesisValidatorsRoot, err := stateutil.ValidatorRegistryRoot(preState.Validators())
compactValidators := stateutil.CompactValidatorsFromProto(preState.Validators())
genesisValidatorsRoot, err := stateutil.ValidatorRegistryRoot(compactValidators)
if err != nil {
return nil, errors.Wrapf(err, "could not hash tree root genesis validators %v", err)
}

View File

@@ -147,7 +147,8 @@ func OptimizedGenesisBeaconState(genesisTime uint64, preState state.BeaconState,
slashings := make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector)
genesisValidatorsRoot, err := stateutil.ValidatorRegistryRoot(preState.Validators())
compactValidators := stateutil.CompactValidatorsFromProto(preState.Validators())
genesisValidatorsRoot, err := stateutil.ValidatorRegistryRoot(compactValidators)
if err != nil {
return nil, errors.Wrapf(err, "could not hash tree root genesis validators %v", err)
}

View File

@@ -55,7 +55,7 @@ func NextSlotState(root []byte, wantedSlot types.Slot) state.BeaconState {
// UpdateNextSlotCache updates the `nextSlotCache`. It advances the input state by one slot
// via `ProcessSlots`, saves the resulting state, and stores the input root for later lookup.
// It is useful to call after successfully processing a block.
func UpdateNextSlotCache(ctx context.Context, root []byte, state state.BeaconState) error {
func UpdateNextSlotCache(ctx context.Context, root []byte, state state.ReadOnlyBeaconState) error {
// Advancing one slot by using a copied state.
copied := state.Copy()
copied, err := ProcessSlots(ctx, copied, copied.Slot()+1)

View File

@@ -159,6 +159,15 @@ func ProcessSlot(ctx context.Context, state state.BeaconState) (state.BeaconStat
return state, nil
}
// ProcessSlotsIfNeeded takes a ReadOnlyBeaconState and advances its slots only when needed, returning a ReadOnlyBeaconState.
func ProcessSlotsIfNeeded(ctx context.Context, state state.ReadOnlyBeaconState, accessRoot []byte, slot primitives.Slot) (state.ReadOnlyBeaconState, error) {
if slot <= state.Slot() {
return state, nil
}
copied := state.Copy()
return ProcessSlotsUsingNextSlotCache(ctx, copied, accessRoot, slot)
}
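
A brief usage sketch (package alias and caller names assumed): because the helper is a no-op when the state is already at or past the requested slot, callers can hand it any read-only state without copying first:

```go
// Illustrative caller: advance a read-only head state only when it is behind.
func stateAtSlot(ctx context.Context, headState state.ReadOnlyBeaconState, headRoot [32]byte, slot primitives.Slot) (state.ReadOnlyBeaconState, error) {
	// Returns the input state untouched when slot <= headState.Slot();
	// otherwise advances a copy via the next-slot-cache path.
	return transition.ProcessSlotsIfNeeded(ctx, headState, headRoot[:], slot)
}
```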
// ProcessSlotsUsingNextSlotCache processes slots by using next slot cache for higher efficiency.
func ProcessSlotsUsingNextSlotCache(
ctx context.Context,

View File

@@ -10,7 +10,9 @@ import (
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
engine "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/stretchr/testify/require"
@@ -139,3 +141,217 @@ func testBeaconBlockHeader() *ethpb.BeaconBlockHeader {
BodyRoot: make([]byte, 32),
}
}
// newGloasForkBoundaryState returns a Gloas BeaconState where IsParentBlockFull()==true
// because bid.BlockHash == latestBlockHash. The parentBlockRoot parameter controls
// whether the bid looks like an upgrade-seed (all-zeros) or a real committed bid (non-zero).
func newGloasForkBoundaryState(
t *testing.T,
slot primitives.Slot,
blockHash [32]byte,
parentBlockRoot [32]byte,
) state.BeaconState {
t.Helper()
cfg := params.BeaconConfig()
availability := bytes.Repeat([]byte{0xFF}, int(cfg.SlotsPerHistoricalRoot/8))
protoState := &ethpb.BeaconStateGloas{
Slot: slot,
LatestBlockHeader: testBeaconBlockHeader(),
BlockRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
StateRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
RandaoMixes: make([][]byte, fieldparams.RandaoMixesLength),
ExecutionPayloadAvailability: availability,
BuilderPendingPayments: make([]*ethpb.BuilderPendingPayment, int(cfg.SlotsPerEpoch*2)),
// bid.BlockHash == LatestBlockHash so that IsParentBlockFull() returns true.
LatestBlockHash: blockHash[:],
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: parentBlockRoot[:],
BlockHash: blockHash[:],
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
PreviousEpochParticipation: []byte{},
CurrentEpochParticipation: []byte{},
JustificationBits: []byte{0},
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
PayloadExpectedWithdrawals: make([]*engine.Withdrawal, 0),
ProposerLookahead: make([]primitives.ValidatorIndex, 0),
Builders: make([]*ethpb.Builder, 0),
}
for i := range protoState.BlockRoots {
protoState.BlockRoots[i] = make([]byte, 32)
}
for i := range protoState.StateRoots {
protoState.StateRoots[i] = make([]byte, 32)
}
for i := range protoState.RandaoMixes {
protoState.RandaoMixes[i] = make([]byte, 32)
}
for i := range protoState.BuilderPendingPayments {
protoState.BuilderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{FeeRecipient: make([]byte, 20)},
}
}
pubkeys := make([][]byte, cfg.SyncCommitteeSize)
for i := range pubkeys {
pubkeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
aggPubkey := make([]byte, fieldparams.BLSPubkeyLength)
protoState.CurrentSyncCommittee = &ethpb.SyncCommittee{Pubkeys: pubkeys, AggregatePubkey: aggPubkey}
protoState.NextSyncCommittee = &ethpb.SyncCommittee{Pubkeys: pubkeys, AggregatePubkey: aggPubkey}
st, err := state_native.InitializeFromProtoGloas(protoState)
require.NoError(t, err)
return st
}
// newGloasTestBlock returns an ROBlock at the given slot with the given parentRoot.
func newGloasTestBlock(t *testing.T, slot primitives.Slot, parentRoot [32]byte) consensusblocks.ROBlock {
t.Helper()
blkProto := &ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: slot,
ParentRoot: parentRoot[:],
StateRoot: make([]byte, 32),
Body: &ethpb.BeaconBlockBodyGloas{
RandaoReveal: make([]byte, fieldparams.BLSSignatureLength),
Graffiti: make([]byte, 32),
Eth1Data: &ethpb.Eth1Data{DepositRoot: make([]byte, 32), BlockHash: make([]byte, 32)},
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncAggregateSyncCommitteeBytesLength),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
SignedExecutionPayloadBid: &ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
Slot: slot,
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{},
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
PayloadAttestations: []*ethpb.PayloadAttestation{},
},
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
}
wsb, err := consensusblocks.NewSignedBeaconBlock(blkProto)
require.NoError(t, err)
rob, err := consensusblocks.NewROBlock(wsb)
require.NoError(t, err)
return rob
}
// TestProcessSlotsForBlock_UpgradeSeededBid verifies that ProcessSlotsForBlock uses
// b.ParentRoot() as the NSC access key when the state has an upgrade-seeded bid
// (bid.ParentBlockRoot == zero). This guards against the Fulu->Gloas fork-boundary
// false positive where UpgradeToGloas seeds bid.BlockHash == latestBlockHash while
// leaving bid.ParentBlockRoot as all-zeros.
func TestProcessSlotsForBlock_UpgradeSeededBid(t *testing.T) {
ctx := context.Background()
parentRoot := [32]byte{0x01, 0x02, 0x03}
blockHash := [32]byte{0xAA, 0xBB, 0xCC}
targetSlot := primitives.Slot(9)
// Build a Gloas state at slot 8 with IsParentBlockFull()==true but
// bid.ParentBlockRoot==zero (upgrade-seeded: not a real committed bid).
st := newGloasForkBoundaryState(t, targetSlot-1, blockHash, [32]byte{})
require.Equal(t, version.Gloas, st.Version())
// Verify preconditions.
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.True(t, full, "precondition: IsParentBlockFull must be true")
bid, err := st.LatestExecutionPayloadBid()
require.NoError(t, err)
require.Equal(t, [32]byte{}, bid.ParentBlockRoot(), "upgrade-seeded bid must have zero ParentBlockRoot")
// Prime NSC with parentRoot as the access key.
// With the guard in place (realBid==false), ProcessSlotsForBlock will use
// b.ParentRoot() as the NSC key and find this cached entry.
require.NoError(t, UpdateNextSlotCache(ctx, parentRoot[:], st))
blk := newGloasTestBlock(t, targetSlot, parentRoot)
out, err := ProcessSlotsForBlock(ctx, st, blk.Block())
require.NoError(t, err)
require.Equal(t, targetSlot, out.Slot())
// Verify that the NSC entry primed under parentRoot is still present,
// confirming it was used (read) rather than bypassed.
cached := NextSlotState(parentRoot[:], targetSlot)
require.NotNil(t, cached, "NSC entry under parentRoot should still be present after use")
}
// TestProcessSlotsForBlock_RealBid verifies that ProcessSlotsForBlock uses
// LatestBlockHash as the NSC access key when the state has a real committed bid
// (bid.ParentBlockRoot != zero). This is the normal post-fork case.
func TestProcessSlotsForBlock_RealBid(t *testing.T) {
ctx := context.Background()
parentRoot := [32]byte{0x01, 0x02, 0x03}
blockHash := [32]byte{0xAA, 0xBB, 0xCC}
realParentBlockRoot := [32]byte{0xDE, 0xAD, 0xBE, 0xEF}
targetSlot := primitives.Slot(9)
// Build a Gloas state at slot 8 with IsParentBlockFull()==true and
// bid.ParentBlockRoot!=zero (a real committed bid).
st := newGloasForkBoundaryState(t, targetSlot-1, blockHash, realParentBlockRoot)
require.Equal(t, version.Gloas, st.Version())
// Verify preconditions.
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.True(t, full, "precondition: IsParentBlockFull must be true")
bid, err := st.LatestExecutionPayloadBid()
require.NoError(t, err)
require.NotEqual(t, [32]byte{}, bid.ParentBlockRoot(), "real bid must have non-zero ParentBlockRoot")
// Prime NSC with the EL block hash as access key.
// With the guard in place (realBid==true), ProcessSlotsForBlock will use
// LatestBlockHash as the NSC key and find this cached entry.
require.NoError(t, UpdateNextSlotCache(ctx, blockHash[:], st))
blk := newGloasTestBlock(t, targetSlot, parentRoot)
out, err := ProcessSlotsForBlock(ctx, st, blk.Block())
require.NoError(t, err)
require.Equal(t, targetSlot, out.Slot())
// Verify that the NSC entry primed under blockHash is still present,
// confirming it was used (read) rather than bypassed.
cached := NextSlotState(blockHash[:], targetSlot)
require.NotNil(t, cached, "NSC entry under blockHash should still be present after use")
}
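
A hypothetical, self-contained shape of the key selection these two tests pin down (the real guard lives inside ProcessSlotsForBlock; parameter names are illustrative):

```go
// nscAccessKey picks which root keys the next-slot cache. Upgrade-seeded bids
// keep an all-zero ParentBlockRoot, so only a real committed bid switches the
// key to the EL block hash.
func nscAccessKey(isGloas, parentBlockFull bool, bidParentBlockRoot, latestBlockHash, blockParentRoot [32]byte) [32]byte {
	realBid := bidParentBlockRoot != [32]byte{}
	if isGloas && parentBlockFull && realBid {
		return latestBlockHash // normal post-fork case: key by the EL block hash
	}
	return blockParentRoot // pre-Gloas state or upgrade-seeded bid: key by the block's parent root
}
```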
// TestProcessSlotsForBlock_PreGloas verifies that ProcessSlotsForBlock uses
// b.ParentRoot() as access key on pre-Gloas (Fulu) states, unchanged by the fix.
func TestProcessSlotsForBlock_PreGloas(t *testing.T) {
ctx := context.Background()
parentRoot := [32]byte{0x01, 0x02, 0x03}
targetSlot := primitives.Slot(5)
// newGloasState creates a Gloas-versioned state, while this test targets the pre-Gloas path.
// ProcessSlotsForBlock has an explicit version check at the top (version < Gloas takes the
// legacy path), so we reuse newGloasState as a minimal base here and simply verify that
// slot advancement still works.
st := newGloasState(t, targetSlot-1, bytes.Repeat([]byte{0}, int(params.BeaconConfig().SlotsPerHistoricalRoot/8)))
blk := newGloasTestBlock(t, targetSlot, parentRoot)
out, err := ProcessSlotsForBlock(ctx, st, blk.Block())
require.NoError(t, err)
require.Equal(t, targetSlot, out.Slot())
}

View File

@@ -116,17 +116,32 @@ func CalculateStateRoot(
rollback state.BeaconState,
signed interfaces.ReadOnlySignedBeaconBlock,
) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "core.state.CalculateStateRoot")
st, err := CalculatePostState(ctx, rollback, signed)
if err != nil {
return [32]byte{}, err
}
return st.HashTreeRoot(ctx)
}
// CalculatePostState returns the post-block state after processing the given
// block on a copy of the input state. It is identical to CalculateStateRoot
// but returns the full state instead of just its hash tree root.
func CalculatePostState(
ctx context.Context,
rollback state.BeaconState,
signed interfaces.ReadOnlySignedBeaconBlock,
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "core.state.CalculatePostState")
defer span.End()
if ctx.Err() != nil {
tracing.AnnotateError(span, ctx.Err())
return [32]byte{}, ctx.Err()
return nil, ctx.Err()
}
if rollback == nil || rollback.IsNil() {
return [32]byte{}, errors.New("nil state")
return nil, errors.New("nil state")
}
if signed == nil || signed.IsNil() || signed.Block().IsNil() {
return [32]byte{}, errors.New("nil block")
return nil, errors.New("nil block")
}
// Copy state to avoid mutating the state reference.
@@ -136,22 +151,22 @@ func CalculateStateRoot(
var err error
state, err = ProcessSlotsForBlock(ctx, state, signed.Block())
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process slots")
return nil, errors.Wrap(err, "could not process slots")
}
// Execute per block transition.
if features.Get().EnableProposerPreprocessing {
state, err = processBlockForProposing(ctx, state, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block for proposing")
return nil, errors.Wrap(err, "could not process block for proposing")
}
} else {
state, err = ProcessBlockForStateRoot(ctx, state, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block")
return nil, errors.Wrap(err, "could not process block")
}
}
return state.HashTreeRoot(ctx)
return state, nil
}
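
With this split, CalculateStateRoot is a thin wrapper, and a caller that needs both the post-state and its root can avoid running the transition twice (sketch; package alias and surrounding context assumed):

```go
// Illustrative caller: compute the post-state once, then derive the root from it.
post, err := transition.CalculatePostState(ctx, rollbackState, signedBlock)
if err != nil {
	return [32]byte{}, errors.Wrap(err, "calculate post state")
}
root, err := post.HashTreeRoot(ctx)
if err != nil {
	return [32]byte{}, errors.Wrap(err, "hash tree root")
}
```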
// processBlockVerifySigs processes the block and verifies the signatures within it. Block signatures are not verified as this block is not yet signed.

View File

@@ -204,7 +204,7 @@ func (s *LazilyPersistentStoreColumn) columnsNotStored(sidecars []blocks.RODataC
 			sum = s.store.Summary(sc.BlockRoot())
 			lastRoot = sc.BlockRoot()
 		}
-		if sum.HasIndex(sc.Index) {
+		if sum.HasIndex(sc.Index()) {
 			stored[i] = struct{}{}
 		}
 	}
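This file and the ones after it apply a single mechanical migration: RODataColumn now exposes the column index through an Index() accessor rather than a public Index field. A minimal sketch of the pattern; the wrapper and field names are assumptions, not Prysm's actual definition:

```go
// roDataColumn illustrates the read-only wrapper shape: hiding the field
// behind an accessor lets the underlying representation change without
// touching the call sites updated throughout this diff.
type roDataColumn struct {
	index uint64 // hypothetical backing field
}

// Index returns the sidecar's column index.
func (c roDataColumn) Index() uint64 {
	return c.index
}
```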

View File

@@ -875,7 +875,7 @@ func TestColumnsNotStored(t *testing.T) {
 			if len(tc.stored) > 0 {
 				resultIndices := make(map[uint64]bool)
 				for _, col := range result {
-					resultIndices[col.Index] = true
+					resultIndices[col.Index()] = true
 				}
 				for _, storedIdx := range tc.stored {
 					require.Equal(t, false, resultIndices[storedIdx],
@@ -887,8 +887,8 @@ func TestColumnsNotStored(t *testing.T) {
 			if len(tc.expected) > 0 && len(tc.stored) == 0 {
 				// Only check exact order for non-stored cases (where we know they stay in same order)
 				for i, expectedIdx := range tc.expected {
-					require.Equal(t, columns[expectedIdx].Index, result[i].Index,
-						fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index, result[i].Index))
+					require.Equal(t, columns[expectedIdx].Index(), result[i].Index(),
+						fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index(), result[i].Index()))
 				}
 			}

View File

@@ -66,10 +66,11 @@ type dataColumnCacheEntry struct {
 // stash will return an error if the given data column Index is out of bounds.
 // It will overwrite any existing entry for the same index.
 func (e *dataColumnCacheEntry) stash(sc blocks.RODataColumn) error {
-	if sc.Index >= fieldparams.NumberOfColumns {
-		return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
+	index := sc.Index()
+	if index >= fieldparams.NumberOfColumns {
+		return errors.Wrapf(errColumnIndexTooHigh, "index=%d", index)
 	}
-	e.scs[sc.Index] = sc
+	e.scs[index] = sc
 	return nil
 }

View File

@@ -23,7 +23,7 @@ func TestEnsureDeleteSetDiskSummary(t *testing.T) {
 	expect, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
 	require.NoError(t, entry.stash(expect[0]))
 	require.Equal(t, 1, len(entry.scs))
-	cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index}))
+	cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index()}))
 	require.NoError(t, err)
 	require.DeepEqual(t, expect[0], cols[0])
@@ -109,10 +109,10 @@ func TestAppendDataColumns(t *testing.T) {
 		require.NoError(t, err)
 		require.Equal(t, len(expected), len(actual))
 		slices.SortFunc(actual, func(i, j blocks.RODataColumn) int {
-			return int(i.Index) - int(j.Index)
+			return int(i.Index()) - int(j.Index())
 		})
 		for i := range expected {
-			require.Equal(t, expected[i].Index, actual[i].Index)
+			require.Equal(t, expected[i].Index(), actual[i].Index())
 		}
 		require.Equal(t, 1, len(original))
 	})

View File

@@ -369,7 +369,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
 	// Group data column sidecars by root.
 	for _, dataColumnSidecar := range dataColumnSidecars {
 		// Check if the data column index is too large.
-		if dataColumnSidecar.Index >= mandatoryNumberOfColumns {
+		if dataColumnSidecar.Index() >= mandatoryNumberOfColumns {
 			return errDataColumnIndexTooLarge
 		}
@@ -396,7 +396,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
 	// Get all indices.
 	indices := make([]uint64, 0, len(dataColumnSidecars))
 	for _, dataColumnSidecar := range dataColumnSidecars {
-		indices = append(indices, dataColumnSidecar.Index)
+		indices = append(indices, dataColumnSidecar.Index())
 	}
 	// Compute the data columns ident.
@@ -546,7 +546,7 @@ func (dcs *DataColumnStorage) Get(root [fieldparams.RootLength]byte, indices []u
 		return nil, errors.Wrap(err, "seek")
 	}
-	verifiedRODataColumn, err := verification.VerifiedRODataColumnFromDisk(file, root, metadata.sszEncodedDataColumnSidecarSize)
+	verifiedRODataColumn, err := verification.VerifiedRODataColumnFromDisk(file, root, metadata.sszEncodedDataColumnSidecarSize, summary.epoch)
 	if err != nil {
 		return nil, errors.Wrap(err, "verified RO data column from disk")
 	}
@@ -733,7 +733,7 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
 	for _, dataColumnSidecar := range dataColumnSidecars {
 		// Extract the data columns index.
-		dataColumnIndex := dataColumnSidecar.Index
+		dataColumnIndex := dataColumnSidecar.Index()
 		ok, _, err := metadata.indices.get(dataColumnIndex)
 		if err != nil {
@@ -830,7 +830,7 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
 	for _, dataColumnSidecar := range dataColumnSidecars {
 		// Extract the data column index.
-		dataColumnIndex := dataColumnSidecar.Index
+		dataColumnIndex := dataColumnSidecar.Index()
 		// Skip if the data column is already stored.
 		ok, _, err := indices.get(dataColumnIndex)
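Beyond the accessor migration, the Get path above now threads summary.epoch into VerifiedRODataColumnFromDisk, so a sidecar reconstructed from disk carries its epoch along with the root and encoded size; the diff does not show why, but presumably downstream verification needs the epoch without re-deriving it. An illustrative read fragment; verifiedRODataColumns and the loop framing are assumptions:

```go
// Rebuild a verified sidecar from its on-disk encoding; epoch is the newly
// threaded argument taken from the file's summary.
vsc, err := verification.VerifiedRODataColumnFromDisk(file, root, metadata.sszEncodedDataColumnSidecarSize, summary.epoch)
if err != nil {
	return nil, errors.Wrap(err, "verified RO data column from disk")
}
verifiedRODataColumns = append(verifiedRODataColumns, vsc) // hypothetical accumulator
```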

Some files were not shown because too many files have changed in this diff.