Compare commits

...

20 Commits

Author SHA1 Message Date
terence tsao
c77afbdbc4 Implement process exec payload 2026-01-03 15:34:39 -08:00
Manu NALEPA
0db74365e0 Summarize "Accepted data column sidecars summary" log. (#16210)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**

**Before:**
```
[2026-01-02 13:29:50.13] DEBUG sync: Accepted data column sidecars summary columnIndices=[0 1 6 7 8 9 10 11 12 13 14 15 16 18 23 28 29 31 32 35 37 38 39 40 41 42 43 45 47 48 49 50 51 52 55 58 59 60 62 65 66 68 70 73 74 75 76 78 79 81 83 84 88 89 90 93 94 95 96 98 99 103 105 106 107 108 109 110 111 113 114 115 117 118 119 121 122] gossipScores=[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] peers=[rjzcRC oxj6o4 HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE HCT2LE aZAzfp HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE aZAzfp oxj6o4 oxj6o4 YdJQCg oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE HCT2LE 5jMhEK HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE rjzcRC oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE HCT2LE HCT2LE oxj6o4] receivedCount=77 sinceStartTimes=[869.00ms 845.00ms 797.00ms 795.00ms 805.00ms 906.00ms 844.00ms 849.00ms 843.00ms 844.00ms 821.00ms 796.00ms 794.00ms 796.00ms 838.00ms 842.00ms 843.00ms 848.00ms 795.00ms 820.00ms 797.00ms 830.00ms 801.00ms 794.00ms 925.00ms 924.00ms 935.00ms 843.00ms 802.00ms 796.00ms 802.00ms 798.00ms 794.00ms 796.00ms 796.00ms 843.00ms 802.00ms 830.00ms 826.00ms 796.00ms 819.00ms 801.00ms 852.00ms 877.00ms 876.00ms 843.00ms 843.00ms 844.00ms 1138.00ms 843.00ms 886.00ms 805.00ms 794.00ms 844.00ms 909.00ms 845.00ms 889.00ms 798.00ms 792.00ms 843.00ms 878.00ms 802.00ms 798.00ms 849.00ms 826.00ms 815.00ms 844.00ms 797.00ms 795.00ms 798.00ms 843.00ms 844.00ms 845.00ms 845.00ms 867.00ms 805.00ms 800.00ms] slot=2095599 validationTimes=[399.00ms 423.00ms 470.00ms 472.00ms 463.00ms 362.00ms 423.00ms 419.00ms 425.00ms 423.00ms 446.00ms 471.00ms 473.00ms 471.00ms 429.00ms 425.00ms 424.00ms 419.00ms 471.00ms 448.00ms 470.00ms 437.00ms 467.00ms 472.00ms 342.00ms 343.00ms 332.00ms 424.00ms 465.00ms 471.00ms 465.00ms 469.00ms 473.00ms 470.00ms 470.00ms 424.00ms 466.00ms 438.00ms 442.00ms 471.00ms 448.00ms 467.00ms 416.00ms 390.00ms 392.00ms 424.00ms 425.00ms 423.00ms 140.00ms 424.00ms 381.00ms 462.00ms 473.00ms 423.00ms 359.00ms 423.00ms 378.00ms 469.00ms 475.00ms 425.00ms 390.00ms 465.00ms 469.00ms 419.00ms 442.00ms 452.00ms 423.00ms 470.00ms 473.00ms 469.00ms 424.00ms 423.00ms 423.00ms 423.00ms 400.00ms 462.00ms 467.00ms]
```


**After:**
```
[2026-01-02 16:48:48.61] DEBUG sync: Accepted data column sidecars summary count=31 indices=0-1,3-5,7,21,24,27,29,36-37,46,48,55,57,66,70,76,82,89,93-94,97,99-101,113,120,124,126 root=0x409a4eac4761a3199f60dec0dfe50b6eed91e29d6c3671bb61704401906d2b69 sinceStartTime=[min: 512.181127ms, avg: 541.358688ms, max: 557.074707ms] slot=2096594 validationTime=[min: 13.357515ms, avg: 55.1343ms, max: 73.909889ms]
```

Distributions are still available via metrics:
<img width="792" height="309" alt="image"
src="https://github.com/user-attachments/assets/15128283-6740-4387-b205-41fb18205f54"
/>

<img width="799" height="322" alt="image"
src="https://github.com/user-attachments/assets/e0d602fa-db06-4cd3-8ec7-1ee2671c9921"
/>


**Which issue(s) does this PR fix?**

Fixes:
- https://github.com/OffchainLabs/prysm/issues/16208

**Other notes for review**

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-02 17:09:30 +00:00
Potuz
6f90101364 Use proposer lookahead for data column verification (#16202)
Replace the proposer indices cache usage in data column sidecar
verification with direct state lookahead access. Since data column
sidecars require the Fulu fork, the state always has a ProposerLookahead
field that provides O(1) proposer index lookups for current and next
epoch.

This simplifies SidecarProposerExpected() by removing:
- Checkpoint-based proposer cache lookup
- Singleflight wrapper (not needed for O(1) access)
- Target root computation for cache keys
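
A minimal sketch of the O(1) lookup this enables, assuming a flat lookahead slice covering the current and next epoch (names and types are illustrative, not Prysm's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

const slotsPerEpoch = 32

// expectedProposer indexes directly into the state's proposer lookahead, which
// covers the current and next epoch, so no cache lookup, singleflight wrapper
// or target-root computation is needed.
func expectedProposer(lookahead []uint64, stateSlot, sidecarSlot uint64) (uint64, error) {
	currentEpochStart := stateSlot - stateSlot%slotsPerEpoch
	if sidecarSlot < currentEpochStart {
		return 0, errors.New("sidecar slot is before the lookahead window")
	}
	offset := sidecarSlot - currentEpochStart
	if offset >= uint64(len(lookahead)) {
		return 0, errors.New("sidecar slot is after the lookahead window")
	}
	return lookahead[offset], nil
}

func main() {
	lookahead := make([]uint64, 2*slotsPerEpoch) // two epochs of proposer indices
	lookahead[33] = 855082
	fmt.Println(expectedProposer(lookahead, 96, 129)) // prints: 855082 <nil>
}
```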

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 17:01:53 +00:00
Manu NALEPA
49e1763ec2 Data columns cache warmup: Parallelize computation of all files for a given epoch. (#16207)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Before this PR, all `.sszs` files containing the data column sidecars
were read and processed sequentially, which took some time.
After this PR, all `.sszs` files of a given epoch (so, up to 32 files
with the current `SLOTS_PER_EPOCH` value) are processed in parallel.
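
A minimal sketch of the parallelization pattern, assuming a hypothetical `processFile` callback and `golang.org/x/sync/errgroup` (the actual warm-up code may be structured differently):

```go
package warmup

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// warmUpEpoch processes every .sszs file of a given epoch concurrently instead
// of sequentially; with SLOTS_PER_EPOCH=32 there are at most 32 files in flight.
func warmUpEpoch(ctx context.Context, paths []string, processFile func(context.Context, string) error) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, path := range paths {
		path := path // capture for the goroutine (pre-Go 1.22 semantics)
		g.Go(func() error {
			if err := processFile(ctx, path); err != nil {
				return fmt.Errorf("process %s: %w", path, err)
			}
			return nil
		})
	}
	return g.Wait() // the first error, if any, cancels the remaining work via ctx
}
```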

**Which issue(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16204

Tested on a [Netcup VPS 4000 G11](https://www.netcup.com/en/server/vps).
**Before this PR (3 trials)**:
```
[2026-01-02 08:55:12.71]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=1m22.894007534s
[2026-01-02 12:59:33.62]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=42.346732863s
[2026-01-02 13:03:13.65]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=56.143565960s
```

**After this PR (3 trials)**:
```
[2026-01-02 12:50:07.53]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.019424193s
[2026-01-02 12:52:01.34]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=1.960671225s
[2026-01-02 12:53:34.66]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.549555363s
```


**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-02 16:59:55 +00:00
Potuz
c2527c82cd Use a separate context when updating the NSC (#16209)
There is a race condition, introduced in #16149, in which the update to
the NSC happens with a context that may already be cancelled by the time
the routine is called. This PR starts a new context with a deadline and
calls the routine in the background.
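
A minimal sketch of the pattern, assuming a long-lived service context and an illustrative `slotDeadline` value; the point is that the background routine derives its deadline from the service context instead of inheriting a request-scoped context that may already be cancelled:

```go
package blockchain

import (
	"context"
	"time"
)

// slotDeadline bounds the background work (illustrative value).
const slotDeadline = 4 * time.Second

// runDetached runs fn with a fresh deadline derived from the long-lived
// service context svcCtx, so a cancelled caller context cannot abort the NSC
// update after the fact.
func runDetached(svcCtx context.Context, fn func(context.Context) error, onErr func(error)) {
	go func() {
		ctx, cancel := context.WithTimeout(svcCtx, slotDeadline)
		defer cancel()
		if err := fn(ctx); err != nil {
			onErr(err)
		}
	}()
}
```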

fixes #16205
2026-01-02 16:43:34 +00:00
Potuz
d4ea8fafd6 Call FCU in the background (#16149)
This PR introduces several simplifications to block processing.

It notifies the engine in the background when forkchoice needs to
be updated.

It no longer updates the caches and processes the epoch transition before
computing payload attributes, since this is no longer needed after Fulu.

It removes a complicated second call to FCU with the same head after
processing the last slot of the epoch.
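
A condensed sketch of the new flow, based on the diff further down (simplified; the regular-sync check and lock handling details are omitted):

```go
package blockchain

import "context"

// sendFCUSketch decides whether an engine notification is needed and, if so,
// runs it in the background so block processing does not block on the
// execution client.
func sendFCUSketch(svcCtx context.Context, newHead, hasAttributes, alreadyOverrode bool, notify func(context.Context) error, onErr func(error)) {
	// Skip the FCU entirely if the head did not change and there is nothing to
	// send, or if a late-block reorg override already sent attributes.
	if !newHead && !hasAttributes {
		return
	}
	if hasAttributes && alreadyOverrode {
		return
	}
	go func() {
		// The background call uses the long-lived service context and acquires
		// the forkchoice lock itself; the caller must not hold it.
		if err := notify(svcCtx); err != nil {
			onErr(err)
		}
	}()
}
```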

Some checks for reviewers:

- The single caller of sendFCU held a lock on forkchoice. Since the call
is now in the background, this helper can acquire the lock itself.
- All paths to handleEpochBoundary are now **NOT** locked. This allows
the lock needed to get the target root to be taken locally, in place.
- The checkpoint cache is completely useless, and thus the target root
call could be removed. But removing the proposer ID cache is more
complicated and out of scope for this PR.
- lateBlockTasks has pre- and post-Fulu casing; we could remove the
pre-Fulu checks and defer to the update function if deemed cleaner.
- Conversely, postBlockProcess does not have this casing, so proposals on
top of pre-Fulu blocks received on gossip may fail because the proposer
index is not correctly computed.
2025-12-30 21:01:34 +00:00
kasey
07d1d6bdf9 Fix validation bug in --backfill-oldest-slot (#16173)
**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

Validation of `--backfill-oldest-slot` fails for values > 1056767,
because the validation code compares slot/32 to
`MIN_EPOCHS_FOR_BLOCK_REQUESTS` (33024) instead of comparing it to
`current_epoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS`.
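
A minimal sketch of the corrected boundary computation (the constant value is mainnet's; the flag's epoch, i.e. slot/32, is then checked against this boundary instead of the bare constant):

```go
package backfill

const minEpochsForBlockRequests = 33024 // MIN_EPOCHS_FOR_BLOCK_REQUESTS (mainnet)

// minRetainedEpoch returns current_epoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS,
// clamped at zero. Comparing the flag's epoch against the bare constant
// instead of this value is what rejected every value above 1056767.
func minRetainedEpoch(currentEpoch uint64) uint64 {
	if currentEpoch <= minEpochsForBlockRequests {
		return 0
	}
	return currentEpoch - minEpochsForBlockRequests
}
```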

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-12-29 20:35:46 +00:00
Potuz
f938da99d9 Use head to validate atts for previous epoch (#16109)
In the event that the target checkpoint of an attestation is for the
previous epoch, and the head state has the same dependent root at that
epoch. The reason being that this guarantees that both seed and active
validator indices are guaranteed to be the same at the checkpoint's
epoch, from the point of view of the attester (even on a different
branch) and the head view.
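
A minimal sketch of the condition, with illustrative names (the actual check compares dependent roots via forkchoice, as in the `getRecentPreState` diff below):

```go
package blockchain

// canUseHeadState reports whether the head state can validate an attestation
// whose target checkpoint is in the current or previous epoch: sharing the
// dependent root for that epoch fixes both the shuffling seed and the active
// validator set, so head and attester agree on committees even across branches.
func canUseHeadState(headEpoch, targetEpoch uint64, headDependentRoot, targetDependentRoot [32]byte) bool {
	if targetEpoch+1 < headEpoch {
		return false // the target is too old for the head state to answer
	}
	return headDependentRoot == targetDependentRoot
}
```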
2025-12-29 20:07:21 +00:00
Potuz
9deec69cc7 Do not verify block signature on block processing (#14820)
Verifying the block signature adds a batch and performs a full hash of
the block unnecessarily.
2025-12-29 19:52:38 +00:00
Potuz
2767f08f4d Do not send FCU on block batches (#16199)
On block batches the engine does not need to be notified via FCU; it is
only useful to notify the engine at the end of sync, when regular sync
resumes.
2025-12-29 11:39:12 +00:00
Radosław Kapka
d46c620783 Extend httperror analyzer to more functions (#16186)
**What type of PR is this?**

Tooling

**What does this PR do? Why is it needed?**

Renames `httperror` analyzer to `httpwriter` and extends it to the
following functions:
- `WriteError`
- `WriteJson`
- `WriteSsz`

_**NOTE: The PR is currently red because the fix in
https://github.com/OffchainLabs/prysm/pull/16175 must be merged first**_

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-23 16:53:01 +00:00
sashass1315
dd05e44ef3 fix: avoid panic when fork schedule is empty (#16175)
SortedForkSchedule should never be empty for a properly initialized
network schedule, but the handler already had a branch to support an
empty result. Without an early return, we wrote a JSON response and then
still accessed schedule[0], which could panic and double-write the HTTP
response in misconfigured setups.
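
A minimal sketch of the early-return pattern, with hypothetical types and handler name (the real handler is the fork schedule endpoint in Prysm's beacon API):

```go
package handlers

import (
	"encoding/json"
	"net/http"
)

type fork struct {
	Epoch   uint64 `json:"epoch"`
	Version string `json:"version"`
}

type forkScheduleResponse struct {
	Data []fork `json:"data"`
}

// getForkSchedule returns early after writing the empty-schedule response, so
// the code below can safely index schedule[0] and the response is never
// written twice.
func getForkSchedule(w http.ResponseWriter, _ *http.Request, schedule []fork) {
	if len(schedule) == 0 {
		_ = json.NewEncoder(w).Encode(forkScheduleResponse{Data: []fork{}})
		return // without this return, schedule[0] below could panic and double-write
	}
	_ = schedule[0] // ...normal processing of a non-empty schedule...
	_ = json.NewEncoder(w).Encode(forkScheduleResponse{Data: schedule})
}
```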

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-23 15:46:21 +00:00
satushh
9da36a5de6 Use HasPendingBalanceToWithdraw instead of PendingBalanceToWithdraw in ProcessConsolidationRequests (#16189)

**What type of PR is this?**

Performance

**What does this PR do? Why is it needed?**

`PendingBalanceToWithdraw` was used to compute `bal` only to check
later whether `bal` is greater than 0. There is no need to calculate the
full balance: we can simply check whether it is greater than 0 using the
existing `HasPendingBalanceToWithdraw` function, which avoids some
unnecessary computation.

`HasPendingBalanceToWithdraw` returns immediately on the first non-zero
entry, while `PendingBalanceToWithdraw` always iterates through all
entries to compute the sum.
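
A minimal sketch of the difference, using illustrative types rather than the actual state accessors:

```go
package electra

type pendingWithdrawal struct {
	ValidatorIndex uint64
	Amount         uint64
}

// pendingBalanceToWithdraw sums every matching entry, even when the caller
// only needs to know whether the sum is non-zero.
func pendingBalanceToWithdraw(ws []pendingWithdrawal, idx uint64) uint64 {
	var total uint64
	for _, w := range ws {
		if w.ValidatorIndex == idx {
			total += w.Amount
		}
	}
	return total
}

// hasPendingBalanceToWithdraw returns as soon as it finds the first non-zero
// matching entry, which is all ProcessConsolidationRequests needs.
func hasPendingBalanceToWithdraw(ws []pendingWithdrawal, idx uint64) bool {
	for _, w := range ws {
		if w.ValidatorIndex == idx && w.Amount > 0 {
			return true
		}
	}
	return false
}
```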

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-22 18:16:14 +00:00
terence
7950a24926 feat(primitives): add BuilderIndex SSZ type (#16169)
This PR adds `primitives.BuilderIndex` for builder registry indexing in
Gloas.
2025-12-20 04:29:42 +00:00
Potuz
ea51253be9 Do not process slots and copy state for payload attributes post Fulu (#16168)
When computing payload attributes post-Fulu, we do not need to process
slots, nor copy the state if we need to find out if the node is
proposing in the next slot. This prevents an immediate epoch processing
after block 31 is processed unless we are actually proposing.
2025-12-19 22:03:52 +00:00
Manu NALEPA
2ac30f5ce6 Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them. (#16153)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
When a (potentially aggregated) attestation is received **before** the
block it votes for, Prysm queues this attestation, then processes the
queue once the block has been received.

This behavior is consistent with the [Phase0 specification
](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#beacon_attestation_subnet_id).

> [IGNORE] The block being voted for
(attestation.data.beacon_block_root) has been seen (via gossip or
non-gossip sources) (a client MAY queue attestations for processing once
block is retrieved).

Once the block being voted for is processed, previously queued
(potentially aggregated) attestations are then processed, and
broadcasted.

Processing (potentially aggregated) attestations takes some
non-negligible time. For this reason, (potentially aggregated)
attestations are deduplicated before being introduced into the pending
queue, to avoid eventually processing duplicates.

Before this PR, two aggregated attestations were considered duplicates
if all of the following conditions were met:
1. Attestations have the same version, 
2. **Attestations have the same aggregator index (aka., the same
validator aggregated them)**,
3. Attestations have the same slot, 
4. Attestations have the same committee index, and
5. Attestations have the same aggregation bits

Aggregated attestations are then broadcast. The final purpose of
aggregated attestations is to be packed into the next block by the next
proposer. When packing attestations, the aggregator index is no longer
used.

This pull request modifies the deduplication function used in the
pending aggregated attestations queue so that multiple aggregated
attestations differing only by the aggregator index are considered
equivalent (removing `2.` from the previous list).

As a consequence, the count of aggregated attestations introduced into
the pending queue is reduced from one aggregated attestation per
aggregator to, in the best case,
[MAX_COMMITTEES_PER_SLOT=64](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md#misc-1).

Also, only a single aggregated attestation for a given version, slot,
committee index and aggregation bits will be re-broadcast. This is
correct behavior, since no data to be included in a block is lost. (We
can even say that this slightly reduces the total networking volume.)
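
A minimal sketch of the new deduplication key, with illustrative field names: the key covers the version, slot, committee index and aggregation bits, but no longer the aggregator index:

```go
package sync

import "fmt"

type aggregateAndProof struct {
	Version         int
	AggregatorIndex uint64 // intentionally NOT part of the dedup key any more
	Slot            uint64
	CommitteeIndex  uint64
	AggregationBits []byte
}

// dedupKey treats two aggregates that differ only by aggregator index as
// equivalent: once packed into a block, the aggregator index is irrelevant.
func dedupKey(a aggregateAndProof) string {
	return fmt.Sprintf("%d:%d:%d:%x", a.Version, a.Slot, a.CommitteeIndex, a.AggregationBits)
}

// insertPending keeps at most one aggregate per key in the pending queue.
func insertPending(queue map[string]aggregateAndProof, a aggregateAndProof) {
	if _, ok := queue[dedupKey(a)]; ok {
		return
	}
	queue[dedupKey(a)] = a
}
```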

**How to test**:
1. Start a beacon node (preferably, on a slow computer) from a
checkpoint.
2. Filter logs containing `Synced new block` and `Verified and saved
pending attestations to pool`. (You can pipe logs into `grep -E "Synced
new block|Verified and saved pending attestations to pool"`.)

- In `Synced new block` logs, monitor the `sinceSlotStartTime` value.
This should monotonically decrease.
- In `Verified and saved pending attestations to pool` logs, monitor the
`pendingAggregateAttAndProofCount` value. It should be an "honest"
value. "Honest" is not really quantifiable here, since it depends on the
aggregators, but it is likely to be less than
`5*MAX_COMMITTEES_PER_SLOT=320`.

**Which issue(s) does this PR fix?**

Partially fixes:
- https://github.com/OffchainLabs/prysm/issues/16160

**Other notes for review**
Please read commit by commit, with commit messages.
The important commit is b748c04a67.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-19 14:05:50 +00:00
Manu NALEPA
7418c00ad6 validateDataColumn: Remove error logs. (#16157)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
When we receive data column sidecars via gossip, if a sidecar does not
respect the validation rules, a scary ERROR log is displayed. We can't
do anything about it, since the error comes from an invalid incoming
sidecar, so there is no need to print an ERROR message.

Note: As with all REJECTED gossip messages, a DEBUG log is also always
displayed.

Example of ERROR log:
```
[2025-12-18 15:38:26.46] ERROR sync: Failed to decode message error=invalid ssz encoding. first variable element offset indexes into fixed value data
[2025-12-18 15:38:26.46] DEBUG sync: Gossip message was rejected agent=erigon/caplin error=invalid ssz encoding. first variable element offset indexes into fixed value data gossipScore=0 multiaddress=/ip4/141.147.32.105/tcp/9000 peerID=16Uiu2HAmHu88k97iBist1vJg7cPNuTjJFRARKvDF7yaH3Pv3Vmso topic=/eth2/c6ecb76c/data_column_sidecar_30/ssz_snappy
```

(After this PR, the DEBUG one will still be printed.)

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-18 16:18:02 +00:00
james-prysm
66342655fd throw 503 error when submit attestation and sync committee are called on syncing node + align changes to gRPC (#16152)

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

Prysm started throwing the error `Could not write response message"
error="write tcp 10.104.92.212:5052->10.104.92.196:41876: write: broken
pipe` because a validator got attestation data from a synced node and
submitted the attestation to a syncing node. When the node couldn't
replay the state, the validator's context deadline was exceeded and it
disconnected, so when the writer finally responded and tried to write,
it got this broken pipe error.

This applies to `/eth/v2/beacon/pool/attestations` and
`/eth/v1/beacon/pool/sync_committees`.

The solution has two parts (see the sketch below):
1. We shouldn't allow submission of an attestation if the node is
syncing, because we can't save the attestation without the state
information.
2. The REST handler was doing the expensive state call before
broadcasting; it now matches gRPC, where that work happens afterward in
its own goroutine.
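
A minimal sketch of the two parts, with hypothetical helper names (the actual handlers are the REST attestation and sync committee pool endpoints):

```go
package handlers

import "net/http"

// syncChecker reports whether the beacon node is still syncing.
type syncChecker interface {
	Syncing() bool
}

// submitAttestations rejects submissions with 503 while the node is syncing,
// since the attestation cannot be saved without state information, and moves
// the expensive state work after broadcast into its own goroutine, matching
// the gRPC path.
func submitAttestations(sc syncChecker, broadcast func(), saveToPool func()) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if sc.Syncing() {
			http.Error(w, "beacon node is syncing", http.StatusServiceUnavailable)
			return
		}
		broadcast()
		go saveToPool() // state replay and pool update are off the request path
		w.WriteHeader(http.StatusOK)
	}
}
```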

Tested manually by running Kurtosis with REST validators:

```
participants:
 # Super-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   count: 2
   supernode: true
   cl_extra_params:
     - --subscribe-all-subnets
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 # Full-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   validator_count: 63
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug
   validator_count: 13

additional_services:
 - dora
 - spamoor

spamoor_params:
 image: ethpandaops/spamoor:master
 max_mem: 4000
 spammers:
   - scenario: eoatx
     config:
       throughput: 200
   - scenario: blobs
     config:
       throughput: 20

network_params:
  fulu_fork_epoch: 2
  bpo_1_epoch: 8
  bpo_1_max_blobs: 21
  withdrawal_type: "0x02"
  preset: mainnet
  seconds_per_slot: 6

global_log_level: debug
```

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-18 15:07:09 +00:00
Bastin
18eca953c1 Fix lightclient p2p bug (#16151)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
This PR fixes the LC p2p `fork version not recognized` bug. It adds
object mappings for the LC types for Fulu, and fixes tests to cover such
cases in the future.
2025-12-17 20:45:06 +00:00
Manu NALEPA
8191bb5711 Construct data column sidecars from the execution layer in parallel and add metrics (#16115)
**What type of PR is this?**
Optimisation

**What does this PR do? Why is it needed?**
While constructing data column sidecars from the execution layer is
very cheap compared to reconstructing them from other data column
sidecars, it is still worthwhile to run this construction in parallel.

(**Reminder:** With `getBlobsV2`, all the cell proofs are present, but
only 64 (out of 128) cells are. Recomputing the missing cells is cheap,
while reconstructing the missing proofs is expensive.)
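
A minimal sketch of the parallel construction pattern, with a hypothetical `extend` callback standing in for the cheap cell recomputation (the actual code also records the new metrics):

```go
package das

import (
	"context"
	"runtime"

	"golang.org/x/sync/errgroup"
)

// blobCellsAndProofs is an illustrative stand-in for a getBlobsV2 result: all
// 128 cell proofs are supplied, but only the first 64 cells, so the extension
// cells must be recomputed before data column sidecars can be built.
type blobCellsAndProofs struct {
	Cells  [][]byte // 64 supplied cells, to be extended to 128
	Proofs [][]byte // 128 proofs, already supplied
}

// extendInParallel runs the per-blob cell extension concurrently instead of
// sequentially, bounded by the number of available CPUs.
func extendInParallel(ctx context.Context, blobs []blobCellsAndProofs, extend func(context.Context, blobCellsAndProofs) error) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(runtime.GOMAXPROCS(0)) // cap concurrency at the CPU count
	for i := range blobs {
		blob := blobs[i]
		g.Go(func() error { return extend(ctx, blob) })
	}
	return g.Wait()
}
```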

This PR:
- adds some metrics
- ensures the construction is done in parallel

**Other notes for review**
Please read commit by commit

The red vertical lines mark the boundary between before and after this
pull request:
<img width="1575" height="603" alt="image"
src="https://github.com/user-attachments/assets/24811b1b-8e3c-4bf5-ac82-f920d385573a"
/>

The last commit transforms the bottom-right histogram into a summary,
since it no longer makes sense to have a histogram for these values.

Please check "hide whitespace" so this PR is easier to review:
<img width="229" height="196" alt="image"
src="https://github.com/user-attachments/assets/548cb2f4-b6f4-41d1-b3b3-4d4c8554f390"
/>

Updated metrics:



Now, for every **non-missed slot** whose block has **at least one
commitment**, we have either:
```
[2025-12-10 10:02:12.93] DEBUG sync: Constructed data column sidecars from the execution client count=118 indices=0-5,7-16,18-27,29-35,37-46,48-49,51-82,84-100,102-106,108-125,127 iteration=0 proposerIndex=855082 root=0xf8f44e7d4cbc209b2ff2796c07fcf91e85ab45eebe145c4372017a18b25bf290 slot=1928961 type=BeaconBlock
```

or
```
[2025-12-10 10:02:25.69] DEBUG sync: No data column sidecars constructed from the execution client iteration=2 proposerIndex=1093657 root=0x64c2f6c31e369cd45f2edaf5524b64f4869e8148cd29fb84b5b8866be529eea3 slot=1928962 type=DataColumnSidecar
```
<img width="1581" height="957" alt="image"
src="https://github.com/user-attachments/assets/514dbdae-ef14-47e2-9127-502ac6d26bc0"
/>
<img width="1596" height="916" alt="image"
src="https://github.com/user-attachments/assets/343d4710-4191-49e8-98be-afe70d5ffe1c"
/>



**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-16 16:27:32 +00:00
103 changed files with 3857 additions and 1522 deletions

View File

@@ -193,7 +193,7 @@ nogo(
"//tools/analyzers/featureconfig:go_default_library",
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/httperror:go_default_library",
"//tools/analyzers/httpwriter:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",

View File

@@ -323,14 +323,17 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
var ok bool
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(st.Slot())
if e == stateEpoch {
fuluAndNextEpoch := st.Version() >= version.Fulu && e == stateEpoch+1
if e == stateEpoch || fuluAndNextEpoch {
val, ok = s.trackedProposer(st, slot)
if !ok {
return emptyAttri
}
}
st = st.Copy()
if slot > st.Slot() {
// At this point either we know we are proposing on a future slot or we need to still compute the
// right proposer index pre-Fulu, either way we need to copy the state to process it.
st = st.Copy()
var err error
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, slot)
if err != nil {
@@ -338,7 +341,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
return emptyAttri
}
}
if e > stateEpoch {
if e > stateEpoch && !fuluAndNextEpoch {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
val, ok = s.trackedProposer(st, slot)
if !ok {

View File

@@ -1053,40 +1053,3 @@ func TestKZGCommitmentToVersionedHashes(t *testing.T) {
require.Equal(t, vhs[0].String(), vh0)
require.Equal(t, vhs[1].String(), vh1)
}
func TestComputePayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
blk := util.NewBeaconBlockBellatrix()
signed, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(signed, [32]byte{'a'})
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
}
fcu := &fcuConfig{
headState: st,
proposingSlot: slot,
headRoot: [32]byte{},
}
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()).String())
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()))
}

View File

@@ -12,6 +12,7 @@ import (
payloadattribute "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -53,58 +54,53 @@ type fcuConfig struct {
}
// sendFCU handles the logic to notify the engine of a forckhoice update
// for the first time when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions when the
// incoming block is late, preparing payload attributes in this case while it
// only sends a message with empty attributes for early blocks.
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if !s.isNewHead(cfg.headRoot) {
return nil
// when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions .
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
if cfg.postState.Version() < version.Fulu {
// update the caches to compute the right proposer index
// this function is called under a forkchoice lock which we need to release.
s.ForkChoicer().Unlock()
s.updateCachesPostBlockProcessing(cfg)
s.ForkChoicer().Lock()
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return
}
// If head has not been updated and attributes are nil, we can skip the FCU.
if !s.isNewHead(cfg.headRoot) && (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) {
return
}
// If we are proposing and we aim to reorg the block, we have already sent FCU with attributes on lateBlockTasks
if fcuArgs.attributes != nil && !fcuArgs.attributes.IsEmpty() && s.shouldOverrideFCU(cfg.headRoot, s.CurrentSlot()+1) {
return nil
return
}
if s.inRegularSync() {
go s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
return s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
// sendFCUWithAttributes computes the payload attributes and sends an FCU message
// to the engine if needed
func (s *Service) sendFCUWithAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
cfg.ctx = slotCtx
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if err := s.computePayloadAttributes(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not compute payload attributes")
return
}
if fcuArgs.attributes.IsEmpty() {
return
}
if _, err := s.notifyForkchoiceUpdate(cfg.ctx, fcuArgs); err != nil {
log.WithError(err).Error("Could not update forkchoice with payload attributes for proposal")
if s.isNewHead(fcuArgs.headRoot) {
if err := s.saveHead(cfg.ctx, fcuArgs.headRoot, fcuArgs.headBlock, fcuArgs.headState); err != nil {
log.WithError(err).Error("Could not save head")
}
s.pruneAttsFromPool(s.ctx, fcuArgs.headState, fcuArgs.headBlock)
}
}
// fockchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) error {
// fockchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It gets a forkchoice lock and calls the engine.
// The caller of this function should NOT have a lock in forkchoice store.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) {
_, span := trace.StartSpan(ctx, "beacon-chain.blockchain.forkchoiceUpdateWithExecution")
defer span.End()
// Note: Use the service context here to avoid the parent context being ended during a forkchoice update.
ctx = trace.NewContext(s.ctx, span)
s.ForkChoicer().Lock()
defer s.ForkChoicer().Unlock()
_, err := s.notifyForkchoiceUpdate(ctx, args)
if err != nil {
return errors.Wrap(err, "could not notify forkchoice update")
log.WithError(err).Error("Could not notify forkchoice update")
}
if err := s.saveHead(ctx, args.headRoot, args.headBlock, args.headState); err != nil {
log.WithError(err).Error("Could not save head")
}
// Only need to prune attestations from pool if the head has changed.
s.pruneAttsFromPool(s.ctx, args.headState, args.headBlock)
return nil
}
// shouldOverrideFCU checks whether the incoming block is still subject to being

View File

@@ -97,7 +97,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
headBlock: wsb,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
service.forkchoiceUpdateWithExecution(ctx, args)
payloadID, has := service.cfg.PayloadIDCache.PayloadID(2, [32]byte{2})
require.Equal(t, true, has)
@@ -151,7 +151,7 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
headRoot: r,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
service.forkchoiceUpdateWithExecution(ctx, args)
}
func TestShouldOverrideFCU(t *testing.T) {

View File

@@ -22,7 +22,7 @@ import (
// The caller of this function must have a lock on forkchoice.
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch || c.Epoch == 0 {
if c.Epoch+1 < headEpoch || c.Epoch == 0 {
return nil
}
// Only use head state if the head state is compatible with the target checkpoint.
@@ -30,11 +30,13 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
if err != nil {
return nil
}
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch-1)
// headEpoch - 1 equals c.Epoch if c is from the previous epoch and equals c.Epoch - 1 if c is from the current epoch.
// We don't use the smaller c.Epoch - 1 because forkchoice would not have the data to answer that.
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), headEpoch-1)
if err != nil {
return nil
}
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch-1)
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), headEpoch-1)
if err != nil {
return nil
}
@@ -43,7 +45,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
}
// If the head state alone is enough, we can return it directly read only.
if c.Epoch == headEpoch {
if c.Epoch <= headEpoch {
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
return nil

View File

@@ -170,12 +170,13 @@ func TestService_GetRecentPreState(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, blk, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
block: blk,
slot: 31,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
@@ -197,12 +198,13 @@ func TestService_GetRecentPreState_Old_Checkpoint(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, blk, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
block: blk,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
@@ -227,6 +229,7 @@ func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
headBlock := blk
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'U'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
@@ -235,8 +238,9 @@ func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
service.head = &head{
root: [32]byte{'T'},
state: s,
block: headBlock,
slot: 64,
state: s,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
@@ -263,6 +267,7 @@ func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'U'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
headBlock := blk
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'V'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
@@ -270,7 +275,8 @@ func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
root: [32]byte{'U'},
block: headBlock,
state: s,
slot: 64,
}
@@ -287,12 +293,13 @@ func TestService_GetRecentPreState_Different(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, blk, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
block: blk,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))

View File

@@ -66,9 +66,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
startTime := time.Now()
fcuArgs := &fcuConfig{}
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
}
@@ -105,14 +102,17 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
s.logNonCanonicalBlockReceived(cfg.roblock.Root(), cfg.headRoot)
return nil
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
s.sendFCU(cfg, fcuArgs)
// Pre-Fulu the caches are updated when computing the payload attributes
if cfg.postState.Version() >= version.Fulu {
go func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDeadline)
defer cancel()
cfg.ctx = ctx
s.updateCachesPostBlockProcessing(cfg)
}()
}
return nil
}
@@ -295,14 +295,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
arg := &fcuConfig{
headState: preState,
headRoot: lastBR,
headBlock: lastB,
}
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return err
}
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
@@ -330,6 +322,7 @@ func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.Availability
return nil
}
// the caller of this function must not hold a lock in forkchoice store.
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -359,7 +352,9 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
if e > 0 {
e = e - 1
}
s.ForkChoicer().RLock()
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
s.ForkChoicer().RUnlock()
if err != nil {
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
@@ -372,7 +367,7 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
}
// Epoch boundary tasks: it copies the headState and updates the epoch boundary
// caches.
// caches. The caller of this function must not hold a lock in forkchoice store.
func (s *Service) handleEpochBoundary(ctx context.Context, slot primitives.Slot, headState state.BeaconState, blockRoot []byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
defer span.End()
@@ -912,8 +907,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if currentSlot == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
@@ -926,14 +919,32 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if lastState == nil {
lastRoot, lastState = headRoot[:], headState
}
// Copy all the field tries in our cached state in the event of late
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("Could not update epoch boundary caches")
// Before Fulu we need to process the next slot to find out if we are proposing.
if lastState.Version() < version.Fulu {
// Copy all the field tries in our cached state in the event of late
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("Could not update epoch boundary caches")
}
} else {
// After Fulu, we can update the caches asynchronously after sending FCU to the engine
defer func() {
go func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDeadline)
defer cancel()
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("Could not update epoch boundary caches")
}
}()
}()
}
// return early if we already started building a block for the current
// head root
@@ -963,6 +974,8 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
headBlock: headBlock,
attributes: attribute,
}
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")

View File

@@ -42,14 +42,8 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.roblock.Block().Slot()
if slots.WithinVotingWindow(s.genesisTime, slot) {
return nil
}
return s.computePayloadAttributes(cfg, fcuArgs)
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
}
func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
@@ -173,26 +167,19 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing
// state transition. This function is called on late blocks while still locked,
// before sending FCU to the engine.
func (s *Service) updateCachesPostBlockProcessing(cfg *postBlockProcessConfig) error {
// state transition. The caller of this function must not hold a lock in forkchoice store.
func (s *Service) updateCachesPostBlockProcessing(cfg *postBlockProcessConfig) {
slot := cfg.postState.Slot()
root := cfg.roblock.Root()
if err := transition.UpdateNextSlotCache(cfg.ctx, root[:], cfg.postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
log.WithError(err).Error("Could not update next slot state cache")
return
}
if !slots.IsEpochEnd(slot) {
return nil
return
}
return s.handleEpochBoundary(cfg.ctx, slot, cfg.postState, root[:])
}
// handleSecondFCUCall handles a second call to FCU when syncing a new block.
// This is useful when proposing in the next block and we want to defer the
// computation of the next slot shuffling.
func (s *Service) handleSecondFCUCall(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
if (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) && cfg.headRoot == cfg.roblock.Root() {
go s.sendFCUWithAttributes(cfg, fcuArgs)
if err := s.handleEpochBoundary(cfg.ctx, slot, cfg.postState, root[:]); err != nil {
log.WithError(err).Error("Could not handle epoch boundary")
}
}
@@ -202,20 +189,6 @@ func reportProcessingTime(startTime time.Time) {
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
}
// computePayloadAttributes modifies the passed FCU arguments to
// contain the right payload attributes with the tracked proposer. It gets
// called on blocks that arrive after the attestation voting window, or in a
// background routine after syncing early blocks.
func (s *Service) computePayloadAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if cfg.roblock.Root() == cfg.headRoot {
if err := s.updateCachesPostBlockProcessing(cfg); err != nil {
return err
}
}
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and the incoming block
// is in the correct time window.

View File

@@ -738,7 +738,9 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -788,7 +790,9 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -816,25 +820,9 @@ func TestOnBlock_NilBlock(t *testing.T) {
service, tr := minimalTestService(t)
signed := &consensusblocks.SignedBeaconBlock{}
roblock := consensusblocks.ROBlock{ReadOnlySignedBeaconBlock: signed}
service.cfg.ForkChoiceStore.Lock()
err := service.postBlockProcess(&postBlockProcessConfig{tr.ctx, roblock, [32]byte{}, nil, true})
require.Equal(t, true, IsInvalidBlock(err))
}
func TestOnBlock_InvalidSignature(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
blk, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
blk.Signature = []byte{'a'} // Mutate the signature.
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
_, err = service.validateStateTransition(ctx, preState, wsb)
service.cfg.ForkChoiceStore.Unlock()
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -866,7 +854,9 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -1339,7 +1329,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb1, r1)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1351,7 +1343,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb2, r2)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1363,7 +1357,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb3, r3)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1375,7 +1371,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb4, r4)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1400,197 +1398,6 @@ func Test_verifyBlkFinalizedSlot_invalidBlock(t *testing.T) {
require.Equal(t, true, IsInvalidBlock(err))
}
// See the description in #10777 and #10782 for the full setup
// We sync optimistically a chain of blocks. Block 17 is the last block in Epoch
// 2. Block 18 justifies block 12 (the first in Epoch 2) and Block 19 returns
// INVALID from FCU, with LVH block 17. No head is viable. We check
// that the node is optimistic and that we can actually import a block on top of
// 17 and recover.
func TestStore_NoViableHead_FCU(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.SlotsPerEpoch = 6
config.AltairForkEpoch = 1
config.BellatrixForkEpoch = 2
params.OverrideBeaconConfig(config)
mockEngine := &mockExecution.EngineClient{ErrNewPayload: execution.ErrAcceptedSyncingPayloadStatus, ErrForkchoiceUpdated: execution.ErrAcceptedSyncingPayloadStatus}
service, tr := minimalTestService(t, WithExecutionEngineCaller(mockEngine))
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
}
for i := 12; i < 18; i++ {
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(0), jc.Epoch)
// import a block that justifies the second epoch
driftGenesisTime(service, 18, 0)
validHeadState, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(validHeadState, keys, util.DefaultBlockGenConfig(), 18)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, firstInvalidRoot)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
sjc := validHeadState.CurrentJustifiedCheckpoint()
require.Equal(t, primitives.Epoch(0), sjc.Epoch)
lvh := b.Block.Body.ExecutionPayload.ParentHash
// check our head
require.Equal(t, firstInvalidRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
// import another block to find out that it was invalid
mockEngine = &mockExecution.EngineClient{ErrNewPayload: execution.ErrAcceptedSyncingPayloadStatus, ErrForkchoiceUpdated: execution.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: lvh}
service.cfg.ExecutionEngineCaller = mockEngine
driftGenesisTime(service, 19, 0)
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err = util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), 19)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head is the last invalid block imported, that the
// store's headroot is the previous head (since the invalid block did not
// finish importing), and that the node is optimistic
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
headRoot, err := service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, firstInvalidRoot, bytesutil.ToBytes32(headRoot))
optimistic, err := service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, optimistic)
// import another block based on the last valid head state
mockEngine = &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = mockEngine
driftGenesisTime(service, 20, 0)
b, err = util.GenerateFullBlockBellatrix(validHeadState, keys, &util.BlockGenConfig{}, 20)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
require.NoError(t, err)
// Check that the newly imported block is head, that it justified the right
// checkpoint, and that the node is no longer optimistic
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
require.Equal(t, jc.Epoch, sjc.Epoch)
require.Equal(t, jc.Root, bytesutil.ToBytes32(sjc.Root))
optimistic, err = service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, optimistic)
}
// See the description in #10777 and #10782 for the full setup
// We optimistically sync a chain of blocks. Block 17 is the last block in Epoch
// 2. Block 18 justifies block 12 (the first in Epoch 2) and Block 19 returns
@@ -1642,7 +1449,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 6; i < 12; i++ {
@@ -1662,8 +1471,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 12; i < 18; i++ {
@@ -1684,8 +1494,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// Check that we haven't justified the second epoch yet
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
@@ -1708,7 +1519,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, firstInvalidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1718,6 +1531,10 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
lvh := b.Block.Body.ExecutionPayload.ParentHash
// check our head
require.Equal(t, firstInvalidRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
isBlock18OptimisticAfterImport, err := service.IsOptimisticForRoot(ctx, firstInvalidRoot)
require.NoError(t, err)
require.Equal(t, true, isBlock18OptimisticAfterImport)
time.Sleep(20 * time.Millisecond) // wait for async forkchoice update to be processed
// import another block to find out that it was invalid
mockEngine = &mockExecution.EngineClient{ErrNewPayload: execution.ErrInvalidPayloadStatus, NewPayloadResp: lvh}
@@ -1768,7 +1585,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
// Check that the newly imported block is head, that it justified the right
// checkpoint, and that the node is no longer optimistic
@@ -1835,7 +1654,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 6; i < 12; i++ {
@@ -1856,8 +1677,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// import the merge block
@@ -1877,7 +1699,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1906,8 +1730,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, invalidRoots[i-13])
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// Check that we have justified the second epoch
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
@@ -1975,7 +1800,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
// Check that the head is still INVALID and the node is still optimistic
require.Equal(t, invalidHeadRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
optimistic, err = service.IsOptimistic(ctx)
@@ -2000,7 +1827,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
@@ -2028,7 +1857,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
@@ -2072,7 +1903,6 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
t.Log(i)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
@@ -2089,7 +1919,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 6; i < 12; i++ {
@@ -2109,8 +1941,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// import the merge block
@@ -2130,7 +1963,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -2161,7 +1996,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -2282,7 +2119,9 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
st, err = service.HeadState(ctx)
require.NoError(t, err)
@@ -2348,7 +2187,9 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
st, err = service.HeadState(ctx)
require.NoError(t, err)
@@ -2631,7 +2472,10 @@ func TestRollbackBlock(t *testing.T) {
require.NoError(t, err)
// Rollback block insertion into db and caches.
require.ErrorContains(t, fmt.Sprintf("could not insert block %d to fork choice store", roblock.Block().Slot()), service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.ErrorContains(t, fmt.Sprintf("could not insert block %d to fork choice store", roblock.Block().Slot()), err)
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
@@ -2732,7 +2576,9 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
b, err = util.GenerateFullBlock(postState, keys, util.DefaultBlockGenConfig(), 34)
require.NoError(t, err)
@@ -2766,7 +2612,10 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.NoError(t, postState.SetFinalizedCheckpoint(cj))
// Rollback block insertion into db and caches.
require.ErrorContains(t, "context canceled", service.postBlockProcess(&postBlockProcessConfig{cancCtx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{cancCtx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.ErrorContains(t, "context canceled", err)
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
@@ -3262,7 +3111,9 @@ func Test_postBlockProcess_EventSending(t *testing.T) {
}
// Execute postBlockProcess
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(cfg)
service.cfg.ForkChoiceStore.Unlock()
// Check error expectation
if tt.expectError {

View File

@@ -156,13 +156,15 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
}
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
go s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs)
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("Could not update forkchoice")
if err := s.saveHead(s.ctx, fcuArgs.headRoot, fcuArgs.headBlock, fcuArgs.headState); err != nil {
log.WithError(err).Error("Could not save head")
}
s.pruneAttsFromPool(s.ctx, fcuArgs.headState, fcuArgs.headBlock)
}
// This processes fork choice attestations from the pool to account for validator votes and fork choice.

View File

@@ -117,7 +117,9 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
@@ -177,7 +179,9 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)

View File

@@ -290,52 +290,3 @@ func TestProcessBlockHeader_OK(t *testing.T) {
}
assert.Equal(t, true, proto.Equal(nsh, expected), "Expected %v, received %v", expected, nsh)
}
func TestBlockSignatureSet_OK(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
Slashed: true,
}
}
state, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(10))
require.NoError(t, state.SetLatestBlockHeader(util.HydrateBeaconHeader(&ethpb.BeaconBlockHeader{
Slot: 9,
ProposerIndex: 0,
})))
latestBlockSignedRoot, err := state.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
block.Block.ProposerIndex = pID
block.Block.Body.RandaoReveal = bytesutil.PadTo([]byte{'A', 'B', 'C'}, 96)
block.Block.ParentRoot = latestBlockSignedRoot[:]
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
err = state.UpdateValidatorAtIndex(proposerIdx, validators[proposerIdx])
require.NoError(t, err)
set, err := blocks.BlockSignatureBatch(state, block.Block.ProposerIndex, block.Signature, block.Block.HashTreeRoot)
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Block signature set returned a set which was unable to be verified")
}

View File

@@ -122,24 +122,6 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
return nil
}
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.
func BlockSignatureBatch(beaconState state.ReadOnlyBeaconState,
proposerIndex primitives.ValidatorIndex,
sig []byte,
rootFunc func() ([32]byte, error)) (*bls.SignatureBatch, error) {
currentEpoch := slots.ToEpoch(beaconState.Slot())
domain, err := signing.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return nil, err
}
proposer, err := beaconState.ValidatorAtIndex(proposerIndex)
if err != nil {
return nil, err
}
proposerPubKey := proposer.PublicKey
return signing.BlockSignatureBatch(proposerPubKey, sig, domain, rootFunc)
}
// RandaoSignatureBatch retrieves the relevant randao specific signature batch object
// from a block and its corresponding state.
func RandaoSignatureBatch(

View File

@@ -278,12 +278,12 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if uint64(curEpoch) < e {
continue
}
bal, err := st.PendingBalanceToWithdraw(srcIdx)
hasBal, err := st.HasPendingBalanceToWithdraw(srcIdx)
if err != nil {
log.WithError(err).Error("Failed to fetch pending balance to withdraw")
continue
}
if bal > 0 {
if hasBal {
continue
}

View File

@@ -0,0 +1,42 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["payload.go"],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["payload_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -0,0 +1,336 @@
package gloas
import (
"bytes"
"context"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// ProcessExecutionPayload processes the signed execution payload envelope for the Gloas fork.
// Spec v1.6.1 (pseudocode):
// def process_execution_payload(state, signed_envelope, execution_engine, verify=True):
//
// envelope = signed_envelope.message
// payload = envelope.payload
//
// if verify: assert verify_execution_payload_envelope_signature(state, signed_envelope)
//
// prev_root = hash_tree_root(state)
// if state.latest_block_header.state_root == Root():
// state.latest_block_header.state_root = prev_root
//
// assert envelope.beacon_block_root == hash_tree_root(state.latest_block_header)
// assert envelope.slot == state.slot
//
// committed_bid = state.latest_execution_payload_bid
// assert envelope.builder_index == committed_bid.builder_index
// assert committed_bid.blob_kzg_commitments_root == hash_tree_root(envelope.blob_kzg_commitments)
// assert committed_bid.prev_randao == payload.prev_randao
//
// assert hash_tree_root(payload.withdrawals) == hash_tree_root(state.payload_expected_withdrawals)
//
// assert committed_bid.gas_limit == payload.gas_limit
// assert committed_bid.block_hash == payload.block_hash
// assert payload.parent_hash == state.latest_block_hash
// assert payload.timestamp == compute_time_at_slot(state, state.slot)
// assert (
// len(envelope.blob_kzg_commitments)
// <= get_blob_parameters(get_current_epoch(state)).max_blobs_per_block
// )
// versioned_hashes = [
// kzg_commitment_to_versioned_hash(commitment) for commitment in envelope.blob_kzg_commitments
// ]
// requests = envelope.execution_requests
// assert execution_engine.verify_and_notify_new_payload(
// NewPayloadRequest(
// execution_payload=payload,
// versioned_hashes=versioned_hashes,
// parent_beacon_block_root=state.latest_block_header.parent_root,
// execution_requests=requests,
// )
// )
//
// for op in requests.deposits: process_deposit_request(state, op)
// for op in requests.withdrawals: process_withdrawal_request(state, op)
// for op in requests.consolidations: process_consolidation_request(state, op)
//
// payment = state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH]
// amount = payment.withdrawal.amount
// if amount > 0:
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, amount)
// payment.withdrawal.withdrawable_epoch = Epoch(exit_queue_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
// state.builder_pending_withdrawals.append(payment.withdrawal)
// state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH] = BuilderPendingPayment()
//
// state.execution_payload_availability[state.slot % SLOTS_PER_HISTORICAL_ROOT] = 0b1
// state.latest_block_hash = payload.block_hash
//
// if verify: assert envelope.state_root == hash_tree_root(state)
func ProcessExecutionPayload(
ctx context.Context,
st state.BeaconState,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) error {
// Verify the signature on the signed execution payload envelope
if err := verifyExecutionPayloadEnvelopeSignature(st, signedEnvelope); err != nil {
return errors.Wrap(err, "signature verification failed")
}
envelope, err := signedEnvelope.Envelope()
if err != nil {
return errors.Wrap(err, "could not get envelope from signed envelope")
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload from envelope")
}
latestHeader := st.LatestBlockHeader()
if len(latestHeader.StateRoot) == 0 || bytes.Equal(latestHeader.StateRoot, make([]byte, 32)) {
previousStateRoot, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute state root")
}
latestHeader.StateRoot = previousStateRoot[:]
if err := st.SetLatestBlockHeader(latestHeader); err != nil {
return errors.Wrap(err, "could not set latest block header")
}
}
blockHeaderRoot, err := latestHeader.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute block header root")
}
beaconBlockRoot := envelope.BeaconBlockRoot()
if !bytes.Equal(beaconBlockRoot[:], blockHeaderRoot[:]) {
return errors.Errorf("envelope beacon block root does not match state latest block header root: envelope=%#x, header=%#x", beaconBlockRoot, blockHeaderRoot)
}
if envelope.Slot() != st.Slot() {
return errors.Errorf("envelope slot does not match state slot: envelope=%d, state=%d", envelope.Slot(), st.Slot())
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get latest execution payload bid")
}
if latestBid == nil {
return errors.New("latest execution payload bid is nil")
}
if envelope.BuilderIndex() != latestBid.BuilderIndex() {
return errors.Errorf("envelope builder index does not match committed bid builder index: envelope=%d, bid=%d", envelope.BuilderIndex(), latestBid.BuilderIndex())
}
envelopeBlobCommitments := envelope.BlobKzgCommitments()
envelopeBlobRoot, err := ssz.KzgCommitmentsRoot(envelopeBlobCommitments)
if err != nil {
return errors.Wrap(err, "could not compute envelope blob KZG commitments root")
}
committedBlobRoot := latestBid.BlobKzgCommitmentsRoot()
if !bytes.Equal(committedBlobRoot[:], envelopeBlobRoot[:]) {
return errors.Errorf("committed bid blob KZG commitments root does not match envelope: bid=%#x, envelope=%#x", committedBlobRoot, envelopeBlobRoot)
}
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals from payload")
}
cfg := params.BeaconConfig()
withdrawalsRoot, err := ssz.WithdrawalSliceRoot(withdrawals, cfg.MaxWithdrawalsPerPayload)
if err != nil {
return errors.Wrap(err, "could not compute withdrawals root")
}
latestWithdrawalsRoot, err := st.LatestWithdrawalsRoot()
if err != nil {
return errors.Wrap(err, "could not get latest withdrawals root")
}
if !bytes.Equal(withdrawalsRoot[:], latestWithdrawalsRoot[:]) {
return errors.Errorf("payload withdrawals root does not match state latest withdrawals root: payload=%#x, state=%#x", withdrawalsRoot, latestWithdrawalsRoot)
}
if latestBid.GasLimit() != payload.GasLimit() {
return errors.Errorf("committed bid gas limit does not match payload gas limit: bid=%d, payload=%d", latestBid.GasLimit(), payload.GasLimit())
}
latestBidPrevRandao := latestBid.PrevRandao()
if !bytes.Equal(payload.PrevRandao(), latestBidPrevRandao[:]) {
return errors.Errorf("payload prev randao does not match committed bid prev randao: payload=%#x, bid=%#x", payload.PrevRandao(), latestBidPrevRandao)
}
bidBlockHash := latestBid.BlockHash()
payloadBlockHash := payload.BlockHash()
if !bytes.Equal(bidBlockHash[:], payloadBlockHash) {
return errors.Errorf("committed bid block hash does not match payload block hash: bid=%#x, payload=%#x", bidBlockHash, payloadBlockHash)
}
latestBlockHash, err := st.LatestBlockHash()
if err != nil {
return errors.Wrap(err, "could not get latest block hash")
}
if !bytes.Equal(payload.ParentHash(), latestBlockHash[:]) {
return errors.Errorf("payload parent hash does not match state latest block hash: payload=%#x, state=%#x", payload.ParentHash(), latestBlockHash)
}
t, err := slots.StartTime(st.GenesisTime(), st.Slot())
if err != nil {
return errors.Wrap(err, "could not compute timestamp")
}
if payload.Timestamp() != uint64(t.Unix()) {
return errors.Errorf("payload timestamp does not match expected timestamp: payload=%d, expected=%d", payload.Timestamp(), uint64(t.Unix()))
}
maxBlobsPerBlock := cfg.MaxBlobsPerBlock(envelope.Slot())
if len(envelopeBlobCommitments) > maxBlobsPerBlock {
return errors.Errorf("too many blob KZG commitments: got=%d, max=%d", len(envelopeBlobCommitments), maxBlobsPerBlock)
}
if err := processExecutionRequests(ctx, st, envelope.ExecutionRequests()); err != nil {
return errors.Wrap(err, "could not process execution requests")
}
if err := queueBuilderPayment(ctx, st); err != nil {
return errors.Wrap(err, "could not queue builder payment")
}
if err := st.SetExecutionPayloadAvailability(st.Slot(), true); err != nil {
return errors.Wrap(err, "could not set execution payload availability")
}
if err := st.SetLatestBlockHash([32]byte(payload.BlockHash())); err != nil {
return errors.Wrap(err, "could not set latest block hash")
}
r, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get hash tree root")
}
if r != envelope.StateRoot() {
return fmt.Errorf("state root mismatch: expected %#x, got %#x", envelope.StateRoot(), r)
}
return nil
}
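// A minimal caller sketch (illustrative only, assuming a hypothetical
// onPayloadEnvelope hook that already holds the matching beacon state and a
// wrapped signed envelope); it simply delegates to ProcessExecutionPayload above.
func onPayloadEnvelope(ctx context.Context, st state.BeaconState, signed interfaces.ROSignedExecutionPayloadEnvelope) error {
if err := ProcessExecutionPayload(ctx, st, signed); err != nil {
return errors.Wrap(err, "could not process execution payload envelope")
}
return nil
}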
// processExecutionRequests processes deposits, withdrawals, and consolidations from execution requests.
// Spec v1.6.1 (pseudocode):
// for op in requests.deposits: process_deposit_request(state, op)
// for op in requests.withdrawals: process_withdrawal_request(state, op)
// for op in requests.consolidations: process_consolidation_request(state, op)
func processExecutionRequests(ctx context.Context, st state.BeaconState, requests *enginev1.ExecutionRequests) error {
var err error
st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return errors.Wrap(err, "could not process deposit requests")
}
st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process withdrawal requests")
}
err = electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations)
if err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
return nil
}
// queueBuilderPayment implements the builder payment queuing logic for Gloas.
// Spec v1.6.1 (pseudocode):
// payment = state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH]
// amount = payment.withdrawal.amount
// if amount > 0:
//
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, amount)
// payment.withdrawal.withdrawable_epoch = Epoch(exit_queue_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
// state.builder_pending_withdrawals.append(payment.withdrawal)
//
// state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH] = BuilderPendingPayment()
func queueBuilderPayment(ctx context.Context, st state.BeaconState) error {
payment, err := st.BuilderPendingPayment(st.Slot())
if err != nil {
return errors.Wrap(err, "could not get builder pending payment")
}
amount := payment.Withdrawal.Amount
if amount > 0 {
exitQueueEpoch, err := st.ExitEpochAndUpdateChurn(amount)
if err != nil {
return errors.Wrap(err, "could not compute exit epoch and update churn")
}
minValidatorWithdrawabilityDelay := params.BeaconConfig().MinValidatorWithdrawabilityDelay
payment.Withdrawal.WithdrawableEpoch = exitQueueEpoch + minValidatorWithdrawabilityDelay
if err := st.AppendBuilderPendingWithdrawal(payment.Withdrawal); err != nil {
return errors.Wrap(err, "could not append builder pending withdrawal")
}
}
emptyPayment := &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
if err := st.SetBuilderPendingPayment(st.Slot(), emptyPayment); err != nil {
return errors.Wrap(err, "could not set builder pending payment")
}
return nil
}
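// Worked example for the payment index in the spec pseudocode above (a sketch,
// assuming SLOTS_PER_EPOCH = 32): for state.slot = 70,
// index = SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH = 32 + 70%32 = 38,
// i.e. the payment for the current slot lives in the second half of
// builder_pending_payments, which has length 2 * SLOTS_PER_EPOCH.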
// verifyExecutionPayloadEnvelopeSignature verifies the BLS signature on a signed execution payload envelope.
// Spec v1.6.1 (pseudocode):
// if verify: assert verify_execution_payload_envelope_signature(state, signed_envelope)
func verifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope) error {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return fmt.Errorf("failed to get envelope: %w", err)
}
builderPubkey := st.PubkeyAtIndex(envelope.BuilderIndex())
publicKey, err := bls.PublicKeyFromBytes(builderPubkey[:])
if err != nil {
return fmt.Errorf("invalid builder public key: %w", err)
}
signatureBytes := signedEnvelope.Signature()
signature, err := bls.SignatureFromBytes(signatureBytes[:])
if err != nil {
return fmt.Errorf("invalid signature format: %w", err)
}
currentEpoch := slots.ToEpoch(envelope.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return fmt.Errorf("failed to compute signing domain: %w", err)
}
signingRoot, err := signedEnvelope.SigningRoot(domain)
if err != nil {
return fmt.Errorf("failed to compute signing root: %w", err)
}
if !signature.Verify(publicKey, signingRoot[:]) {
return fmt.Errorf("signature verification failed: %w", signing.ErrSigFailedToVerify)
}
return nil
}

View File

@@ -0,0 +1,276 @@
package gloas
import (
"bytes"
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
type payloadFixture struct {
state state.BeaconState
signed interfaces.ROSignedExecutionPayloadEnvelope
signedProto *ethpb.SignedExecutionPayloadEnvelope
envelope *ethpb.ExecutionPayloadEnvelope
payload *enginev1.ExecutionPayloadDeneb
slot primitives.Slot
}
func buildPayloadFixture(t *testing.T, mutate func(payload *enginev1.ExecutionPayloadDeneb, bid *ethpb.ExecutionPayloadBid, envelope *ethpb.ExecutionPayloadEnvelope)) payloadFixture {
t.Helper()
cfg := params.BeaconConfig()
slot := primitives.Slot(5)
builderIdx := primitives.ValidatorIndex(0)
sk, err := bls.RandKey()
require.NoError(t, err)
pk := sk.PublicKey().Marshal()
randao := bytes.Repeat([]byte{0xAA}, 32)
parentHash := bytes.Repeat([]byte{0xBB}, 32)
blockHash := bytes.Repeat([]byte{0xCC}, 32)
withdrawals := []*enginev1.Withdrawal{
{Index: 0, ValidatorIndex: 1, Address: bytes.Repeat([]byte{0x01}, 20), Amount: 0},
}
withdrawalsRoot, err := ssz.WithdrawalSliceRoot(withdrawals, cfg.MaxWithdrawalsPerPayload)
require.NoError(t, err)
blobCommitments := [][]byte{}
blobRoot, err := ssz.KzgCommitmentsRoot(blobCommitments)
require.NoError(t, err)
payload := &enginev1.ExecutionPayloadDeneb{
ParentHash: parentHash,
FeeRecipient: bytes.Repeat([]byte{0x01}, 20),
StateRoot: bytes.Repeat([]byte{0x02}, 32),
ReceiptsRoot: bytes.Repeat([]byte{0x03}, 32),
LogsBloom: bytes.Repeat([]byte{0x04}, 256),
PrevRandao: randao,
BlockNumber: 1,
GasLimit: 1,
GasUsed: 0,
Timestamp: 100,
ExtraData: []byte{},
BaseFeePerGas: bytes.Repeat([]byte{0x05}, 32),
BlockHash: blockHash,
Transactions: [][]byte{},
Withdrawals: withdrawals,
BlobGasUsed: 0,
ExcessBlobGas: 0,
}
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentHash,
ParentBlockRoot: bytes.Repeat([]byte{0xDD}, 32),
BlockHash: blockHash,
PrevRandao: randao,
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: blobRoot[:],
FeeRecipient: bytes.Repeat([]byte{0xEE}, 20),
}
header := &ethpb.BeaconBlockHeader{
Slot: slot,
ParentRoot: bytes.Repeat([]byte{0x11}, 32),
StateRoot: bytes.Repeat([]byte{0x22}, 32),
BodyRoot: bytes.Repeat([]byte{0x33}, 32),
}
headerRoot, err := header.HashTreeRoot()
require.NoError(t, err)
envelope := &ethpb.ExecutionPayloadEnvelope{
Slot: slot,
BuilderIndex: builderIdx,
BeaconBlockRoot: headerRoot[:],
Payload: payload,
BlobKzgCommitments: blobCommitments,
ExecutionRequests: &enginev1.ExecutionRequests{},
}
if mutate != nil {
mutate(payload, bid, envelope)
}
withdrawalsRoot, err = ssz.WithdrawalSliceRoot(payload.Withdrawals, cfg.MaxWithdrawalsPerPayload)
require.NoError(t, err)
blobRoot, err = ssz.KzgCommitmentsRoot(envelope.BlobKzgCommitments)
require.NoError(t, err)
genesisRoot := bytes.Repeat([]byte{0xAB}, 32)
blockRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
stateRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
for i := range blockRoots {
blockRoots[i] = bytes.Repeat([]byte{0x44}, 32)
stateRoots[i] = bytes.Repeat([]byte{0x55}, 32)
}
randaoMixes := make([][]byte, cfg.EpochsPerHistoricalVector)
for i := range randaoMixes {
randaoMixes[i] = randao
}
withdrawalCreds := make([]byte, 32)
withdrawalCreds[0] = cfg.ETH1AddressWithdrawalPrefixByte
eth1Data := &ethpb.Eth1Data{
DepositRoot: bytes.Repeat([]byte{0x66}, 32),
DepositCount: 0,
BlockHash: bytes.Repeat([]byte{0x77}, 32),
}
vals := []*ethpb.Validator{
{
PublicKey: pk,
WithdrawalCredentials: withdrawalCreds,
EffectiveBalance: cfg.MinActivationBalance + 1_000,
},
}
balances := []uint64{cfg.MinActivationBalance + 1_000}
payments := make([]*ethpb.BuilderPendingPayment, cfg.SlotsPerEpoch*2)
for i := range payments {
payments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
}
executionPayloadAvailability := make([]byte, cfg.SlotsPerHistoricalRoot/8)
genesisTime := uint64(0)
slotSeconds := cfg.SecondsPerSlot * uint64(slot)
if payload.Timestamp > slotSeconds {
genesisTime = payload.Timestamp - slotSeconds
}
stProto := &ethpb.BeaconStateGloas{
Slot: slot,
GenesisTime: genesisTime,
GenesisValidatorsRoot: genesisRoot,
Fork: &ethpb.Fork{
CurrentVersion: bytes.Repeat([]byte{0x01}, 4),
PreviousVersion: bytes.Repeat([]byte{0x01}, 4),
Epoch: 0,
},
LatestBlockHeader: header,
BlockRoots: blockRoots,
StateRoots: stateRoots,
RandaoMixes: randaoMixes,
Eth1Data: eth1Data,
Validators: vals,
Balances: balances,
LatestBlockHash: payload.ParentHash,
LatestWithdrawalsRoot: withdrawalsRoot[:],
LatestExecutionPayloadBid: bid,
BuilderPendingPayments: payments,
ExecutionPayloadAvailability: executionPayloadAvailability,
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{},
}
st, err := state_native.InitializeFromProtoGloas(stProto)
require.NoError(t, err)
expected := st.Copy()
ctx := context.Background()
require.NoError(t, processExecutionRequests(ctx, expected, envelope.ExecutionRequests))
require.NoError(t, queueBuilderPayment(ctx, expected))
require.NoError(t, expected.SetExecutionPayloadAvailability(slot, true))
var blockHashArr [32]byte
copy(blockHashArr[:], payload.BlockHash)
require.NoError(t, expected.SetLatestBlockHash(blockHashArr))
expectedRoot, err := expected.HashTreeRoot(ctx)
require.NoError(t, err)
envelope.StateRoot = expectedRoot[:]
epoch := slots.ToEpoch(slot)
domain, err := signing.Domain(st.Fork(), epoch, cfg.DomainBeaconBuilder, st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(envelope, domain)
require.NoError(t, err)
signature := sk.Sign(signingRoot[:]).Marshal()
signedProto := &ethpb.SignedExecutionPayloadEnvelope{
Message: envelope,
Signature: signature,
}
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
return payloadFixture{
state: st,
signed: signed,
signedProto: signedProto,
envelope: envelope,
payload: payload,
slot: slot,
}
}
func TestProcessExecutionPayload_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
require.NoError(t, ProcessExecutionPayload(t.Context(), fixture.state, fixture.signed))
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
var expectedHash [32]byte
copy(expectedHash[:], fixture.payload.BlockHash)
require.Equal(t, expectedHash, latestHash)
payment, err := fixture.state.BuilderPendingPayment(fixture.slot)
require.NoError(t, err)
require.NotNil(t, payment)
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestProcessExecutionPayload_PrevRandaoMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, func(_ *enginev1.ExecutionPayloadDeneb, bid *ethpb.ExecutionPayloadBid, _ *ethpb.ExecutionPayloadEnvelope) {
bid.PrevRandao = bytes.Repeat([]byte{0xFF}, 32)
})
err := ProcessExecutionPayload(t.Context(), fixture.state, fixture.signed)
require.ErrorContains(t, "prev randao", err)
}
func TestQueueBuilderPayment_ZeroAmountClearsSlot(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
require.NoError(t, queueBuilderPayment(t.Context(), fixture.state))
payment, err := fixture.state.BuilderPendingPayment(fixture.slot)
require.NoError(t, err)
require.NotNil(t, payment)
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestVerifyExecutionPayloadEnvelopeSignature_InvalidSig(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
signedProto := &ethpb.SignedExecutionPayloadEnvelope{
Message: fixture.signedProto.Message,
Signature: bytes.Repeat([]byte{0xFF}, 96),
}
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = verifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
require.ErrorContains(t, "invalid signature format", err)
}
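// A hypothetical additional mismatch case, sketched for illustration with the
// same fixture: the mutate callback bumps the committed bid's gas limit so it no
// longer matches the payload's, and ProcessExecutionPayload should reject the
// envelope on the gas limit check.
func TestProcessExecutionPayload_GasLimitMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, func(_ *enginev1.ExecutionPayloadDeneb, bid *ethpb.ExecutionPayloadBid, _ *ethpb.ExecutionPayloadEnvelope) {
bid.GasLimit = 2 // payload keeps GasLimit = 1, so the bid/payload comparison fails
})
err := ProcessExecutionPayload(t.Context(), fixture.state, fixture.signed)
require.ErrorContains(t, "gas limit", err)
}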

View File

@@ -5,10 +5,20 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
)
var dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
var (
dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
)
cellsAndProofsFromStructuredComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "cells_and_proofs_from_structured_computation_milliseconds",
Help: "Captures the time taken to compute cells and proofs from structured computation.",
Buckets: []float64{10, 20, 30, 40, 50, 100, 200},
},
)
)

View File

@@ -3,6 +3,7 @@ package peerdas
import (
"sort"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -296,32 +297,42 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
return nil, nil, ErrBlobsCellsProofsMismatch
}
cellsPerBlob := make([][]kzg.Cell, 0, blobCount)
proofsPerBlob := make([][]kzg.Proof, 0, blobCount)
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, blobCount)
proofsPerBlob := make([][]kzg.Proof, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
}
proofs = append(proofs, kzgProof)
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, proofs)
proofs := make([]kzg.Proof, 0, numberOfColumns)
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = proofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
}
return cellsPerBlob, proofsPerBlob, nil
@@ -329,40 +340,55 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
start := time.Now()
defer func() {
cellsAndProofsFromStructuredComputationTime.Observe(float64(time.Since(start).Milliseconds()))
}()
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, len(blobsAndProofs))
for i, blobAndProof := range blobsAndProofs {
if blobAndProof == nil {
return nil, nil, ErrNilBlobAndProof
}
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
}
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return nil, nil, errors.New("wrong copied KZG proof size - should never happen")
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
kzgProofs = append(kzgProofs, kzgProof)
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return errors.New("wrong KZG proof size - should never happen")
}
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, kzgProofs)
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = kzgProofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
}
return cellsPerBlob, proofsPerBlob, nil
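// Minimal standalone sketch of the fan-out pattern used in the two loops above
// (illustrative only): each goroutine writes into its own pre-sized slot, so no
// mutex is needed and wg.Wait returns the first error, if any.
package main
import (
"fmt"
"golang.org/x/sync/errgroup"
)
func squareAll(inputs []int) ([]int, error) {
var wg errgroup.Group
results := make([]int, len(inputs)) // pre-sized: goroutine i owns results[i]
for i, v := range inputs {
i, v := i, v // per-iteration copies (implicit on Go >= 1.22)
wg.Go(func() error {
results[i] = v * v
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, err
}
return results, nil
}
func main() {
out, _ := squareAll([]int{1, 2, 3})
fmt.Println(out) // prints [1 4 9]
}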

View File

@@ -182,12 +182,6 @@ func ProcessBlockNoVerifyAnySig(
return nil, nil, err
}
sig := signed.Signature()
bSet, err := b.BlockSignatureBatch(st, blk.ProposerIndex(), sig[:], blk.HashTreeRoot)
if err != nil {
tracing.AnnotateError(span, err)
return nil, nil, errors.Wrap(err, "could not retrieve block signature set")
}
randaoReveal := signed.Block().Body().RandaoReveal()
rSet, err := b.RandaoSignatureBatch(ctx, st, randaoReveal[:])
if err != nil {
@@ -201,7 +195,7 @@ func ProcessBlockNoVerifyAnySig(
// Merge beacon block, randao and attestations signatures into a set.
set := bls.NewSet()
set.Join(bSet).Join(rSet).Join(aSet)
set.Join(rSet).Join(aSet)
if blk.Version() >= version.Capella {
changes, err := signed.Block().Body().BLSToExecutionChanges()

View File

@@ -157,9 +157,8 @@ func TestProcessBlockNoVerify_SigSetContainsDescriptions(t *testing.T) {
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
require.NoError(t, err)
assert.Equal(t, len(set.Signatures), len(set.Descriptions), "Signatures and descriptions do not match up")
assert.Equal(t, "block signature", set.Descriptions[0])
assert.Equal(t, "randao signature", set.Descriptions[1])
assert.Equal(t, "attestation signature", set.Descriptions[2])
assert.Equal(t, "randao signature", set.Descriptions[0])
assert.Equal(t, "attestation signature", set.Descriptions[1])
}
func TestProcessOperationsNoVerifyAttsSigs_OK(t *testing.T) {

View File

@@ -67,9 +67,9 @@ func NewSyncNeeds(current CurrentSlotter, oldestSlotFlagPtr *primitives.Slot, bl
// Override spec minimum block retention with user-provided flag only if it is lower than the spec minimum.
sn.blockRetention = primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oldestSlotFlagPtr != nil {
oldestEpoch := slots.ToEpoch(*oldestSlotFlagPtr)
if oldestEpoch < sn.blockRetention {
if *oldestSlotFlagPtr <= syncEpochOffset(current(), sn.blockRetention) {
sn.validOldestSlotPtr = oldestSlotFlagPtr
} else {
log.WithField("backfill-oldest-slot", *oldestSlotFlagPtr).

View File

@@ -128,6 +128,9 @@ func TestSyncNeedsInitialize(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
minBlobEpochs := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
minColEpochs := params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest
denebSlot := slots.UnsafeEpochStart(params.BeaconConfig().DenebForkEpoch)
fuluSlot := slots.UnsafeEpochStart(params.BeaconConfig().FuluForkEpoch)
minSlots := slots.UnsafeEpochStart(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests))
currentSlot := primitives.Slot(10000)
currentFunc := func() primitives.Slot { return currentSlot }
@@ -141,6 +144,7 @@ func TestSyncNeedsInitialize(t *testing.T) {
expectedCol primitives.Epoch
name string
input SyncNeeds
current func() primitives.Slot
}{
{
name: "basic initialization with no flags",
@@ -174,13 +178,13 @@ func TestSyncNeedsInitialize(t *testing.T) {
{
name: "valid oldestSlotFlagPtr (earlier than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(10)
return &slot
}(),
oldestSlotFlagPtr: &denebSlot,
expectValidOldest: true,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
current: func() primitives.Slot {
return fuluSlot + minSlots
},
},
{
name: "invalid oldestSlotFlagPtr (later than spec minimum)",
@@ -210,6 +214,9 @@ func TestSyncNeedsInitialize(t *testing.T) {
{
name: "both blob retention flag and oldest slot set",
blobRetentionFlag: minBlobEpochs + 5,
current: func() primitives.Slot {
return fuluSlot + minSlots
},
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(100)
return &slot
@@ -232,16 +239,27 @@ func TestSyncNeedsInitialize(t *testing.T) {
expectedBlob: 5000,
expectedCol: 5000,
},
{
name: "regression for deneb start",
blobRetentionFlag: 8212500,
expectValidOldest: true,
oldestSlotFlagPtr: &denebSlot,
current: func() primitives.Slot {
return fuluSlot + minSlots
},
expectedBlob: 8212500,
expectedCol: 8212500,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result, err := NewSyncNeeds(currentFunc, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
if tc.current == nil {
tc.current = currentFunc
}
result, err := NewSyncNeeds(tc.current, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
require.NoError(t, err)
// Check that current, deneb, fulu are set correctly
require.Equal(t, currentSlot, result.current())
// Check retention calculations
require.Equal(t, tc.expectedBlob, result.blobRetention)
require.Equal(t, tc.expectedCol, result.colRetention)

View File

@@ -38,6 +38,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)

View File

@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/spf13/afero"
"golang.org/x/sync/errgroup"
)
const (
@@ -185,73 +186,162 @@ func (dcs *DataColumnStorage) WarmCache() {
highestStoredEpoch := primitives.Epoch(0)
// Walk the data column filesystem to warm up the cache.
if err := afero.Walk(dcs.fs, ".", func(path string, info os.FileInfo, fileErr error) (err error) {
if fileErr != nil {
return fileErr
}
// If not a leaf, skip.
if info.IsDir() {
return nil
}
// Extract metadata from the file path.
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
f, err := dcs.fs.Open(path)
if err != nil {
log.WithError(err).Error("Error encountered while opening data column filesystem file")
return nil
}
// Close the file.
defer func() {
// Overwrite the existing error only if it is nil, since the close error is less important.
closeErr := f.Close()
if closeErr != nil && err == nil {
err = closeErr
}
}()
// Read the metadata of the file.
metadata, err := dcs.metadata(f)
if err != nil {
log.WithError(err).Error("Error encountered while reading metadata from data column filesystem file")
return nil
}
// Check the indices.
indices := metadata.indices.all()
if len(indices) == 0 {
return nil
}
// Build the ident.
dataColumnsIdent := DataColumnsIdent{Root: fileMetadata.blockRoot, Epoch: fileMetadata.epoch, Indices: indices}
// Update the highest stored epoch.
highestStoredEpoch = max(highestStoredEpoch, fileMetadata.epoch)
// Set the ident in the cache.
if err := dcs.cache.set(dataColumnsIdent); err != nil {
log.WithError(err).Error("Error encountered while ensuring data column filesystem cache")
}
return nil
}); err != nil {
log.WithError(err).Error("Error encountered while walking data column filesystem.")
// List all period directories
periodFileInfos, err := afero.ReadDir(dcs.fs, ".")
if err != nil {
log.WithError(err).Error("Error reading top directory during warm cache")
return
}
// Prune the cache and the filesystem.
// Iterate through periods
for _, periodFileInfo := range periodFileInfos {
if !periodFileInfo.IsDir() {
continue
}
periodPath := periodFileInfo.Name()
// List all epoch directories in this period
epochFileInfos, err := afero.ReadDir(dcs.fs, periodPath)
if err != nil {
log.WithError(err).WithField("period", periodPath).Error("Error reading period directory during warm cache")
continue
}
// Iterate through epochs
for _, epochFileInfo := range epochFileInfos {
if !epochFileInfo.IsDir() {
continue
}
epochPath := path.Join(periodPath, epochFileInfo.Name())
// List all .sszs files in this epoch
files, err := listEpochFiles(dcs.fs, epochPath)
if err != nil {
log.WithError(err).WithField("epoch", epochPath).Error("Error listing epoch files during warm cache")
continue
}
if len(files) == 0 {
continue
}
// Process all files in this epoch in parallel
epochHighest, err := dcs.processEpochFiles(files)
if err != nil {
log.WithError(err).WithField("epoch", epochPath).Error("Error processing epoch files during warm cache")
}
highestStoredEpoch = max(highestStoredEpoch, epochHighest)
}
}
// Prune the cache and the filesystem
dcs.prune()
log.WithField("elapsed", time.Since(start)).Info("Data column filesystem cache warm-up complete")
totalElapsed := time.Since(start)
// Log summary
log.WithField("elapsed", totalElapsed).Info("Data column filesystem cache warm-up complete")
}
// listEpochFiles lists all .sszs files in an epoch directory.
func listEpochFiles(fs afero.Fs, epochPath string) ([]string, error) {
fileInfos, err := afero.ReadDir(fs, epochPath)
if err != nil {
return nil, errors.Wrap(err, "read epoch directory")
}
files := make([]string, 0, len(fileInfos))
for _, fileInfo := range fileInfos {
if fileInfo.IsDir() {
continue
}
fileName := fileInfo.Name()
if strings.HasSuffix(fileName, "."+dataColumnsFileExtension) {
files = append(files, path.Join(epochPath, fileName))
}
}
return files, nil
}
// processEpochFiles processes all .sszs files in an epoch directory in parallel.
func (dcs *DataColumnStorage) processEpochFiles(files []string) (primitives.Epoch, error) {
var (
eg errgroup.Group
mu sync.Mutex
)
highestEpoch := primitives.Epoch(0)
for _, filePath := range files {
eg.Go(func() error {
epoch, err := dcs.processFile(filePath)
if err != nil {
log.WithError(err).WithField("file", filePath).Error("Error processing file during warm cache")
return nil
}
mu.Lock()
defer mu.Unlock()
highestEpoch = max(highestEpoch, epoch)
return nil
})
}
if err := eg.Wait(); err != nil {
return highestEpoch, err
}
return highestEpoch, nil
}
// processFile processes a single .sszs file.
func (dcs *DataColumnStorage) processFile(filePath string) (primitives.Epoch, error) {
// Extract metadata from the file path
fileMetadata, err := extractFileMetadata(filePath)
if err != nil {
return 0, errors.Wrap(err, "extract file metadata")
}
// Open the file (each goroutine gets its own FD)
f, err := dcs.fs.Open(filePath)
if err != nil {
return 0, errors.Wrap(err, "open file")
}
defer func() {
if closeErr := f.Close(); closeErr != nil {
log.WithError(closeErr).WithField("file", filePath).Error("Error closing file during warm cache")
}
}()
// Read metadata
metadata, err := dcs.metadata(f)
if err != nil {
return 0, errors.Wrap(err, "read metadata")
}
// Extract indices
indices := metadata.indices.all()
if len(indices) == 0 {
return fileMetadata.epoch, nil // No indices, skip
}
// Build ident and set in cache (thread-safe)
dataColumnsIdent := DataColumnsIdent{
Root: fileMetadata.blockRoot,
Epoch: fileMetadata.epoch,
Indices: indices,
}
if err := dcs.cache.set(dataColumnsIdent); err != nil {
return 0, errors.Wrap(err, "cache set")
}
return fileMetadata.epoch, nil
}
// Summary returns the DataColumnStorageSummary.
@@ -515,6 +605,11 @@ func (dcs *DataColumnStorage) Clear() error {
// prune cleans the cache, the filesystem, and the mutexes.
func (dcs *DataColumnStorage) prune() {
startTime := time.Now()
defer func() {
dataColumnPruneLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
highestStoredEpoch := dcs.cache.HighestEpoch()
// Check if we need to prune.
@@ -622,6 +717,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
// Create the SSZ encoded data column sidecars.
var sszEncodedDataColumnSidecars []byte
// Initialize the count of saved SSZ encoded data column sidecars.
storedCount := uint8(0)
for {
dataColumnSidecars := pullChan(inputDataColumnSidecars)
if len(dataColumnSidecars) == 0 {
@@ -668,6 +766,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errors.Wrap(err, "set index")
}
// Increment the count of saved SSZ encoded data column sidecars.
storedCount++
// Append the SSZ encoded data column sidecar to the SSZ encoded data column sidecars.
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
@@ -692,9 +793,12 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}
@@ -808,10 +912,14 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}
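The rewritten warm-up walks period and epoch directories explicitly and fans the per-file metadata reads out with an errgroup, folding the per-file epochs into a single maximum under a mutex. A minimal, runnable sketch of that fan-out pattern; the SetLimit call and the warmOne helper are illustrative additions, not part of the change:

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// warmOne stands in for the per-file work (open, read metadata, extract the
// epoch); it is a hypothetical helper for this sketch only.
func warmOne(path string) (uint64, error) {
	return uint64(len(path)), nil
}

func main() {
	files := []string{"0/10/a.sszs", "0/10/b.sszs", "0/11/c.sszs"}

	var (
		eg errgroup.Group
		mu sync.Mutex
	)
	// Optional: bound concurrency so an epoch with many files does not open
	// an unbounded number of descriptors at once.
	eg.SetLimit(8)

	highest := uint64(0)
	for _, f := range files {
		eg.Go(func() error {
			epoch, err := warmOne(f)
			if err != nil {
				// Log-and-continue semantics: one bad file should not abort the warm-up.
				return nil
			}
			mu.Lock()
			defer mu.Unlock()
			highest = max(highest, epoch)
			return nil
		})
	}
	_ = eg.Wait() // no goroutine returns an error in this sketch
	fmt.Println("highest epoch seen:", highest)
}
```

Bounding the group is optional here; the diff's processEpochFiles launches one goroutine per file without a limit.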

View File

@@ -36,16 +36,15 @@ var (
})
// Data columns
dataColumnBuckets = []float64{3, 5, 7, 9, 11, 13}
dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_save_latency",
Help: "Latency of DataColumnSidecar storage save operations in milliseconds",
Buckets: dataColumnBuckets,
Buckets: []float64{10, 20, 30, 50, 100, 200, 500},
})
dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_get_latency",
Help: "Latency of DataColumnSidecar storage get operations in milliseconds",
Buckets: dataColumnBuckets,
Buckets: []float64{3, 5, 7, 9, 11, 13},
})
dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_pruned",
@@ -59,4 +58,16 @@ var (
Name: "data_column_disk_count",
Help: "Approximate number of data columns in storage",
})
dataColumnFileSyncLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_file_sync_latency",
Help: "Latency of sync operations when saving data columns in milliseconds",
})
dataColumnBatchStoreCount = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_batch_store_count",
Help: "Number of data columns stored in a batch",
})
dataColumnPruneLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_prune_latency",
Help: "Latency of data column prune operations in milliseconds",
})
)

View File

@@ -532,12 +532,19 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
defer span.End()
start := time.Now()
if !s.capabilityCache.has(GetBlobsV2) {
return nil, fmt.Errorf("%s is not supported", GetBlobsV2)
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
if len(result) != 0 {
getBlobsV2Latency.Observe(float64(time.Since(start).Milliseconds()))
}
return result, handleRPCError(err)
}

View File

@@ -27,6 +27,13 @@ var (
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
getBlobsV2Latency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_blobs_v2_latency_milliseconds",
Help: "Captures RPC latency for getBlobsV2 in milliseconds",
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
errParseCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "execution_parse_error_count",
Help: "The number of errors that occurred while parsing execution payload",

View File

@@ -204,6 +204,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
}
// Reset our light client finality update map.
@@ -223,5 +226,8 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
}
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
coreblocks "github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
corehelpers "github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filters"
@@ -957,6 +958,13 @@ func (s *Server) validateConsensus(ctx context.Context, b *eth.GenericSignedBeac
}
}
}
blockRoot, err := blk.Block().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash block")
}
if err := coreblocks.VerifyBlockSignatureUsingCurrentFork(parentState, blk, blockRoot); err != nil {
return errors.Wrap(err, "could not verify block signature")
}
_, err = transition.ExecuteStateTransition(ctx, parentState, blk)
if err != nil {
return errors.Wrap(err, "could not execute state transition")

View File

@@ -130,6 +130,10 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
@@ -238,22 +242,14 @@ func (s *Server) handleAttestationsElectra(
},
})
targetState, err := s.AttestationStateFetcher.AttestationTargetState(ctx, singleAtt.Data.Target)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get target state for attestation")
}
committee, err := corehelpers.BeaconCommitteeFromState(ctx, targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get committee for attestation")
}
att := singleAtt.ToAttestationElectra(committee)
wantedEpoch := slots.ToEpoch(att.Data.Slot)
// Broadcast first using CommitteeId directly (fast path)
// This matches gRPC behavior and avoids blocking on state fetching
wantedEpoch := slots.ToEpoch(singleAtt.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), singleAtt.CommitteeId, singleAtt.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
@@ -264,17 +260,35 @@ func (s *Server) handleAttestationsElectra(
}
continue
}
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
// Save to pool after broadcast (slow path - requires state fetching)
// Run in goroutine to avoid blocking the HTTP response
go func() {
for _, singleAtt := range validAttestations {
targetState, err := s.AttestationStateFetcher.AttestationTargetState(context.Background(), singleAtt.Data.Target)
if err != nil {
log.WithError(err).Error("Could not get target state for attestation")
continue
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save attestation")
committee, err := corehelpers.BeaconCommitteeFromState(context.Background(), targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
log.WithError(err).Error("Could not get committee for attestation")
continue
}
att := singleAtt.ToAttestationElectra(committee)
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
}
}
}
}()
if len(failedBroadcasts) > 0 {
log.WithFields(logrus.Fields{
@@ -470,6 +484,10 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitPoolSyncCommitteeSignatures")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
var req structs.SubmitSyncCommitteeSignaturesRequest
err := json.NewDecoder(r.Body).Decode(&req.Data)
switch {

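handleAttestationsElectra now broadcasts each attestation using the committee ID from the request and defers the state-dependent committee lookup and pool write to a background goroutine, so the HTTP response is not held up. A hedged sketch of that fast-path/slow-path split; broadcast and saveToPool are hypothetical stand-ins for the real broadcaster and attestation pool:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

type att struct{ committeeID uint64 }

// broadcast and saveToPool are hypothetical stand-ins for the p2p broadcaster
// and the attestation pool used by the real handler.
func broadcast(_ context.Context, a att) error {
	fmt.Println("broadcast committee", a.committeeID)
	return nil
}

func saveToPool(_ context.Context, a att) error {
	time.Sleep(10 * time.Millisecond) // simulates the target-state fetch and committee lookup
	fmt.Println("saved committee", a.committeeID)
	return nil
}

func handle(ctx context.Context, atts []att) {
	// Fast path: broadcast synchronously so peers see attestations quickly.
	valid := make([]att, 0, len(atts))
	for _, a := range atts {
		if err := broadcast(ctx, a); err != nil {
			continue // collect per-item failures in the real handler
		}
		valid = append(valid, a)
	}
	// Slow path: pool writes need state access, so they run off the request path.
	go func() {
		for _, a := range valid {
			if err := saveToPool(context.Background(), a); err != nil {
				fmt.Println("could not save attestation:", err)
			}
		}
	}()
}

func main() {
	handle(context.Background(), []att{{committeeID: 3}, {committeeID: 7}})
	time.Sleep(50 * time.Millisecond) // only so this demo sees the async saves complete
}
```

The trade-off, visible in the updated tests, is that the pool becomes eventually consistent with the response, which is why the tests sleep briefly before asserting on the unaggregated attestation count.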
View File

@@ -26,6 +26,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/voluntaryexits/mock"
p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -622,6 +623,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OperationNotifier: &blockchainmock.MockOperationNotifier{},
AttestationStateFetcher: chainService,
}
@@ -654,6 +657,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
@@ -673,6 +677,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("phase0 att post electra", func(t *testing.T) {
@@ -793,6 +798,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
@@ -812,6 +818,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
@@ -861,6 +868,27 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("syncing", func(t *testing.T) {
chainService := &blockchainmock.ChainService{}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing"))
})
}
func TestListVoluntaryExits(t *testing.T) {
@@ -1057,14 +1085,19 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
},
HeadFetcher: chainService,
},
}
@@ -1089,14 +1122,19 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
},
HeadFetcher: chainService,
},
}
@@ -1120,13 +1158,18 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
})
t.Run("invalid", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: &blockchainmock.ChainService{
State: st,
},
HeadFetcher: chainService,
},
}
@@ -1149,7 +1192,13 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, false, broadcaster.BroadcastCalled.Load())
})
t.Run("empty", func(t *testing.T) {
s := &Server{}
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var body bytes.Buffer
_, err := body.WriteString("[]")
@@ -1166,7 +1215,13 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("no body", func(t *testing.T) {
s := &Server{}
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
@@ -1179,6 +1234,26 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("syncing", func(t *testing.T) {
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var body bytes.Buffer
_, err := body.WriteString(singleSyncCommitteeMsg)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitSyncCommitteeSignatures(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing"))
})
}
func TestListBLSToExecutionChanges(t *testing.T) {

View File

@@ -40,6 +40,7 @@ func GetForkSchedule(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, &structs.GetForkScheduleResponse{
Data: data,
})
return
}
previous := schedule[0]
for _, entry := range schedule {

View File

@@ -52,24 +52,27 @@ func (vs *Server) ProposeAttestation(ctx context.Context, att *ethpb.Attestation
ctx, span := trace.StartSpan(ctx, "AttesterServer.ProposeAttestation")
defer span.End()
if vs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
resp, err := vs.proposeAtt(ctx, att, att.GetData().CommitteeIndex)
if err != nil {
return nil, err
}
if features.Get().EnableExperimentalAttestationPool {
if err = vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
go func() {
go func() {
if features.Get().EnableExperimentalAttestationPool {
if err := vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
attCopy := att.Copy()
if err := vs.AttPool.SaveUnaggregatedAttestation(attCopy); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
return
}
}()
}
}
}()
return resp, nil
}
@@ -82,6 +85,10 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
ctx, span := trace.StartSpan(ctx, "AttesterServer.ProposeAttestationElectra")
defer span.End()
if vs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
resp, err := vs.proposeAtt(ctx, singleAtt, singleAtt.GetCommitteeIndex())
if err != nil {
return nil, err
@@ -98,18 +105,17 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
singleAttCopy := singleAtt.Copy()
att := singleAttCopy.ToAttestationElectra(committee)
if features.Get().EnableExperimentalAttestationPool {
if err = vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
go func() {
go func() {
if features.Get().EnableExperimentalAttestationPool {
if err := vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
if err := vs.AttPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
return
}
}()
}
}
}()
return resp, nil
}

View File

@@ -38,6 +38,7 @@ func TestProposeAttestation(t *testing.T) {
OperationNotifier: (&mock.ChainService{}).OperationNotifier(),
TimeFetcher: chainService,
AttestationStateFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
head := util.NewBeaconBlock()
head.Block.Slot = 999
@@ -141,6 +142,7 @@ func TestProposeAttestation_IncorrectSignature(t *testing.T) {
P2P: &mockp2p.MockBroadcaster{},
AttPool: attestations.NewPool(),
OperationNotifier: (&mock.ChainService{}).OperationNotifier(),
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.HydrateAttestation(&ethpb.Attestation{})
@@ -149,6 +151,37 @@ func TestProposeAttestation_IncorrectSignature(t *testing.T) {
assert.ErrorContains(t, wanted, err)
}
func TestProposeAttestation_Syncing(t *testing.T) {
attesterServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
req := util.HydrateAttestation(&ethpb.Attestation{})
_, err := attesterServer.ProposeAttestation(t.Context(), req)
assert.ErrorContains(t, "Syncing to latest head", err)
s, ok := status.FromError(err)
require.Equal(t, true, ok)
assert.Equal(t, codes.Unavailable, s.Code())
}
func TestProposeAttestationElectra_Syncing(t *testing.T) {
attesterServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
req := &ethpb.SingleAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
},
}
_, err := attesterServer.ProposeAttestationElectra(t.Context(), req)
assert.ErrorContains(t, "Syncing to latest head", err)
s, ok := status.FromError(err)
require.Equal(t, true, ok)
assert.Equal(t, codes.Unavailable, s.Code())
}
func TestGetAttestationData_OK(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Slot = 3*params.BeaconConfig().SlotsPerEpoch + 1

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"error.go",
"interfaces.go",
"interfaces_gloas.go",
"prometheus.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/state",

View File

@@ -63,6 +63,7 @@ type ReadOnlyBeaconState interface {
ReadOnlyDeposits
ReadOnlyConsolidations
ReadOnlyProposerLookahead
ReadOnlyGloasFields
ToProtoUnsafe() any
ToProto() any
GenesisTime() time.Time
@@ -99,6 +100,7 @@ type WriteOnlyBeaconState interface {
WriteOnlyDeposits
WriteOnlyProposerLookahead
SetGenesisTime(val time.Time) error
WriteOnlyGloasFields
SetGenesisValidatorsRoot(val []byte) error
SetSlot(val primitives.Slot) error
SetFork(val *ethpb.Fork) error

View File

@@ -0,0 +1,25 @@
package state
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
type WriteOnlyGloasFields interface {
SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error
SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error
SetLatestBlockHash(hash [32]byte) error
SetExecutionPayloadAvailability(index primitives.Slot, available bool) error
SetBuilderPendingPayments(payments []*ethpb.BuilderPendingPayment) error
AppendBuilderPendingWithdrawal(withdrawal *ethpb.BuilderPendingWithdrawal) error
}
type ReadOnlyGloasFields interface {
LatestBlockHash() ([32]byte, error)
BuilderPendingPayment(slot primitives.Slot) (*ethpb.BuilderPendingPayment, error)
BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error)
BuilderPendingWithdrawals() ([]*ethpb.BuilderPendingWithdrawal, error)
LatestExecutionPayloadBid() (interfaces.ROExecutionPayloadBid, error)
LatestWithdrawalsRoot() ([32]byte, error)
}
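Splitting the Gloas fields into narrow read-only and write-only interfaces lets callers depend on only the view they need. As a sketch, a helper that merely inspects Gloas payload metadata can accept ReadOnlyGloasFields instead of the full beacon state; summarizeGloas below is illustrative, not part of the change:

```go
package gloasexample

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
)

// summarizeGloas is a hypothetical helper: it needs only the narrow
// ReadOnlyGloasFields view, not the full ReadOnlyBeaconState.
func summarizeGloas(st state.ReadOnlyGloasFields) (string, error) {
	hash, err := st.LatestBlockHash()
	if err != nil {
		return "", err
	}
	withdrawals, err := st.BuilderPendingWithdrawals()
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("latest block hash %#x, %d pending builder withdrawals", hash, len(withdrawals)), nil
}
```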

View File

@@ -14,6 +14,7 @@ go_library(
"getters_deposits.go",
"getters_eth1.go",
"getters_exit.go",
"getters_gloas.go",
"getters_misc.go",
"getters_participation.go",
"getters_payload_header.go",
@@ -36,6 +37,7 @@ go_library(
"setters_deposit_requests.go",
"setters_deposits.go",
"setters_eth1.go",
"setters_gloas.go",
"setters_misc.go",
"setters_participation.go",
"setters_payload_header.go",
@@ -97,6 +99,7 @@ go_test(
"getters_deposit_requests_test.go",
"getters_deposits_test.go",
"getters_exit_test.go",
"getters_gloas_test.go",
"getters_participation_test.go",
"getters_setters_lookahead_test.go",
"getters_test.go",
@@ -113,6 +116,7 @@ go_test(
"setters_deposit_requests_test.go",
"setters_deposits_test.go",
"setters_eth1_test.go",
"setters_gloas_test.go",
"setters_misc_test.go",
"setters_participation_test.go",
"setters_payload_header_test.go",

View File

@@ -0,0 +1,80 @@
package state_native
import (
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
// LatestWithdrawalsRoot returns the root of the latest withdrawals.
func (b *BeaconState) LatestWithdrawalsRoot() ([32]byte, error) {
b.lock.RLock()
defer b.lock.RUnlock()
if b.latestWithdrawalsRoot == nil {
return [32]byte{}, nil
}
return [32]byte(b.latestWithdrawalsRoot), nil
}
// LatestBlockHash returns the hash of the latest execution block.
func (b *BeaconState) LatestBlockHash() ([32]byte, error) {
b.lock.RLock()
defer b.lock.RUnlock()
if b.latestBlockHash == nil {
return [32]byte{}, nil
}
return [32]byte(b.latestBlockHash), nil
}
// BuilderPendingPayment returns a specific builder pending payment for the given slot.
func (b *BeaconState) BuilderPendingPayment(slot primitives.Slot) (*ethpb.BuilderPendingPayment, error) {
b.lock.RLock()
defer b.lock.RUnlock()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
paymentIndex := slotsPerEpoch + (slot % slotsPerEpoch)
return ethpb.CopyBuilderPendingPayment(b.builderPendingPayments[paymentIndex]), nil
}
// BuilderPendingPayments returns the builder pending payments.
func (b *BeaconState) BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error) {
b.lock.RLock()
defer b.lock.RUnlock()
if b.builderPendingPayments == nil {
return make([]*ethpb.BuilderPendingPayment, 0), nil
}
return ethpb.CopyBuilderPendingPaymentSlice(b.builderPendingPayments), nil
}
// BuilderPendingWithdrawals returns the builder pending withdrawals.
func (b *BeaconState) BuilderPendingWithdrawals() ([]*ethpb.BuilderPendingWithdrawal, error) {
b.lock.RLock()
defer b.lock.RUnlock()
if b.builderPendingWithdrawals == nil {
return make([]*ethpb.BuilderPendingWithdrawal, 0), nil
}
return ethpb.CopyBuilderPendingWithdrawalSlice(b.builderPendingWithdrawals), nil
}
// LatestExecutionPayloadBid returns the cached latest execution payload bid for Gloas.
func (b *BeaconState) LatestExecutionPayloadBid() (interfaces.ROExecutionPayloadBid, error) {
b.lock.RLock()
defer b.lock.RUnlock()
if b.latestExecutionPayloadBid == nil {
return nil, nil
}
return blocks.WrappedROExecutionPayloadBid(b.latestExecutionPayloadBid.Copy())
}
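BuilderPendingPayment maps a slot into a vector of length 2*SlotsPerEpoch via slotsPerEpoch + slot%slotsPerEpoch, i.e. always into the second half, which by assumption holds the current epoch's payments while the first half holds the previous epoch's. A small worked example of that index arithmetic:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value, for illustration

// paymentIndex mirrors the getter's arithmetic: payments addressed by the
// current epoch's slots land in the second half of the 2*slotsPerEpoch vector.
func paymentIndex(slot uint64) uint64 {
	return slotsPerEpoch + slot%slotsPerEpoch
}

func main() {
	for _, slot := range []uint64{0, 7, 31, 32, 33} {
		fmt.Printf("slot %d -> index %d\n", slot, paymentIndex(slot))
	}
	// Output: indices 32, 39, 63, 32, 33 — always in [32, 64).
}
```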

View File

@@ -0,0 +1,93 @@
package state_native_test
import (
"testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestLatestWithdrawalsRoot(t *testing.T) {
t.Run("nil root returns zero", func(t *testing.T) {
s, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{})
require.NoError(t, err)
root, err := s.LatestWithdrawalsRoot()
require.NoError(t, err)
assert.Equal(t, [32]byte{}, root)
})
t.Run("returns value", func(t *testing.T) {
var want [32]byte
copy(want[:], []byte("withdrawals-root"))
s, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestWithdrawalsRoot: want[:],
})
require.NoError(t, err)
root, err := s.LatestWithdrawalsRoot()
require.NoError(t, err)
assert.Equal(t, want, root)
})
}
func TestLatestBlockHash(t *testing.T) {
t.Run("nil hash returns zero", func(t *testing.T) {
s, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{})
require.NoError(t, err)
hash, err := s.LatestBlockHash()
require.NoError(t, err)
assert.Equal(t, [32]byte{}, hash)
})
t.Run("returns value", func(t *testing.T) {
var want [32]byte
copy(want[:], []byte("latest-block-hash"))
s, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: want[:],
})
require.NoError(t, err)
hash, err := s.LatestBlockHash()
require.NoError(t, err)
assert.Equal(t, want, hash)
})
}
func TestBuilderPendingPayment(t *testing.T) {
t.Run("returns payment for slot", func(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
slot := primitives.Slot(7)
paymentIndex := slotsPerEpoch + (slot % slotsPerEpoch)
payments := make([]*ethpb.BuilderPendingPayment, slotsPerEpoch*2)
payment := &ethpb.BuilderPendingPayment{
Weight: 11,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20},
Amount: 99,
BuilderIndex: 3,
WithdrawableEpoch: 5,
},
}
payments[paymentIndex] = payment
s, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
BuilderPendingPayments: payments,
})
require.NoError(t, err)
got, err := s.BuilderPendingPayment(slot)
require.NoError(t, err)
require.DeepEqual(t, payment, got)
got.Withdrawal.FeeRecipient[0] = 9
require.Equal(t, byte(1), payment.Withdrawal.FeeRecipient[0])
})
}

View File

@@ -0,0 +1,101 @@
package state_native
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error {
b.lock.Lock()
defer b.lock.Unlock()
parentBlockHash := h.ParentBlockHash()
parentBlockRoot := h.ParentBlockRoot()
blockHash := h.BlockHash()
randao := h.PrevRandao()
blobKzgCommitmentsRoot := h.BlobKzgCommitmentsRoot()
feeRecipient := h.FeeRecipient()
b.latestExecutionPayloadBid = &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: parentBlockRoot[:],
BlockHash: blockHash[:],
PrevRandao: randao[:],
GasLimit: h.GasLimit(),
BuilderIndex: h.BuilderIndex(),
Slot: h.Slot(),
Value: h.Value(),
ExecutionPayment: h.ExecutionPayment(),
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot[:],
FeeRecipient: feeRecipient[:],
}
b.markFieldAsDirty(types.LatestExecutionPayloadBid)
return nil
}
// SetBuilderPendingPayment sets a builder pending payment for the specified slot.
func (b *BeaconState) SetBuilderPendingPayment(slot primitives.Slot, payment *ethpb.BuilderPendingPayment) error {
b.lock.Lock()
defer b.lock.Unlock()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
paymentIndex := slotsPerEpoch + (slot % slotsPerEpoch)
b.builderPendingPayments[paymentIndex] = ethpb.CopyBuilderPendingPayment(payment)
b.markFieldAsDirty(types.BuilderPendingPayments)
return nil
}
// SetLatestBlockHash sets the latest execution block hash.
func (b *BeaconState) SetLatestBlockHash(hash [32]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.latestBlockHash = hash[:]
b.markFieldAsDirty(types.LatestBlockHash)
return nil
}
// SetExecutionPayloadAvailability sets the execution payload availability bit for a specific slot.
func (b *BeaconState) SetExecutionPayloadAvailability(index primitives.Slot, available bool) error {
b.lock.Lock()
defer b.lock.Unlock()
bitIndex := index % params.BeaconConfig().SlotsPerHistoricalRoot
byteIndex := bitIndex / 8
bitPosition := bitIndex % 8
// Set or clear the bit
if available {
b.executionPayloadAvailability[byteIndex] |= 1 << bitPosition
} else {
b.executionPayloadAvailability[byteIndex] &^= 1 << bitPosition
}
b.markFieldAsDirty(types.ExecutionPayloadAvailability)
return nil
}
// SetBuilderPendingPayments sets the entire builder pending payments array.
func (b *BeaconState) SetBuilderPendingPayments(payments []*ethpb.BuilderPendingPayment) error {
b.lock.Lock()
defer b.lock.Unlock()
b.builderPendingPayments = ethpb.CopyBuilderPendingPaymentSlice(payments)
b.markFieldAsDirty(types.BuilderPendingPayments)
return nil
}
// AppendBuilderPendingWithdrawal appends a builder pending withdrawal to the list.
func (b *BeaconState) AppendBuilderPendingWithdrawal(withdrawal *ethpb.BuilderPendingWithdrawal) error {
b.lock.Lock()
defer b.lock.Unlock()
b.builderPendingWithdrawals = append(b.builderPendingWithdrawals, ethpb.CopyBuilderPendingWithdrawal(withdrawal))
b.markFieldAsDirty(types.BuilderPendingWithdrawals)
return nil
}
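SetExecutionPayloadAvailability packs one availability bit per slot into a byte slice of length SlotsPerHistoricalRoot/8. The sketch below isolates that bit arithmetic so the byte/bit indexing can be checked by hand; ringSize is a stand-in for SlotsPerHistoricalRoot:

```go
package main

import "fmt"

// setAvailability mirrors the setter's bit bookkeeping: one bit per slot in a
// ring of ringSize bits, stored as ringSize/8 bytes.
func setAvailability(bits []byte, slot, ringSize uint64, available bool) {
	bitIndex := slot % ringSize
	byteIndex := bitIndex / 8
	bitPosition := bitIndex % 8
	if available {
		bits[byteIndex] |= 1 << bitPosition
	} else {
		bits[byteIndex] &^= 1 << bitPosition
	}
}

func main() {
	const ringSize = 64 // stand-in for SlotsPerHistoricalRoot
	bits := make([]byte, ringSize/8)

	setAvailability(bits, 10, ringSize, true) // slot 10 -> byte 1, bit 2
	fmt.Printf("after set:   %08b\n", bits[1])
	setAvailability(bits, 10, ringSize, false)
	fmt.Printf("after clear: %08b\n", bits[1])
}
```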

View File

@@ -0,0 +1,125 @@
package state_native
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestSetBuilderPendingPayment(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
slot := primitives.Slot(7)
paymentIndex := slotsPerEpoch + (slot % slotsPerEpoch)
payment := &ethpb.BuilderPendingPayment{
Weight: 11,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: []byte{1, 2, 3, 4, 5},
Amount: 99,
BuilderIndex: 3,
WithdrawableEpoch: 5,
},
}
state := &BeaconState{
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, slotsPerEpoch*2),
dirtyFields: make(map[types.FieldIndex]bool),
}
require.NoError(t, state.SetBuilderPendingPayment(slot, payment))
require.Equal(t, true, state.dirtyFields[types.BuilderPendingPayments])
require.DeepEqual(t, payment, state.builderPendingPayments[paymentIndex])
state.builderPendingPayments[paymentIndex].Withdrawal.FeeRecipient[0] = 9
require.Equal(t, byte(1), payment.Withdrawal.FeeRecipient[0])
}
func TestSetLatestBlockHash(t *testing.T) {
var hash [32]byte
copy(hash[:], []byte("latest-block-hash"))
state := &BeaconState{
dirtyFields: make(map[types.FieldIndex]bool),
}
require.NoError(t, state.SetLatestBlockHash(hash))
require.Equal(t, true, state.dirtyFields[types.LatestBlockHash])
require.DeepEqual(t, hash[:], state.latestBlockHash)
}
func TestSetExecutionPayloadAvailability(t *testing.T) {
state := &BeaconState{
executionPayloadAvailability: make([]byte, params.BeaconConfig().SlotsPerHistoricalRoot/8),
dirtyFields: make(map[types.FieldIndex]bool),
}
slot := primitives.Slot(10)
bitIndex := slot % params.BeaconConfig().SlotsPerHistoricalRoot
byteIndex := bitIndex / 8
bitPosition := bitIndex % 8
require.NoError(t, state.SetExecutionPayloadAvailability(slot, true))
require.Equal(t, true, state.dirtyFields[types.ExecutionPayloadAvailability])
require.Equal(t, byte(1<<bitPosition), state.executionPayloadAvailability[byteIndex]&(1<<bitPosition))
require.NoError(t, state.SetExecutionPayloadAvailability(slot, false))
require.Equal(t, byte(0), state.executionPayloadAvailability[byteIndex]&(1<<bitPosition))
}
func TestSetBuilderPendingPayments(t *testing.T) {
payments := []*ethpb.BuilderPendingPayment{
{
Weight: 1,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: []byte{1, 1, 1},
Amount: 5,
BuilderIndex: 2,
WithdrawableEpoch: 3,
},
},
{
Weight: 2,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: []byte{2, 2, 2},
Amount: 6,
BuilderIndex: 4,
WithdrawableEpoch: 7,
},
},
}
state := &BeaconState{
dirtyFields: make(map[types.FieldIndex]bool),
}
require.NoError(t, state.SetBuilderPendingPayments(payments))
require.Equal(t, true, state.dirtyFields[types.BuilderPendingPayments])
require.DeepEqual(t, payments, state.builderPendingPayments)
state.builderPendingPayments[0].Weight = 99
require.Equal(t, primitives.Gwei(1), payments[0].Weight)
}
func TestAppendBuilderPendingWithdrawal(t *testing.T) {
withdrawal := &ethpb.BuilderPendingWithdrawal{
FeeRecipient: []byte{9, 8, 7},
Amount: 77,
BuilderIndex: 12,
WithdrawableEpoch: 13,
}
state := &BeaconState{
dirtyFields: make(map[types.FieldIndex]bool),
}
require.NoError(t, state.AppendBuilderPendingWithdrawal(withdrawal))
require.Equal(t, true, state.dirtyFields[types.BuilderPendingWithdrawals])
require.Equal(t, 1, len(state.builderPendingWithdrawals))
require.DeepEqual(t, withdrawal, state.builderPendingWithdrawals[0])
state.builderPendingWithdrawals[0].FeeRecipient[0] = 1
require.Equal(t, byte(9), withdrawal.FeeRecipient[0])
}

View File

@@ -3,6 +3,7 @@ package sync
import (
"bytes"
"context"
"fmt"
"maps"
"slices"
"sync"
@@ -243,8 +244,10 @@ func requestDirectSidecarsFromPeers(
}
// Compute missing indices by root, excluding those already in storage.
var lastRoot [fieldparams.RootLength]byte
missingIndicesByRoot := make(map[[fieldparams.RootLength]byte]map[uint64]bool, len(incompleteRoots))
for root := range incompleteRoots {
lastRoot = root
storedIndices := storedIndicesByRoot[root]
missingIndices := make(map[uint64]bool, len(requestedIndices))
@@ -259,6 +262,7 @@ func requestDirectSidecarsFromPeers(
}
}
initialMissingRootCount := len(missingIndicesByRoot)
initialMissingCount := computeTotalCount(missingIndicesByRoot)
indicesByRootByPeer, err := computeIndicesByRootByPeer(params.P2P, slotByRoot, missingIndicesByRoot, connectedPeers)
@@ -301,11 +305,19 @@ func requestDirectSidecarsFromPeers(
}
}
log.WithFields(logrus.Fields{
"duration": time.Since(start),
"initialMissingCount": initialMissingCount,
"finalMissingCount": computeTotalCount(missingIndicesByRoot),
}).Debug("Requested direct data column sidecars from peers")
log := log.WithFields(logrus.Fields{
"duration": time.Since(start),
"initialMissingRootCount": initialMissingRootCount,
"initialMissingCount": initialMissingCount,
"finalMissingRootCount": len(missingIndicesByRoot),
"finalMissingCount": computeTotalCount(missingIndicesByRoot),
})
if initialMissingRootCount == 1 {
log = log.WithField("root", fmt.Sprintf("%#x", lastRoot))
}
log.Debug("Requested direct data column sidecars from peers")
return verifiedColumnsByRoot, nil
}

View File

@@ -204,6 +204,13 @@ var (
},
)
dataColumnsRecoveredFromELAttempts = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_attempts",
Help: "Count the number of data columns recovery attempts from the execution layer.",
},
)
dataColumnsRecoveredFromELTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_total",
@@ -242,6 +249,13 @@ var (
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)
dataColumnSidecarsObtainedViaELCount = promauto.NewSummary(
prometheus.SummaryOpts{
Name: "data_column_obtained_via_el_count",
Help: "Count the number of data column sidecars obtained via the execution layer.",
},
)
)
func (s *Service) updateMetrics() {

View File

@@ -3,9 +3,9 @@ package sync
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
@@ -21,13 +21,23 @@ import (
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time"
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var pendingAttsLimit = 32768
const pendingAttsLimit = 32768
// aggregatorIndexFilter defines how aggregator index should be handled in equality checks.
type aggregatorIndexFilter int
const (
// ignoreAggregatorIndex means aggregates differing only by aggregator index are considered equal.
ignoreAggregatorIndex aggregatorIndexFilter = iota
// includeAggregatorIndex means aggregator index must also match for aggregates to be considered equal.
includeAggregatorIndex
)
// This method processes pending attestations once a "known" block has arrived. After validation,
// the valid attestations are saved into the operation mem pool, and the invalid attestations are deleted
@@ -50,16 +60,7 @@ func (s *Service) processPendingAttsForBlock(ctx context.Context, bRoot [32]byte
attestations := s.blkRootToPendingAtts[bRoot]
s.pendingAttsLock.RUnlock()
if len(attestations) > 0 {
start := time.Now()
s.processAttestations(ctx, attestations)
duration := time.Since(start)
log.WithFields(logrus.Fields{
"blockRoot": hex.EncodeToString(bytesutil.Trunc(bRoot[:])),
"pendingAttsCount": len(attestations),
"duration": duration,
}).Debug("Verified and saved pending attestations to pool")
}
s.processAttestations(ctx, attestations)
randGen := rand.NewGenerator()
// Delete the missing block root key from pending attestation queue so a node will not request for the block again.
@@ -79,26 +80,71 @@ func (s *Service) processPendingAttsForBlock(ctx context.Context, bRoot [32]byte
return s.sendBatchRootRequest(ctx, pendingRoots, randGen)
}
// processAttestations processes a list of attestations.
// It assumes (for logging purposes only) that all attestations pertain to the same block.
func (s *Service) processAttestations(ctx context.Context, attestations []any) {
if len(attestations) == 0 {
return
}
firstAttestation := attestations[0]
var blockRoot []byte
switch v := firstAttestation.(type) {
case ethpb.Att:
blockRoot = v.GetData().BeaconBlockRoot
case ethpb.SignedAggregateAttAndProof:
blockRoot = v.AggregateAttestationAndProof().AggregateVal().GetData().BeaconBlockRoot
default:
log.Warnf("Unexpected attestation type %T, skipping processing", v)
return
}
validAggregates := make([]ethpb.SignedAggregateAttAndProof, 0, len(attestations))
startAggregate := time.Now()
atts := make([]ethpb.Att, 0, len(attestations))
aggregateAttAndProofCount := 0
for _, att := range attestations {
switch v := att.(type) {
case ethpb.Att:
atts = append(atts, v)
case ethpb.SignedAggregateAttAndProof:
s.processAggregate(ctx, v)
aggregateAttAndProofCount++
// Avoid processing multiple aggregates only differing by aggregator index.
if slices.ContainsFunc(validAggregates, func(other ethpb.SignedAggregateAttAndProof) bool {
return pendingAggregatesAreEqual(v, other, ignoreAggregatorIndex)
}) {
continue
}
if err := s.processAggregate(ctx, v); err != nil {
log.WithError(err).Debug("Pending aggregate attestation could not be processed")
continue
}
validAggregates = append(validAggregates, v)
default:
log.Warnf("Unexpected attestation type %T, skipping", v)
}
}
durationAggregateAttAndProof := time.Since(startAggregate)
startAtts := time.Now()
for _, bucket := range bucketAttestationsByData(atts) {
s.processAttestationBucket(ctx, bucket)
}
durationAtts := time.Since(startAtts)
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", blockRoot),
"totalCount": len(attestations),
"aggregateAttAndProofCount": aggregateAttAndProofCount,
"uniqueAggregateAttAndProofCount": len(validAggregates),
"attCount": len(atts),
"durationTotal": durationAggregateAttAndProof + durationAtts,
"durationAggregateAttAndProof": durationAggregateAttAndProof,
"durationAtts": durationAtts,
}).Debug("Verified and saved pending attestations to pool")
}
// attestationBucket groups attestations with the same AttestationData for batch processing.
@@ -303,21 +349,20 @@ func (s *Service) processVerifiedAttestation(
})
}
func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAggregateAttAndProof) {
func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAggregateAttAndProof) error {
res, err := s.validateAggregatedAtt(ctx, aggregate)
if err != nil {
log.WithError(err).Debug("Pending aggregated attestation failed validation")
return
return errors.Wrap(err, "validate aggregated att")
}
if res != pubsub.ValidationAccept || !s.validateBlockInAttestation(ctx, aggregate) {
log.Debug("Pending aggregated attestation failed validation")
return
return errors.New("Pending aggregated attestation failed validation")
}
att := aggregate.AggregateAttestationAndProof().AggregateVal()
if err := s.saveAttestation(att); err != nil {
log.WithError(err).Debug("Could not save aggregated attestation")
return
return errors.Wrap(err, "save attestation")
}
_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
@@ -325,6 +370,8 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
log.WithError(err).Debug("Could not broadcast aggregated attestation")
}
return nil
}
// This defines how pending aggregates are saved in the map. The key is the
@@ -336,7 +383,7 @@ func (s *Service) savePendingAggregate(agg ethpb.SignedAggregateAttAndProof) {
s.savePending(root, agg, func(other any) bool {
a, ok := other.(ethpb.SignedAggregateAttAndProof)
return ok && pendingAggregatesAreEqual(agg, a)
return ok && pendingAggregatesAreEqual(agg, a, includeAggregatorIndex)
})
}
@@ -391,13 +438,19 @@ func (s *Service) savePending(root [32]byte, pending any, isEqual func(other any
s.blkRootToPendingAtts[root] = append(s.blkRootToPendingAtts[root], pending)
}
func pendingAggregatesAreEqual(a, b ethpb.SignedAggregateAttAndProof) bool {
// pendingAggregatesAreEqual checks if two pending aggregate attestations are equal.
// The filter parameter controls whether aggregator index is considered in the equality check.
func pendingAggregatesAreEqual(a, b ethpb.SignedAggregateAttAndProof, filter aggregatorIndexFilter) bool {
if a.Version() != b.Version() {
return false
}
if a.AggregateAttestationAndProof().GetAggregatorIndex() != b.AggregateAttestationAndProof().GetAggregatorIndex() {
return false
if filter == includeAggregatorIndex {
if a.AggregateAttestationAndProof().GetAggregatorIndex() != b.AggregateAttestationAndProof().GetAggregatorIndex() {
return false
}
}
aAtt := a.AggregateAttestationAndProof().AggregateVal()
bAtt := b.AggregateAttestationAndProof().AggregateVal()
if aAtt.GetData().Slot != bAtt.GetData().Slot {

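processAttestations now skips pending aggregates that differ only by aggregator index before running the expensive validation, using slices.ContainsFunc against the already-accepted ones. A toy version of that de-duplication; the aggregate struct is a simplified stand-in for SignedAggregateAttAndProof:

```go
package main

import (
	"fmt"
	"slices"
)

// aggregate carries only the fields relevant to the de-duplication.
type aggregate struct {
	aggregatorIndex uint64
	slot            uint64
	committeeIndex  uint64
}

// equalIgnoringAggregator mirrors pendingAggregatesAreEqual with the
// ignoreAggregatorIndex filter: two aggregates are duplicates if they differ
// only by who aggregated them.
func equalIgnoringAggregator(a, b aggregate) bool {
	return a.slot == b.slot && a.committeeIndex == b.committeeIndex
}

func main() {
	pending := []aggregate{
		{aggregatorIndex: 1, slot: 10, committeeIndex: 3},
		{aggregatorIndex: 2, slot: 10, committeeIndex: 3}, // duplicate except for aggregator
		{aggregatorIndex: 2, slot: 11, committeeIndex: 3},
	}
	unique := make([]aggregate, 0, len(pending))
	for _, agg := range pending {
		if slices.ContainsFunc(unique, func(other aggregate) bool {
			return equalIgnoringAggregator(agg, other)
		}) {
			continue // an equivalent aggregate has already been processed
		}
		unique = append(unique, agg)
	}
	fmt.Println("processed", len(unique), "of", len(pending), "pending aggregates")
}
```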
View File

@@ -94,7 +94,7 @@ func TestProcessPendingAtts_NoBlockRequestBlock(t *testing.T) {
// Process block A (which exists and has no pending attestations)
// This should skip processing attestations for A and request blocks B and C
require.NoError(t, r.processPendingAttsForBlock(t.Context(), rootA))
require.LogsContain(t, hook, "Requesting block by root")
require.LogsContain(t, hook, "Requesting blocks by root")
}
func TestProcessPendingAtts_HasBlockSaveUnaggregatedAtt(t *testing.T) {
@@ -911,17 +911,17 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, true, pendingAggregatesAreEqual(a, b))
assert.Equal(t, true, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different version", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{Message: &ethpb.AggregateAttestationAndProof{AggregatorIndex: 1}}
b := &ethpb.SignedAggregateAttestationAndProofElectra{Message: &ethpb.AggregateAttestationAndProofElectra{AggregatorIndex: 1}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different aggregator index", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{Message: &ethpb.AggregateAttestationAndProof{AggregatorIndex: 1}}
b := &ethpb.SignedAggregateAttestationAndProof{Message: &ethpb.AggregateAttestationAndProof{AggregatorIndex: 2}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different slot", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
@@ -942,7 +942,7 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different committee index", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
@@ -963,7 +963,7 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different aggregation bits", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
@@ -984,7 +984,30 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1000},
}}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different aggregator index should be equal while ignoring aggregator index", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
Message: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 1,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 1,
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
b := &ethpb.SignedAggregateAttestationAndProof{
Message: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 2,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 1,
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, true, pendingAggregatesAreEqual(a, b, ignoreAggregatorIndex))
})
}

View File

@@ -2,7 +2,6 @@ package sync
import (
"context"
"encoding/hex"
"fmt"
"slices"
"sync"
@@ -44,11 +43,13 @@ func (s *Service) processPendingBlocksQueue() {
if !s.chainIsStarted() {
return
}
locker.Lock()
defer locker.Unlock()
if err := s.processPendingBlocks(s.ctx); err != nil {
log.WithError(err).Debug("Could not process pending blocks")
}
locker.Unlock()
})
}
@@ -73,8 +74,10 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
randGen := rand.NewGenerator()
var parentRoots [][32]byte
blkRoots := make([][32]byte, 0, len(sortedSlots)*maxBlocksPerSlot)
// Iterate through sorted slots.
for _, slot := range sortedSlots {
for i, slot := range sortedSlots {
// Skip processing if slot is in the future.
if slot > s.cfg.clock.CurrentSlot() {
continue
@@ -91,6 +94,9 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
// Process each block in the queue.
for _, b := range blocksInCache {
start := time.Now()
totalDuration := time.Duration(0)
if err := blocks.BeaconBlockIsNil(b); err != nil {
continue
}
@@ -147,19 +153,34 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
}
cancelFunction()
// Process pending attestations for this block.
if err := s.processPendingAttsForBlock(ctx, blkRoot); err != nil {
log.WithError(err).Debug("Failed to process pending attestations for block")
}
blkRoots = append(blkRoots, blkRoot)
// Remove the processed block from the queue.
if err := s.removeBlockFromQueue(b, blkRoot); err != nil {
return err
}
log.WithFields(logrus.Fields{"slot": slot, "blockRoot": hex.EncodeToString(bytesutil.Trunc(blkRoot[:]))}).Debug("Processed pending block and cleared it in cache")
duration := time.Since(start)
totalDuration += duration
log.WithFields(logrus.Fields{
"slotIndex": fmt.Sprintf("%d/%d", i+1, len(sortedSlots)),
"slot": slot,
"root": fmt.Sprintf("%#x", blkRoot),
"duration": duration,
"totalDuration": totalDuration,
}).Debug("Processed pending block and cleared it in cache")
}
span.End()
}
for _, blkRoot := range blkRoots {
// Process pending attestations for this block.
if err := s.processPendingAttsForBlock(ctx, blkRoot); err != nil {
log.WithError(err).Debug("Failed to process pending attestations for block")
}
}
return s.sendBatchRootRequest(ctx, parentRoots, randGen)
}
@@ -379,6 +400,19 @@ func (s *Service) sendBatchRootRequest(ctx context.Context, roots [][32]byte, ra
req = roots[:maxReqBlock]
}
if logrus.GetLevel() >= logrus.DebugLevel {
rootsStr := make([]string, 0, len(roots))
for _, req := range roots {
rootsStr = append(rootsStr, fmt.Sprintf("%#x", req))
}
log.WithFields(logrus.Fields{
"peer": pid,
"count": len(req),
"roots": rootsStr,
}).Debug("Requesting blocks by root")
}
// Send the request to the peer.
if err := s.sendBeaconBlocksRequest(ctx, &req, pid); err != nil {
tracing.AnnotateError(span, err)
@@ -438,8 +472,6 @@ func (s *Service) filterOutPendingAndSynced(roots [][fieldparams.RootLength]byte
roots = append(roots[:i], roots[i+1:]...)
continue
}
log.WithField("blockRoot", fmt.Sprintf("%#x", r)).Debug("Requesting block by root")
}
return roots
}

View File

@@ -189,12 +189,30 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
defer cancel()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
"proposerIndex": source.ProposerIndex(),
"type": source.Type(),
})
var constructedSidecarCount uint64
for iteration := uint64(0); ; /*no stop condition*/ iteration++ {
log = log.WithField("iteration", iteration)
// Exit early if all sidecars to sample have been seen.
if s.haveAllSidecarsBeenSeen(source.Slot(), source.ProposerIndex(), columnIndicesToSample) {
if iteration > 0 && constructedSidecarCount == 0 {
log.Debug("No data column sidecars constructed from the execution client")
}
return nil, nil
}
if iteration == 0 {
dataColumnsRecoveredFromELAttempts.Inc()
}
// Try to reconstruct data column sidecars from the execution client.
constructedSidecars, err := s.cfg.executionReconstructor.ConstructDataColumnSidecars(ctx, source)
if err != nil {
@@ -202,8 +220,8 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
}
// No sidecars are retrieved from the EL, retry later
sidecarCount := uint64(len(constructedSidecars))
if sidecarCount == 0 {
constructedSidecarCount = uint64(len(constructedSidecars))
if constructedSidecarCount == 0 {
if ctx.Err() != nil {
return nil, ctx.Err()
}
@@ -212,9 +230,11 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
continue
}
dataColumnsRecoveredFromELTotal.Inc()
// Boundary check.
if sidecarCount != fieldparams.NumberOfColumns {
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", sidecarCount, fieldparams.NumberOfColumns)
if constructedSidecarCount != fieldparams.NumberOfColumns {
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
}
unseenIndices, err := s.broadcastAndReceiveUnseenDataColumnSidecars(ctx, source.Slot(), source.ProposerIndex(), columnIndicesToSample, constructedSidecars)
@@ -222,19 +242,12 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
}
if len(unseenIndices) > 0 {
dataColumnsRecoveredFromELTotal.Inc()
log.WithFields(logrus.Fields{
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
"proposerIndex": source.ProposerIndex(),
"iteration": iteration,
"type": source.Type(),
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
}
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
return nil, nil
}

View File

@@ -3,11 +3,12 @@ package sync
import (
"context"
"fmt"
"math"
"slices"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -51,14 +52,12 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
// Decode the message, reject if it fails.
m, err := s.decodePubsubMessage(msg)
if err != nil {
log.WithError(err).Error("Failed to decode message")
return pubsub.ValidationReject, err
}
// Reject messages that are not of the expected type.
dcsc, ok := m.(*eth.DataColumnSidecar)
if !ok {
log.WithField("message", m).Error("Message is not of type *eth.DataColumnSidecar")
return pubsub.ValidationReject, errWrongMessage
}
@@ -194,19 +193,13 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
dataColumnSidecarArrivalGossipSummary.Observe(float64(sinceSlotStartTime.Milliseconds()))
dataColumnSidecarVerificationGossipHistogram.Observe(float64(validationTime.Milliseconds()))
peerGossipScore := s.cfg.p2p.Peers().Scorers().GossipScorer().Score(pid)
select {
case s.dataColumnLogCh <- dataColumnLogEntry{
Slot: roDataColumn.Slot(),
ColIdx: roDataColumn.Index,
PropIdx: roDataColumn.ProposerIndex(),
BlockRoot: roDataColumn.BlockRoot(),
ParentRoot: roDataColumn.ParentRoot(),
PeerSuffix: pid.String()[len(pid.String())-6:],
PeerGossipScore: peerGossipScore,
validationTime: validationTime,
sinceStartTime: sinceSlotStartTime,
slot: roDataColumn.Slot(),
index: roDataColumn.Index,
root: roDataColumn.BlockRoot(),
validationTime: validationTime,
sinceStartTime: sinceSlotStartTime,
}:
default:
log.WithField("slot", roDataColumn.Slot()).Warn("Failed to send data column log entry")
@@ -251,68 +244,69 @@ func computeCacheKey(slot primitives.Slot, proposerIndex primitives.ValidatorInd
}
type dataColumnLogEntry struct {
Slot primitives.Slot
ColIdx uint64
PropIdx primitives.ValidatorIndex
BlockRoot [32]byte
ParentRoot [32]byte
PeerSuffix string
PeerGossipScore float64
validationTime time.Duration
sinceStartTime time.Duration
slot primitives.Slot
index uint64
root [32]byte
validationTime time.Duration
sinceStartTime time.Duration
}
func (s *Service) processDataColumnLogs() {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
slotStats := make(map[primitives.Slot][fieldparams.NumberOfColumns]dataColumnLogEntry)
slotStats := make(map[[fieldparams.RootLength]byte][]dataColumnLogEntry)
for {
select {
case entry := <-s.dataColumnLogCh:
cols := slotStats[entry.Slot]
cols[entry.ColIdx] = entry
slotStats[entry.Slot] = cols
case col := <-s.dataColumnLogCh:
cols := slotStats[col.root]
cols = append(cols, col)
slotStats[col.root] = cols
case <-ticker.C:
for slot, columns := range slotStats {
var (
colIndices = make([]uint64, 0, fieldparams.NumberOfColumns)
peers = make([]string, 0, fieldparams.NumberOfColumns)
gossipScores = make([]float64, 0, fieldparams.NumberOfColumns)
validationTimes = make([]string, 0, fieldparams.NumberOfColumns)
sinceStartTimes = make([]string, 0, fieldparams.NumberOfColumns)
)
for root, columns := range slotStats {
indices := make([]uint64, 0, fieldparams.NumberOfColumns)
minValidationTime, maxValidationTime, sumValidationTime := time.Duration(0), time.Duration(0), time.Duration(0)
minSinceStartTime, maxSinceStartTime, sumSinceStartTime := time.Duration(0), time.Duration(0), time.Duration(0)
totalReceived := 0
for _, entry := range columns {
if entry.PeerSuffix == "" {
for _, column := range columns {
indices = append(indices, column.index)
sumValidationTime += column.validationTime
sumSinceStartTime += column.sinceStartTime
if totalReceived == 0 {
minValidationTime, maxValidationTime = column.validationTime, column.validationTime
minSinceStartTime, maxSinceStartTime = column.sinceStartTime, column.sinceStartTime
totalReceived++
continue
}
colIndices = append(colIndices, entry.ColIdx)
peers = append(peers, entry.PeerSuffix)
gossipScores = append(gossipScores, roundFloat(entry.PeerGossipScore, 2))
validationTimes = append(validationTimes, fmt.Sprintf("%.2fms", float64(entry.validationTime.Milliseconds())))
sinceStartTimes = append(sinceStartTimes, fmt.Sprintf("%.2fms", float64(entry.sinceStartTime.Milliseconds())))
minValidationTime, maxValidationTime = min(minValidationTime, column.validationTime), max(maxValidationTime, column.validationTime)
minSinceStartTime, maxSinceStartTime = min(minSinceStartTime, column.sinceStartTime), max(maxSinceStartTime, column.sinceStartTime)
totalReceived++
}
log.WithFields(logrus.Fields{
"slot": slot,
"receivedCount": totalReceived,
"columnIndices": colIndices,
"peers": peers,
"gossipScores": gossipScores,
"validationTimes": validationTimes,
"sinceStartTimes": sinceStartTimes,
}).Debug("Accepted data column sidecars summary")
if totalReceived > 0 {
slices.Sort(indices)
avgValidationTime := sumValidationTime / time.Duration(totalReceived)
avgSinceStartTime := sumSinceStartTime / time.Duration(totalReceived)
log.WithFields(logrus.Fields{
"slot": columns[0].slot,
"root": fmt.Sprintf("%#x", root),
"count": totalReceived,
"indices": helpers.PrettySlice(indices),
"validationTime": prettyMinMaxAverage(minValidationTime, maxValidationTime, avgValidationTime),
"sinceStartTime": prettyMinMaxAverage(minSinceStartTime, maxSinceStartTime, avgSinceStartTime),
}).Debug("Accepted data column sidecars summary")
}
}
slotStats = make(map[primitives.Slot][fieldparams.NumberOfColumns]dataColumnLogEntry)
slotStats = make(map[[fieldparams.RootLength]byte][]dataColumnLogEntry)
}
}
}
func roundFloat(f float64, decimals int) float64 {
mult := math.Pow(10, float64(decimals))
return math.Round(f*mult) / mult
func prettyMinMaxAverage(min, max, average time.Duration) string {
return fmt.Sprintf("[min: %v, avg: %v, max: %v]", min, average, max)
}

View File

@@ -54,11 +54,13 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
cfg.FuluForkEpoch = 6
cfg.ForkVersionSchedule[[4]byte{1, 0, 0, 0}] = 1
cfg.ForkVersionSchedule[[4]byte{2, 0, 0, 0}] = 2
cfg.ForkVersionSchedule[[4]byte{3, 0, 0, 0}] = 3
cfg.ForkVersionSchedule[[4]byte{4, 0, 0, 0}] = 4
cfg.ForkVersionSchedule[[4]byte{5, 0, 0, 0}] = 5
cfg.ForkVersionSchedule[[4]byte{6, 0, 0, 0}] = 6
params.OverrideBeaconConfig(cfg)
secondsPerSlot := int(params.BeaconConfig().SecondsPerSlot)
@@ -101,7 +103,10 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
}
for _, test := range tests {
for v := 1; v < 6; v++ {
for v := range version.All() {
if v == version.Phase0 {
continue
}
t.Run(test.name+"_"+version.String(v), func(t *testing.T) {
ctx := t.Context()
p := p2ptest.NewTestP2P(t)
@@ -180,11 +185,13 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
cfg.FuluForkEpoch = 6
cfg.ForkVersionSchedule[[4]byte{1, 0, 0, 0}] = 1
cfg.ForkVersionSchedule[[4]byte{2, 0, 0, 0}] = 2
cfg.ForkVersionSchedule[[4]byte{3, 0, 0, 0}] = 3
cfg.ForkVersionSchedule[[4]byte{4, 0, 0, 0}] = 4
cfg.ForkVersionSchedule[[4]byte{5, 0, 0, 0}] = 5
cfg.ForkVersionSchedule[[4]byte{6, 0, 0, 0}] = 6
params.OverrideBeaconConfig(cfg)
secondsPerSlot := int(params.BeaconConfig().SecondsPerSlot)
@@ -227,7 +234,10 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
}
for _, test := range tests {
for v := 1; v < 6; v++ {
for v := range version.All() {
if v == version.Phase0 {
continue
}
t.Run(test.name+"_"+version.String(v), func(t *testing.T) {
ctx := t.Context()
p := p2ptest.NewTestP2P(t)

View File

@@ -687,6 +687,12 @@ func sbrNotFound(t *testing.T, expectedRoot [32]byte) *mockStateByRooter {
}}
}
func sbrReturnsState(st state.BeaconState) *mockStateByRooter {
return &mockStateByRooter{sbr: func(_ context.Context, _ [32]byte) (state.BeaconState, error) {
return st, nil
}}
}
func sbrForValOverride(idx primitives.ValidatorIndex, val *ethpb.Validator) *mockStateByRooter {
return sbrForValOverrideWithT(nil, idx, val)
}

View File

@@ -11,12 +11,10 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/runtime/logging"
"github.com/OffchainLabs/prysm/v7/time/slots"
@@ -361,7 +359,7 @@ func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.
}
if !dv.fc.HasNode(parentRoot) {
return columnErrBuilder(errSidecarParentNotSeen)
return columnErrBuilder(errors.Wrapf(errSidecarParentNotSeen, "parent root: %#x", parentRoot))
}
}
@@ -484,88 +482,19 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e
defer dv.recordResult(RequireSidecarProposerExpected, &err)
type slotParentRoot struct {
slot primitives.Slot
parentRoot [fieldparams.RootLength]byte
}
targetRootBySlotParentRoot := make(map[slotParentRoot][fieldparams.RootLength]byte)
var targetRootFromCache = func(slot primitives.Slot, parentRoot [fieldparams.RootLength]byte) ([fieldparams.RootLength]byte, error) {
// Use cached values if available.
slotParentRoot := slotParentRoot{slot: slot, parentRoot: parentRoot}
if root, ok := targetRootBySlotParentRoot[slotParentRoot]; ok {
return root, nil
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(slot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Compute the target root for the epoch.
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
if err != nil {
return [fieldparams.RootLength]byte{}, columnErrBuilder(errors.Wrap(err, "target root from epoch"))
}
// Store the target root in the cache.
targetRootBySlotParentRoot[slotParentRoot] = targetRoot
return targetRoot, nil
}
for _, dataColumn := range dv.dataColumns {
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the target root for the data column.
targetRoot, err := targetRootFromCache(dataColumnSlot, parentRoot)
// Get the verifying state, it is guaranteed to have the correct proposer in the lookahead.
verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "target root"))
return columnErrBuilder(errors.Wrap(err, "verifying state"))
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Create a checkpoint for the target root.
checkpoint := &forkchoicetypes.Checkpoint{Root: targetRoot, Epoch: dataColumnEpoch}
// Try to extract the proposer index from the data column in the cache.
idx, cached := dv.pc.Proposer(checkpoint, dataColumnSlot)
if !cached {
parentRoot := dataColumn.ParentRoot()
// Ensure the expensive index computation is only performed once for
// concurrent requests for the same signature data.
idxAny, err, _ := dv.sg.Do(concatRootSlot(parentRoot, dataColumnSlot), func() (any, error) {
verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "verifying state"))
}
idx, err = helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "compute proposer"))
}
return idx, nil
})
if err != nil {
return err
}
var ok bool
if idx, ok = idxAny.(primitives.ValidatorIndex); !ok {
return columnErrBuilder(errors.New("type assertion to ValidatorIndex failed"))
}
// Use proposer lookahead directly
idx, err := helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "proposer from lookahead"))
}
if idx != dataColumn.ProposerIndex() {
@@ -626,7 +555,3 @@ func inclusionProofKey(c blocks.RODataColumn) ([32]byte, error) {
return sha256.Sum256(unhashedKey), nil
}
func concatRootSlot(root [fieldparams.RootLength]byte, slot primitives.Slot) string {
return string(root[:]) + fmt.Sprintf("%d", slot)
}

View File

@@ -2,7 +2,6 @@ package verification
import (
"reflect"
"sync"
"testing"
"time"
@@ -795,87 +794,90 @@ func TestDataColumnsSidecarProposerExpected(t *testing.T) {
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
ctx := t.Context()
testCases := []struct {
name string
stateByRooter StateByRooter
proposerCache proposerCache
columns []blocks.RODataColumn
error string
}{
{
name: "Cached, matches",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex()),
},
columns: columns,
},
{
name: "Cached, does not match",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex() + 1),
},
columns: columns,
error: errSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, state lookup failure",
stateByRooter: sbrNotFound(t, firstColumn.ParentRoot()),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
},
columns: columns,
error: "verifying state",
},
}
parentRoot := [fieldparams.RootLength]byte{}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: tc.stateByRooter,
pc: tc.proposerCache,
hsp: &mockHeadStateProvider{},
fc: &mockForkchoicer{
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
},
// Create a Fulu state to get the expected proposer from the lookahead.
fuluState, _ := util.DeterministicGenesisStateFulu(t, 32)
expectedProposer, err := fuluState.ProposerLookahead()
require.NoError(t, err)
expectedProposerIdx := primitives.ValidatorIndex(expectedProposer[columnSlot])
// Generate data columns with the expected proposer index.
matchingColumns := generateTestDataColumnsWithProposer(t, parentRoot, columnSlot, blobCount, expectedProposerIdx)
// Generate data columns with wrong proposer index.
wrongColumns := generateTestDataColumnsWithProposer(t, parentRoot, columnSlot, blobCount, expectedProposerIdx+1)
t.Run("Proposer matches", func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: sbrReturnsState(fuluState),
hsp: &mockHeadStateProvider{
headRoot: parentRoot[:],
headSlot: columnSlot, // Same epoch so HeadStateReadOnly is used
headStateReadOnly: fuluState,
},
}
fc: &mockForkchoicer{},
},
}
verifier := initializer.NewDataColumnsVerifier(tc.columns, GossipDataColumnSidecarRequirements)
var wg sync.WaitGroup
verifier := initializer.NewDataColumnsVerifier(matchingColumns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(ctx)
require.NoError(t, err)
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
require.NoError(t, verifier.results.result(RequireSidecarProposerExpected))
})
var err1, err2 error
wg.Go(func() {
err1 = verifier.SidecarProposerExpected(ctx)
})
wg.Go(func() {
err2 = verifier.SidecarProposerExpected(ctx)
})
wg.Wait()
t.Run("Proposer does not match", func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: sbrReturnsState(fuluState),
hsp: &mockHeadStateProvider{
headRoot: parentRoot[:],
headSlot: columnSlot, // Same epoch so HeadStateReadOnly is used
headStateReadOnly: fuluState,
},
fc: &mockForkchoicer{},
},
}
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
verifier := initializer.NewDataColumnsVerifier(wrongColumns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(ctx)
require.ErrorContains(t, errSidecarUnexpectedProposer.Error(), err)
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
})
if len(tc.error) > 0 {
require.ErrorContains(t, tc.error, err1)
require.ErrorContains(t, tc.error, err2)
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
return
}
t.Run("State lookup failure", func(t *testing.T) {
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
initializer := Initializer{
shared: &sharedResources{
sr: sbrNotFound(t, columns[0].ParentRoot()),
hsp: &mockHeadStateProvider{},
fc: &mockForkchoicer{},
},
}
require.NoError(t, err1)
require.NoError(t, err2)
require.NoError(t, verifier.results.result(RequireSidecarProposerExpected))
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(ctx)
require.ErrorContains(t, "verifying state", err)
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
})
}
err := verifier.SidecarProposerExpected(ctx)
require.NoError(t, err)
})
func generateTestDataColumnsWithProposer(t *testing.T, parent [fieldparams.RootLength]byte, slot primitives.Slot, blobCount int, proposer primitives.ValidatorIndex) []blocks.RODataColumn {
roBlock, roBlobs := util.GenerateTestDenebBlockWithSidecar(t, parent, slot, blobCount, util.WithProposer(proposer))
blobs := make([]kzg.Blob, 0, len(roBlobs))
for i := range roBlobs {
blobs = append(blobs, kzg.Blob(roBlobs[i].Blob))
}
cellsPerBlob, proofsPerBlob := util.GenerateCellsAndProofs(t, blobs)
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(roBlock))
require.NoError(t, err)
return roDataColumnSidecars
}
func TestColumnRequirementSatisfaction(t *testing.T) {
@@ -922,12 +924,3 @@ func TestColumnRequirementSatisfaction(t *testing.T) {
require.NoError(t, err)
}
func TestConcatRootSlot(t *testing.T) {
root := [fieldparams.RootLength]byte{1, 2, 3}
const slot = primitives.Slot(3210)
const expected = "\x01\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x003210"
actual := concatRootSlot(root, slot)
require.Equal(t, expected, actual)
}

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p.

View File

@@ -0,0 +1,3 @@
### Added
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices.
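A quick usage sketch of the new wrapper, assuming the `MarshalSSZ`/`UnmarshalSSZ` methods added later in this diff:

```go
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)

func main() {
	idx := primitives.BuilderIndex(7)
	enc, err := idx.MarshalSSZ() // 8-byte little-endian encoding
	if err != nil {
		panic(err)
	}
	var decoded primitives.BuilderIndex
	if err := decoded.UnmarshalSSZ(enc); err != nil {
		panic(err)
	}
	fmt.Println(decoded == idx, len(enc)) // true 8
}
```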

View File

@@ -0,0 +1,3 @@
### Changed
- The `/eth/v2/beacon/pool/attestations` and `/eth/v1/beacon/pool/sync_committees` endpoints now return a 503 error if the node is still syncing; the REST API also broadcasts immediately, mirroring the gRPC behavior.
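A minimal sketch of the guard pattern described here, using only the standard library; the interface and function names are placeholders, not the actual Prysm API:

```go
package api

import "net/http"

// syncChecker is a placeholder for the node's sync status provider.
type syncChecker interface {
	Syncing() bool
}

// rejectIfSyncing writes a 503 and reports false while the node is still syncing,
// so pool submission handlers can bail out before decoding the request body.
func rejectIfSyncing(sc syncChecker, w http.ResponseWriter) bool {
	if sc.Syncing() {
		http.Error(w, "beacon node is currently syncing", http.StatusServiceUnavailable)
		return false
	}
	return true
}
```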

View File

@@ -0,0 +1,2 @@
### Fixed
- Fix validation logic for `--backfill-oldest-slot`, which was rejecting slots newer than 1056767.

changelog/manu-agg.md
View File

@@ -0,0 +1,3 @@
### Changed
- Pending aggregates: When the pending queue contains multiple aggregated attestations that differ only by the aggregator index, process only one of them.
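A sketch of the dedup rule, with placeholder types rather than the real `pendingAggregatesAreEqual` signature exercised in the test diff above: two queued aggregates count as duplicates when everything except the aggregator index matches.

```go
package sync

import "bytes"

// pendingAggregate is a simplified stand-in for the real protobuf message.
type pendingAggregate struct {
	AggregatorIndex uint64
	Slot            uint64
	CommitteeIndex  uint64
	AggregationBits []byte
}

// equalIgnoringAggregator treats two pending aggregates as duplicates when only
// the aggregator index differs, so the queue keeps a single copy.
func equalIgnoringAggregator(a, b pendingAggregate) bool {
	return a.Slot == b.Slot &&
		a.CommitteeIndex == b.CommitteeIndex &&
		bytes.Equal(a.AggregationBits, b.AggregationBits)
}
```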

View File

@@ -0,0 +1,3 @@
### Changed
- Data column sidecars cache warmup: Process all sidecars for a given epoch in parallel.

changelog/manu-log.md
View File

@@ -0,0 +1,3 @@
### Changed
- Summarize the DEBUG log for data column sidecars received via gossip.

View File

@@ -0,0 +1,2 @@
### Changed
- `validateDataColumn`: Remove error logs.

View File

@@ -0,0 +1,7 @@
### Added
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs.
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer.
### Changed
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs.
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs.
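A minimal sketch of the fan-out pattern behind these entries, assuming a per-blob `compute` callback; the real helpers (`ComputeCellsAndProofsFromFlat`/`FromStructured`) take different types:

```go
package kzg

import (
	"errors"
	"sync"
)

// computeAll runs the per-blob computation on one goroutine per blob while
// preserving result ordering; per-blob errors are joined into a single error.
func computeAll[B, R any](blobs []B, compute func(B) (R, error)) ([]R, error) {
	results := make([]R, len(blobs))
	errs := make([]error, len(blobs))
	var wg sync.WaitGroup
	for i := range blobs {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i], errs[i] = compute(blobs[i])
		}(i)
	}
	wg.Wait()
	return results, errors.Join(errs...)
}
```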

View File

@@ -0,0 +1,2 @@
### Changed
- Use lookahead to validate data column sidecar proposer index.

View File

@@ -0,0 +1,3 @@
### Changed
- Notify the engine about forkchoice updates in the background.

View File

@@ -0,0 +1,2 @@
### Changed
- Use a separate context when updating the slot cache.
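A sketch of the detached-context idea, assuming the goal is to keep the cache update alive after the caller's request context is cancelled; the function name is illustrative, not the actual call site:

```go
package cache

import "context"

// updateSlotCacheDetached runs the update with a context that inherits values
// but not cancellation from the caller, so a cancelled request no longer
// aborts the cache write.
func updateSlotCacheDetached(ctx context.Context, update func(context.Context) error) error {
	detached := context.WithoutCancel(ctx)
	return update(detached)
}
```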

View File

@@ -0,0 +1,3 @@
### Fixed
- Do not process slots and copy states for next epoch proposers after Fulu.

View File

@@ -0,0 +1,2 @@
### Ignored
- Do not send FCU on block batches.

View File

@@ -0,0 +1,3 @@
### Changed
- Do not check block signature on state transition.

View File

@@ -0,0 +1,3 @@
### Added
- Use the head state to validate attestations for the previous epoch if the head is compatible with the target checkpoint.

View File

@@ -0,0 +1,3 @@
### Changed
- Extend `httperror` analyzer to more functions.

View File

@@ -0,0 +1,3 @@
### Fixed
- Avoid panic when the fork schedule is empty [#16175](https://github.com/OffchainLabs/prysm/pull/16175)

View File

@@ -0,0 +1,3 @@
### Changed
- Performance improvement in `ProcessConsolidationRequests`: Use the more performant `HasPendingBalanceToWithdraw` instead of `PendingBalanceToWithdraw`, since there is no need to compute the full total pending balance.
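A sketch of why the existence check is cheaper, using placeholder types: the full sum walks every pending withdrawal, while the boolean check can return at the first match.

```go
package statetransition // placeholder package name

type pendingWithdrawal struct {
	ValidatorIndex uint64
	Amount         uint64
}

// pendingBalanceToWithdraw sums every matching entry: O(n) even when the caller
// only needs to know whether anything is pending.
func pendingBalanceToWithdraw(ws []pendingWithdrawal, idx uint64) uint64 {
	var total uint64
	for _, w := range ws {
		if w.ValidatorIndex == idx {
			total += w.Amount
		}
	}
	return total
}

// hasPendingBalanceToWithdraw stops at the first matching entry with a
// non-zero amount, which is all the consolidation processing needs.
func hasPendingBalanceToWithdraw(ws []pendingWithdrawal, idx uint64) bool {
	for _, w := range ws {
		if w.ValidatorIndex == idx && w.Amount > 0 {
			return true
		}
	}
	return false
}
```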

View File

@@ -139,6 +139,7 @@ type BeaconChainConfig struct {
DomainApplicationMask [4]byte `yaml:"DOMAIN_APPLICATION_MASK" spec:"true"` // DomainApplicationMask defines the BLS signature domain for application mask.
DomainApplicationBuilder [4]byte `yaml:"DOMAIN_APPLICATION_BUILDER" spec:"true"` // DomainApplicationBuilder defines the BLS signature domain for application builder.
DomainBLSToExecutionChange [4]byte `yaml:"DOMAIN_BLS_TO_EXECUTION_CHANGE" spec:"true"` // DomainBLSToExecutionChange defines the BLS signature domain to change withdrawal addresses to ETH1 prefix
DomainBeaconBuilder [4]byte `yaml:"DOMAIN_BEACON_BUILDER" spec:"true"` // DomainBeaconBuilder defines the BLS signature domain for beacon block builder.
// Prysm constants.
GenesisValidatorsRoot [32]byte // GenesisValidatorsRoot is the root hash of the genesis validators.

View File

@@ -182,6 +182,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
DomainApplicationMask: bytesutil.Uint32ToBytes4(0x00000001),
DomainApplicationBuilder: bytesutil.Uint32ToBytes4(0x00000001),
DomainBLSToExecutionChange: bytesutil.Uint32ToBytes4(0x0A000000),
DomainBeaconBuilder: bytesutil.Uint32ToBytes4(0x1B000000),
// Prysm constants.
GenesisValidatorsRoot: [32]byte{75, 54, 61, 185, 78, 40, 97, 32, 215, 110, 185, 5, 52, 15, 221, 78, 84, 191, 233, 240, 107, 243, 63, 246, 207, 90, 210, 127, 81, 27, 254, 149},

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"execution.go",
"execution_payload_envelope.go",
"factory.go",
"get_payload.go",
"getters.go",
@@ -14,11 +15,13 @@ go_library(
"roblock.go",
"rodatacolumn.go",
"setters.go",
"signed_execution_bid.go",
"types.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/consensus-types/blocks",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -34,6 +37,7 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/validator-client:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_gohashtree//:go_default_library",
@@ -44,6 +48,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"execution_payload_envelope_test.go",
"execution_test.go",
"factory_test.go",
"getters_test.go",
@@ -57,6 +62,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -0,0 +1,156 @@
package blocks
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
field_params "github.com/OffchainLabs/prysm/v7/config/fieldparams"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/common"
"google.golang.org/protobuf/proto"
)
type signedExecutionPayloadEnvelope struct {
s *ethpb.SignedExecutionPayloadEnvelope
}
type executionPayloadEnvelope struct {
p *ethpb.ExecutionPayloadEnvelope
}
// WrappedROSignedExecutionPayloadEnvelope wraps a signed execution payload envelope proto in a read-only interface.
func WrappedROSignedExecutionPayloadEnvelope(s *ethpb.SignedExecutionPayloadEnvelope) (interfaces.ROSignedExecutionPayloadEnvelope, error) {
w := signedExecutionPayloadEnvelope{s: s}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// WrappedROExecutionPayloadEnvelope wraps an execution payload envelope proto in a read-only interface.
func WrappedROExecutionPayloadEnvelope(p *ethpb.ExecutionPayloadEnvelope) (interfaces.ROExecutionPayloadEnvelope, error) {
w := &executionPayloadEnvelope{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Envelope returns the execution payload envelope as a read-only interface.
func (s signedExecutionPayloadEnvelope) Envelope() (interfaces.ROExecutionPayloadEnvelope, error) {
return WrappedROExecutionPayloadEnvelope(s.s.Message)
}
// Signature returns the BLS signature as a 96-byte array.
func (s signedExecutionPayloadEnvelope) Signature() [field_params.BLSSignatureLength]byte {
return [field_params.BLSSignatureLength]byte(s.s.Signature)
}
// IsNil reports whether the signed envelope or its contents are invalid.
func (s signedExecutionPayloadEnvelope) IsNil() bool {
if s.s == nil {
return true
}
if len(s.s.Signature) != field_params.BLSSignatureLength {
return true
}
w := executionPayloadEnvelope{p: s.s.Message}
return w.IsNil()
}
// SigningRoot computes the signing root for the signed envelope with the provided domain.
func (s signedExecutionPayloadEnvelope) SigningRoot(domain []byte) (root [32]byte, err error) {
return signing.ComputeSigningRoot(s.s.Message, domain)
}
// Proto returns the underlying protobuf message.
func (s signedExecutionPayloadEnvelope) Proto() proto.Message {
return s.s
}
// IsNil reports whether the envelope or its required fields are invalid.
func (p *executionPayloadEnvelope) IsNil() bool {
if p.p == nil {
return true
}
if p.p.Payload == nil {
return true
}
if len(p.p.BeaconBlockRoot) != field_params.RootLength {
return true
}
if p.p.BlobKzgCommitments == nil {
return true
}
return false
}
// IsBlinded reports whether the envelope contains a blinded payload.
func (p *executionPayloadEnvelope) IsBlinded() bool {
return !p.IsNil() && p.p.Payload == nil
}
// Execution returns the execution payload as a read-only interface.
func (p *executionPayloadEnvelope) Execution() (interfaces.ExecutionData, error) {
if p.IsBlinded() {
return nil, consensus_types.ErrNilObjectWrapped
}
return WrappedExecutionPayloadDeneb(p.p.Payload)
}
// ExecutionRequests returns the execution requests attached to the envelope.
func (p *executionPayloadEnvelope) ExecutionRequests() *enginev1.ExecutionRequests {
return p.p.ExecutionRequests
}
// BuilderIndex returns the proposer/builder index for the envelope.
func (p *executionPayloadEnvelope) BuilderIndex() primitives.ValidatorIndex {
return p.p.BuilderIndex
}
// BeaconBlockRoot returns the beacon block root referenced by the envelope.
func (p *executionPayloadEnvelope) BeaconBlockRoot() [field_params.RootLength]byte {
return [field_params.RootLength]byte(p.p.BeaconBlockRoot)
}
// BlobKzgCommitments returns a copy of the envelope's KZG commitments.
func (p *executionPayloadEnvelope) BlobKzgCommitments() [][]byte {
commitments := make([][]byte, len(p.p.BlobKzgCommitments))
for i, commit := range p.p.BlobKzgCommitments {
commitments[i] = make([]byte, len(commit))
copy(commitments[i], commit)
}
return commitments
}
// VersionedHashes returns versioned hashes derived from the KZG commitments.
func (p *executionPayloadEnvelope) VersionedHashes() []common.Hash {
commitments := p.p.BlobKzgCommitments
versionedHashes := make([]common.Hash, len(commitments))
for i, commitment := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(commitment)
}
return versionedHashes
}
// BlobKzgCommitmentsRoot returns the SSZ root of the envelope's KZG commitments.
func (p *executionPayloadEnvelope) BlobKzgCommitmentsRoot() ([field_params.RootLength]byte, error) {
if p.IsNil() || p.p.BlobKzgCommitments == nil {
return [field_params.RootLength]byte{}, consensus_types.ErrNilObjectWrapped
}
return ssz.KzgCommitmentsRoot(p.p.BlobKzgCommitments)
}
// Slot returns the slot of the envelope.
func (p *executionPayloadEnvelope) Slot() primitives.Slot {
return p.p.Slot
}
// StateRoot returns the state root carried by the envelope.
func (p *executionPayloadEnvelope) StateRoot() [field_params.RootLength]byte {
return [field_params.RootLength]byte(p.p.StateRoot)
}

View File

@@ -0,0 +1,115 @@
package blocks_test
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func validExecutionPayloadEnvelope() *ethpb.ExecutionPayloadEnvelope {
payload := &enginev1.ExecutionPayloadDeneb{
ParentHash: bytes.Repeat([]byte{0x01}, 32),
FeeRecipient: bytes.Repeat([]byte{0x02}, 20),
StateRoot: bytes.Repeat([]byte{0x03}, 32),
ReceiptsRoot: bytes.Repeat([]byte{0x04}, 32),
LogsBloom: bytes.Repeat([]byte{0x05}, 256),
PrevRandao: bytes.Repeat([]byte{0x06}, 32),
BlockNumber: 1,
GasLimit: 2,
GasUsed: 3,
Timestamp: 4,
BaseFeePerGas: bytes.Repeat([]byte{0x07}, 32),
BlockHash: bytes.Repeat([]byte{0x08}, 32),
Transactions: [][]byte{},
Withdrawals: []*enginev1.Withdrawal{},
BlobGasUsed: 0,
ExcessBlobGas: 0,
}
return &ethpb.ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: &enginev1.ExecutionRequests{},
BuilderIndex: 10,
BeaconBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
Slot: 9,
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0x0C}, 48)},
StateRoot: bytes.Repeat([]byte{0xBB}, 32),
}
}
func TestWrappedROExecutionPayloadEnvelope(t *testing.T) {
t.Run("returns error on invalid beacon root length", func(t *testing.T) {
invalid := validExecutionPayloadEnvelope()
invalid.BeaconBlockRoot = []byte{0x01}
_, err := blocks.WrappedROExecutionPayloadEnvelope(invalid)
require.Equal(t, consensus_types.ErrNilObjectWrapped, err)
})
t.Run("wraps and exposes fields", func(t *testing.T) {
env := validExecutionPayloadEnvelope()
wrapped, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(10), wrapped.BuilderIndex())
require.Equal(t, primitives.Slot(9), wrapped.Slot())
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0xAA}, 32)), wrapped.BeaconBlockRoot())
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0xBB}, 32)), wrapped.StateRoot())
commitments := wrapped.BlobKzgCommitments()
assert.DeepEqual(t, env.BlobKzgCommitments, commitments)
versioned := wrapped.VersionedHashes()
require.Equal(t, 1, len(versioned))
reqs := wrapped.ExecutionRequests()
require.NotNil(t, reqs)
exec, err := wrapped.Execution()
require.NoError(t, err)
assert.DeepEqual(t, env.Payload.ParentHash, exec.ParentHash())
})
}
func TestWrappedROSignedExecutionPayloadEnvelope(t *testing.T) {
t.Run("returns error for invalid signature length", func(t *testing.T) {
signed := &ethpb.SignedExecutionPayloadEnvelope{
Message: validExecutionPayloadEnvelope(),
Signature: bytes.Repeat([]byte{0xAA}, 95),
}
_, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signed)
require.Equal(t, consensus_types.ErrNilObjectWrapped, err)
})
t.Run("wraps and provides envelope/signing data", func(t *testing.T) {
sig := bytes.Repeat([]byte{0xAB}, 96)
signed := &ethpb.SignedExecutionPayloadEnvelope{
Message: validExecutionPayloadEnvelope(),
Signature: sig,
}
wrapped, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signed)
require.NoError(t, err)
gotSig := wrapped.Signature()
assert.DeepEqual(t, [96]byte(sig), gotSig)
env, err := wrapped.Envelope()
require.NoError(t, err)
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0xAA}, 32)), env.BeaconBlockRoot())
domain := bytes.Repeat([]byte{0xCC}, 32)
wantRoot, err := signing.ComputeSigningRoot(signed.Message, domain)
require.NoError(t, err)
gotRoot, err := wrapped.SigningRoot(domain)
require.NoError(t, err)
require.Equal(t, wantRoot, gotRoot)
})
}

View File

@@ -669,6 +669,7 @@ func hydrateBeaconBlockBodyGloas() *eth.BeaconBlockBodyGloas {
ParentBlockHash: make([]byte, fieldparams.RootLength),
ParentBlockRoot: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, 20),
BlobKzgCommitmentsRoot: make([]byte, fieldparams.RootLength),
},

View File

@@ -0,0 +1,140 @@
package blocks
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
// signedExecutionPayloadBid wraps the protobuf signed execution payload bid
// and implements the ROSignedExecutionPayloadBid interface.
type signedExecutionPayloadBid struct {
bid *ethpb.SignedExecutionPayloadBid
}
// executionPayloadBidGloas wraps the protobuf execution payload bid for Gloas fork
// and implements the ROExecutionPayloadBidGloas interface.
type executionPayloadBidGloas struct {
payload *ethpb.ExecutionPayloadBid
}
// IsNil checks if the signed execution payload bid is nil or invalid.
func (s signedExecutionPayloadBid) IsNil() bool {
if s.bid == nil {
return true
}
if _, err := WrappedROExecutionPayloadBid(s.bid.Message); err != nil {
return true
}
return len(s.bid.Signature) != 96
}
// IsNil checks if the execution payload bid is nil or has invalid fields.
func (h executionPayloadBidGloas) IsNil() bool {
if h.payload == nil {
return true
}
if len(h.payload.ParentBlockHash) != 32 ||
len(h.payload.ParentBlockRoot) != 32 ||
len(h.payload.BlockHash) != 32 ||
len(h.payload.BlobKzgCommitmentsRoot) != 32 {
return true
}
return false
}
// WrappedROSignedExecutionPayloadBid creates a new read-only signed execution payload bid
// wrapper from the given protobuf message.
func WrappedROSignedExecutionPayloadBid(pb *ethpb.SignedExecutionPayloadBid) (interfaces.ROSignedExecutionPayloadBid, error) {
wrapper := signedExecutionPayloadBid{bid: pb}
if wrapper.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return wrapper, nil
}
// WrappedROExecutionPayloadBid creates a new read-only execution payload bid
// wrapper for the Gloas fork from the given protobuf message.
func WrappedROExecutionPayloadBid(pb *ethpb.ExecutionPayloadBid) (interfaces.ROExecutionPayloadBid, error) {
wrapper := executionPayloadBidGloas{payload: pb}
if wrapper.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return wrapper, nil
}
// Bid returns the execution payload bid as a read-only interface.
func (s signedExecutionPayloadBid) Bid() (interfaces.ROExecutionPayloadBid, error) {
return WrappedROExecutionPayloadBid(s.bid.Message)
}
// SigningRoot computes the signing root for the execution payload bid with the given domain.
func (s signedExecutionPayloadBid) SigningRoot(domain []byte) ([32]byte, error) {
return signing.ComputeSigningRoot(s.bid.Message, domain)
}
// Signature returns the BLS signature as a 96-byte array.
func (s signedExecutionPayloadBid) Signature() [96]byte {
return [96]byte(s.bid.Signature)
}
// ParentBlockHash returns the hash of the parent execution block.
func (h executionPayloadBidGloas) ParentBlockHash() [32]byte {
return [32]byte(h.payload.ParentBlockHash)
}
// ParentBlockRoot returns the beacon block root of the parent block.
func (h executionPayloadBidGloas) ParentBlockRoot() [32]byte {
return [32]byte(h.payload.ParentBlockRoot)
}
// PrevRandao returns the previous randao value for the execution block.
func (h executionPayloadBidGloas) PrevRandao() [32]byte {
return [32]byte(h.payload.PrevRandao)
}
// BlockHash returns the hash of the execution block.
func (h executionPayloadBidGloas) BlockHash() [32]byte {
return [32]byte(h.payload.BlockHash)
}
// GasLimit returns the gas limit for the execution block.
func (h executionPayloadBidGloas) GasLimit() uint64 {
return h.payload.GasLimit
}
// BuilderIndex returns the validator index of the builder who created this bid.
func (h executionPayloadBidGloas) BuilderIndex() primitives.ValidatorIndex {
return h.payload.BuilderIndex
}
// Slot returns the beacon chain slot for which this bid was created.
func (h executionPayloadBidGloas) Slot() primitives.Slot {
return h.payload.Slot
}
// Value returns the payment value offered by the builder in Gwei.
func (h executionPayloadBidGloas) Value() primitives.Gwei {
return primitives.Gwei(h.payload.Value)
}
// ExecutionPayment returns the execution payment offered by the builder.
func (h executionPayloadBidGloas) ExecutionPayment() primitives.Gwei {
return primitives.Gwei(h.payload.ExecutionPayment)
}
// BlobKzgCommitmentsRoot returns the root of the KZG commitments for blobs.
func (h executionPayloadBidGloas) BlobKzgCommitmentsRoot() [32]byte {
return [32]byte(h.payload.BlobKzgCommitmentsRoot)
}
// FeeRecipient returns the execution address that will receive the builder payment.
func (h executionPayloadBidGloas) FeeRecipient() [20]byte {
return [20]byte(h.payload.FeeRecipient)
}

View File

@@ -5,7 +5,9 @@ go_library(
srcs = [
"beacon_block.go",
"error.go",
"execution_payload_envelope.go",
"light_client.go",
"signed_execution_payload_bid.go",
"utils.go",
"validator.go",
],
@@ -18,6 +20,7 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/validator-client:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",

View File

@@ -0,0 +1,31 @@
package interfaces
import (
field_params "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
"github.com/ethereum/go-ethereum/common"
"google.golang.org/protobuf/proto"
)
type ROSignedExecutionPayloadEnvelope interface {
Envelope() (ROExecutionPayloadEnvelope, error)
Signature() [field_params.BLSSignatureLength]byte
SigningRoot([]byte) ([32]byte, error)
IsNil() bool
Proto() proto.Message
}
type ROExecutionPayloadEnvelope interface {
Execution() (ExecutionData, error)
ExecutionRequests() *enginev1.ExecutionRequests
BuilderIndex() primitives.ValidatorIndex
BeaconBlockRoot() [field_params.RootLength]byte
BlobKzgCommitments() [][]byte
BlobKzgCommitmentsRoot() ([field_params.RootLength]byte, error)
VersionedHashes() []common.Hash
Slot() primitives.Slot
StateRoot() [field_params.RootLength]byte
IsBlinded() bool
IsNil() bool
}

View File

@@ -0,0 +1,28 @@
package interfaces
import (
field_params "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
type ROSignedExecutionPayloadBid interface {
Bid() (ROExecutionPayloadBid, error)
Signature() [field_params.BLSSignatureLength]byte
SigningRoot([]byte) ([32]byte, error)
IsNil() bool
}
type ROExecutionPayloadBid interface {
ParentBlockHash() [32]byte
ParentBlockRoot() [32]byte
PrevRandao() [32]byte
BlockHash() [32]byte
GasLimit() uint64
BuilderIndex() primitives.ValidatorIndex
Slot() primitives.Slot
Value() primitives.Gwei
ExecutionPayment() primitives.Gwei
BlobKzgCommitmentsRoot() [32]byte
FeeRecipient() [20]byte
IsNil() bool
}

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"basis_points.go",
"builder_index.go",
"committee_bits_mainnet.go",
"committee_bits_minimal.go", # keep
"committee_index.go",
@@ -31,6 +32,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"builder_index_test.go",
"committee_index_test.go",
"domain_test.go",
"epoch_test.go",

View File

@@ -0,0 +1,54 @@
package primitives
import (
"fmt"
fssz "github.com/prysmaticlabs/fastssz"
)
var _ fssz.HashRoot = (BuilderIndex)(0)
var _ fssz.Marshaler = (*BuilderIndex)(nil)
var _ fssz.Unmarshaler = (*BuilderIndex)(nil)
// BuilderIndex is an index into the builder registry.
type BuilderIndex uint64
// HashTreeRoot returns the SSZ hash tree root of the index.
func (b BuilderIndex) HashTreeRoot() ([32]byte, error) {
return fssz.HashWithDefaultHasher(b)
}
// HashTreeRootWith appends the SSZ uint64 representation of the index to the given hasher.
func (b BuilderIndex) HashTreeRootWith(hh *fssz.Hasher) error {
hh.PutUint64(uint64(b))
return nil
}
// UnmarshalSSZ decodes the SSZ-encoded uint64 index from buf.
func (b *BuilderIndex) UnmarshalSSZ(buf []byte) error {
if len(buf) != b.SizeSSZ() {
return fmt.Errorf("expected buffer of length %d received %d", b.SizeSSZ(), len(buf))
}
*b = BuilderIndex(fssz.UnmarshallUint64(buf))
return nil
}
// MarshalSSZTo appends the SSZ-encoded index to dst and returns the extended buffer.
func (b *BuilderIndex) MarshalSSZTo(dst []byte) ([]byte, error) {
marshalled, err := b.MarshalSSZ()
if err != nil {
return nil, err
}
return append(dst, marshalled...), nil
}
// MarshalSSZ encodes the index as an SSZ uint64.
func (b *BuilderIndex) MarshalSSZ() ([]byte, error) {
marshalled := fssz.MarshalUint64([]byte{}, uint64(*b))
return marshalled, nil
}
// SizeSSZ returns the size of the SSZ-encoded index in bytes.
func (b *BuilderIndex) SizeSSZ() int {
return 8
}

View File

@@ -0,0 +1,86 @@
package primitives_test
import (
"encoding/binary"
"slices"
"strconv"
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestBuilderIndex_SSZRoundTripAndHashRoot(t *testing.T) {
cases := []uint64{
0,
1,
42,
(1 << 32) - 1,
1 << 32,
^uint64(0),
}
for _, v := range cases {
t.Run("v="+u64name(v), func(t *testing.T) {
t.Parallel()
val := primitives.BuilderIndex(v)
require.Equal(t, 8, (&val).SizeSSZ())
enc, err := (&val).MarshalSSZ()
require.NoError(t, err)
require.Equal(t, 8, len(enc))
wantEnc := make([]byte, 8)
binary.LittleEndian.PutUint64(wantEnc, v)
require.DeepEqual(t, wantEnc, enc)
dstPrefix := []byte("prefix:")
dst, err := (&val).MarshalSSZTo(slices.Clone(dstPrefix))
require.NoError(t, err)
wantDst := append(dstPrefix, wantEnc...)
require.DeepEqual(t, wantDst, dst)
var decoded primitives.BuilderIndex
require.NoError(t, (&decoded).UnmarshalSSZ(enc))
require.Equal(t, val, decoded)
root, err := val.HashTreeRoot()
require.NoError(t, err)
var wantRoot [32]byte
binary.LittleEndian.PutUint64(wantRoot[:8], v)
require.Equal(t, wantRoot, root)
})
}
}
func TestBuilderIndex_UnmarshalSSZRejectsWrongSize(t *testing.T) {
for _, size := range []int{7, 9} {
t.Run("size="+strconv.Itoa(size), func(t *testing.T) {
t.Parallel()
var v primitives.BuilderIndex
err := (&v).UnmarshalSSZ(make([]byte, size))
require.ErrorContains(t, "expected buffer of length 8", err)
})
}
}
func u64name(v uint64) string {
switch v {
case 0:
return "0"
case 1:
return "1"
case 42:
return "42"
case (1 << 32) - 1:
return "2^32-1"
case 1 << 32:
return "2^32"
case ^uint64(0):
return "max"
default:
return "custom"
}
}

View File

@@ -5,6 +5,7 @@ import (
"encoding/binary"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/crypto/hash/htr"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -141,3 +142,24 @@ func withdrawalRoot(w *enginev1.Withdrawal) ([32]byte, error) {
}
return w.HashTreeRoot()
}
// KzgCommitmentsRoot computes the HTR for a list of KZG commitments
func KzgCommitmentsRoot(commitments [][]byte) ([32]byte, error) {
roots := make([][32]byte, len(commitments))
for i, commitment := range commitments {
chunks, err := PackByChunk([][]byte{commitment})
if err != nil {
return [32]byte{}, err
}
roots[i] = htr.VectorizedSha256(chunks)[0]
}
commitmentsRoot, err := BitwiseMerkleize(roots, uint64(len(roots)), fieldparams.MaxBlobCommitmentsPerBlock)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute merkleization")
}
length := make([]byte, 32)
binary.LittleEndian.PutUint64(length[:8], uint64(len(roots)))
return MixInLength(commitmentsRoot, length), nil
}

View File

@@ -43,6 +43,7 @@ func WriteJson(w http.ResponseWriter, v any) {
func WriteSsz(w http.ResponseWriter, respSsz []byte) {
w.Header().Set("Content-Length", strconv.Itoa(len(respSsz)))
w.Header().Set("Content-Type", api.OctetStreamMediaType)
w.WriteHeader(http.StatusOK)
if _, err := io.Copy(w, io.NopCloser(bytes.NewReader(respSsz))); err != nil {
log.WithError(err).Error("Could not write response message")
}

View File

@@ -174,3 +174,55 @@ func copyBeaconBlockBodyGloas(body *BeaconBlockBodyGloas) *BeaconBlockBodyGloas
return copied
}
// CopyBuilderPendingPaymentSlice creates a deep copy of a builder pending payment slice.
func CopyBuilderPendingPaymentSlice(original []*BuilderPendingPayment) []*BuilderPendingPayment {
if original == nil {
return nil
}
copied := make([]*BuilderPendingPayment, len(original))
for i, payment := range original {
copied[i] = CopyBuilderPendingPayment(payment)
}
return copied
}
// CopyBuilderPendingPayment creates a deep copy of a builder pending payment.
func CopyBuilderPendingPayment(original *BuilderPendingPayment) *BuilderPendingPayment {
if original == nil {
return nil
}
return &BuilderPendingPayment{
Weight: original.Weight,
Withdrawal: CopyBuilderPendingWithdrawal(original.Withdrawal),
}
}
// CopyBuilderPendingWithdrawalSlice creates a deep copy of a builder pending withdrawal slice.
func CopyBuilderPendingWithdrawalSlice(original []*BuilderPendingWithdrawal) []*BuilderPendingWithdrawal {
if original == nil {
return nil
}
copied := make([]*BuilderPendingWithdrawal, len(original))
for i, withdrawal := range original {
copied[i] = CopyBuilderPendingWithdrawal(withdrawal)
}
return copied
}
// CopyBuilderPendingWithdrawal creates a deep copy of a builder pending withdrawal.
func CopyBuilderPendingWithdrawal(original *BuilderPendingWithdrawal) *BuilderPendingWithdrawal {
if original == nil {
return nil
}
return &BuilderPendingWithdrawal{
FeeRecipient: bytesutil.SafeCopyBytes(original.FeeRecipient),
Amount: original.Amount,
BuilderIndex: original.BuilderIndex,
WithdrawableEpoch: original.WithdrawableEpoch,
}
}

View File

@@ -1246,3 +1246,99 @@ func genPayloadAttestations(num int) []*v1alpha1.PayloadAttestation {
}
return pas
}
func TestCopyBuilderPendingWithdrawal(t *testing.T) {
t.Run("nil returns nil", func(t *testing.T) {
if got := v1alpha1.CopyBuilderPendingWithdrawal(nil); got != nil {
t.Fatalf("CopyBuilderPendingWithdrawal(nil) = %v, want nil", got)
}
})
t.Run("deep copy", func(t *testing.T) {
original := &v1alpha1.BuilderPendingWithdrawal{
FeeRecipient: []byte{0x01, 0x02},
Amount: primitives.Gwei(10),
BuilderIndex: primitives.ValidatorIndex(3),
WithdrawableEpoch: primitives.Epoch(4),
}
copied := v1alpha1.CopyBuilderPendingWithdrawal(original)
if copied == original {
t.Fatalf("expected new pointer for withdrawal copy")
}
if !reflect.DeepEqual(copied, original) {
t.Fatalf("CopyBuilderPendingWithdrawal() = %v, want %v", copied, original)
}
original.FeeRecipient[0] = 0xFF
original.Amount = primitives.Gwei(20)
original.BuilderIndex = primitives.ValidatorIndex(9)
original.WithdrawableEpoch = primitives.Epoch(10)
if copied.FeeRecipient[0] == original.FeeRecipient[0] {
t.Fatalf("fee recipient was not deep copied")
}
if copied.Amount != primitives.Gwei(10) {
t.Fatalf("amount mutated on copy: %d", copied.Amount)
}
if copied.BuilderIndex != primitives.ValidatorIndex(3) {
t.Fatalf("builder index mutated on copy: %d", copied.BuilderIndex)
}
if copied.WithdrawableEpoch != primitives.Epoch(4) {
t.Fatalf("withdrawable epoch mutated on copy: %d", copied.WithdrawableEpoch)
}
})
}
func TestCopyBuilderPendingPayment(t *testing.T) {
t.Run("nil returns nil", func(t *testing.T) {
if got := v1alpha1.CopyBuilderPendingPayment(nil); got != nil {
t.Fatalf("CopyBuilderPendingPayment(nil) = %v, want nil", got)
}
})
t.Run("deep copy", func(t *testing.T) {
original := &v1alpha1.BuilderPendingPayment{
Weight: primitives.Gwei(5),
Withdrawal: &v1alpha1.BuilderPendingWithdrawal{
FeeRecipient: []byte{0x0A, 0x0B},
Amount: primitives.Gwei(50),
BuilderIndex: primitives.ValidatorIndex(7),
WithdrawableEpoch: primitives.Epoch(8),
},
}
copied := v1alpha1.CopyBuilderPendingPayment(original)
if copied == original {
t.Fatalf("expected new pointer for payment copy")
}
if copied.Withdrawal == original.Withdrawal {
t.Fatalf("expected nested withdrawal to be deep copied")
}
if !reflect.DeepEqual(copied, original) {
t.Fatalf("CopyBuilderPendingPayment() = %v, want %v", copied, original)
}
original.Weight = primitives.Gwei(10)
original.Withdrawal.FeeRecipient[0] = 0xFF
original.Withdrawal.Amount = primitives.Gwei(75)
original.Withdrawal.BuilderIndex = primitives.ValidatorIndex(9)
original.Withdrawal.WithdrawableEpoch = primitives.Epoch(11)
if copied.Weight != primitives.Gwei(5) {
t.Fatalf("weight mutated on copy: %d", copied.Weight)
}
if copied.Withdrawal.FeeRecipient[0] == original.Withdrawal.FeeRecipient[0] {
t.Fatalf("withdrawal fee recipient was not deep copied")
}
if copied.Withdrawal.Amount != primitives.Gwei(50) {
t.Fatalf("withdrawal amount mutated on copy: %d", copied.Withdrawal.Amount)
}
if copied.Withdrawal.BuilderIndex != primitives.ValidatorIndex(7) {
t.Fatalf("withdrawal builder index mutated on copy: %d", copied.Withdrawal.BuilderIndex)
}
if copied.Withdrawal.WithdrawableEpoch != primitives.Epoch(8) {
t.Fatalf("withdrawal epoch mutated on copy: %d", copied.Withdrawal.WithdrawableEpoch)
}
})
}

View File

@@ -13,11 +13,13 @@ func (header *ExecutionPayloadBid) Copy() *ExecutionPayloadBid {
ParentBlockHash: bytesutil.SafeCopyBytes(header.ParentBlockHash),
ParentBlockRoot: bytesutil.SafeCopyBytes(header.ParentBlockRoot),
BlockHash: bytesutil.SafeCopyBytes(header.BlockHash),
PrevRandao: bytesutil.SafeCopyBytes(header.PrevRandao),
FeeRecipient: bytesutil.SafeCopyBytes(header.FeeRecipient),
GasLimit: header.GasLimit,
BuilderIndex: header.BuilderIndex,
Slot: header.Slot,
Value: header.Value,
ExecutionPayment: header.ExecutionPayment,
BlobKzgCommitmentsRoot: bytesutil.SafeCopyBytes(header.BlobKzgCommitmentsRoot),
}
}

File diff suppressed because it is too large.

View File

@@ -26,30 +26,37 @@ option go_package = "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1;eth";
// parent_block_hash: Hash32
// parent_block_root: Root
// block_hash: Hash32
// prev_randao: Bytes32
// fee_recipient: ExecutionAddress
// gas_limit: uint64
// builder_index: ValidatorIndex
// slot: Slot
// value: Gwei
// execution_payment: Gwei
// blob_kzg_commitments_root: Root
message ExecutionPayloadBid {
bytes parent_block_hash = 1 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes parent_block_root = 2 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes block_hash = 3 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes fee_recipient = 4 [ (ethereum.eth.ext.ssz_size) = "20" ];
uint64 gas_limit = 5;
uint64 builder_index = 6 [ (ethereum.eth.ext.cast_type) =
bytes prev_randao = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes fee_recipient = 5 [ (ethereum.eth.ext.ssz_size) = "20" ];
uint64 gas_limit = 6;
uint64 builder_index = 7 [ (ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/"
"consensus-types/primitives.ValidatorIndex" ];
uint64 slot = 7 [
uint64 slot = 8 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Slot"
];
uint64 value = 8 [
uint64 value = 9 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
];
bytes blob_kzg_commitments_root = 9 [ (ethereum.eth.ext.ssz_size) = "32" ];
uint64 execution_payment = 10 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
];
bytes blob_kzg_commitments_root = 11 [ (ethereum.eth.ext.ssz_size) = "32" ];
}
// SignedExecutionPayloadBid wraps an execution payload bid with a signature.
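
The regenerated SSZ below follows directly from this renumbering: prev_randao (a 32-byte value, field 4) and execution_payment (an 8-byte Gwei, field 10) add 40 bytes to the bid's fixed size, and that delta propagates to every container embedding the bid. A quick arithmetic sketch, illustrative only and not part of the diff:

```go
// Rough sanity check of the regenerated SSZ sizes in this diff.
// The two new fields add 32 + 8 = 40 fixed bytes to the bid.
package main

import "fmt"

func main() {
	const (
		oldBid       = 180
		prevRandao   = 32 // Bytes32
		execPayment  = 8  // uint64 cast to Gwei
		newBid       = oldBid + prevRandao + execPayment // 220
		signature    = 96
		newSignedBid = newBid + signature // 316
	)
	// Bid, signed bid, block body fixed part, state fixed part.
	fmt.Println(newBid, newSignedBid, 664+40, 2741821+40) // 220 316 704 2741861
}
```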

@@ -37,26 +37,36 @@ func (e *ExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err error) {
}
dst = append(dst, e.BlockHash...)
// Field (3) 'FeeRecipient'
// Field (3) 'PrevRandao'
if size := len(e.PrevRandao); size != 32 {
err = ssz.ErrBytesLengthFn("--.PrevRandao", size, 32)
return
}
dst = append(dst, e.PrevRandao...)
// Field (4) 'FeeRecipient'
if size := len(e.FeeRecipient); size != 20 {
err = ssz.ErrBytesLengthFn("--.FeeRecipient", size, 20)
return
}
dst = append(dst, e.FeeRecipient...)
// Field (4) 'GasLimit'
// Field (5) 'GasLimit'
dst = ssz.MarshalUint64(dst, e.GasLimit)
// Field (5) 'BuilderIndex'
// Field (6) 'BuilderIndex'
dst = ssz.MarshalUint64(dst, uint64(e.BuilderIndex))
// Field (6) 'Slot'
// Field (7) 'Slot'
dst = ssz.MarshalUint64(dst, uint64(e.Slot))
// Field (7) 'Value'
// Field (8) 'Value'
dst = ssz.MarshalUint64(dst, uint64(e.Value))
// Field (8) 'BlobKzgCommitmentsRoot'
// Field (9) 'ExecutionPayment'
dst = ssz.MarshalUint64(dst, uint64(e.ExecutionPayment))
// Field (10) 'BlobKzgCommitmentsRoot'
if size := len(e.BlobKzgCommitmentsRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitmentsRoot", size, 32)
return
@@ -70,7 +80,7 @@ func (e *ExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err error) {
func (e *ExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 180 {
if size != 220 {
return ssz.ErrSize
}
@@ -92,36 +102,45 @@ func (e *ExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
}
e.BlockHash = append(e.BlockHash, buf[64:96]...)
// Field (3) 'FeeRecipient'
// Field (3) 'PrevRandao'
if cap(e.PrevRandao) == 0 {
e.PrevRandao = make([]byte, 0, len(buf[96:128]))
}
e.PrevRandao = append(e.PrevRandao, buf[96:128]...)
// Field (4) 'FeeRecipient'
if cap(e.FeeRecipient) == 0 {
e.FeeRecipient = make([]byte, 0, len(buf[96:116]))
e.FeeRecipient = make([]byte, 0, len(buf[128:148]))
}
e.FeeRecipient = append(e.FeeRecipient, buf[96:116]...)
e.FeeRecipient = append(e.FeeRecipient, buf[128:148]...)
// Field (4) 'GasLimit'
e.GasLimit = ssz.UnmarshallUint64(buf[116:124])
// Field (5) 'GasLimit'
e.GasLimit = ssz.UnmarshallUint64(buf[148:156])
// Field (5) 'BuilderIndex'
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[124:132]))
// Field (6) 'BuilderIndex'
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[156:164]))
// Field (6) 'Slot'
e.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[132:140]))
// Field (7) 'Slot'
e.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[164:172]))
// Field (7) 'Value'
e.Value = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[140:148]))
// Field (8) 'Value'
e.Value = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[172:180]))
// Field (8) 'BlobKzgCommitmentsRoot'
// Field (9) 'ExecutionPayment'
e.ExecutionPayment = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[180:188]))
// Field (10) 'BlobKzgCommitmentsRoot'
if cap(e.BlobKzgCommitmentsRoot) == 0 {
e.BlobKzgCommitmentsRoot = make([]byte, 0, len(buf[148:180]))
e.BlobKzgCommitmentsRoot = make([]byte, 0, len(buf[188:220]))
}
e.BlobKzgCommitmentsRoot = append(e.BlobKzgCommitmentsRoot, buf[148:180]...)
e.BlobKzgCommitmentsRoot = append(e.BlobKzgCommitmentsRoot, buf[188:220]...)
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the ExecutionPayloadBid object
func (e *ExecutionPayloadBid) SizeSSZ() (size int) {
size = 180
size = 220
return
}
@@ -155,26 +174,36 @@ func (e *ExecutionPayloadBid) HashTreeRootWith(hh *ssz.Hasher) (err error) {
}
hh.PutBytes(e.BlockHash)
// Field (3) 'FeeRecipient'
// Field (3) 'PrevRandao'
if size := len(e.PrevRandao); size != 32 {
err = ssz.ErrBytesLengthFn("--.PrevRandao", size, 32)
return
}
hh.PutBytes(e.PrevRandao)
// Field (4) 'FeeRecipient'
if size := len(e.FeeRecipient); size != 20 {
err = ssz.ErrBytesLengthFn("--.FeeRecipient", size, 20)
return
}
hh.PutBytes(e.FeeRecipient)
// Field (4) 'GasLimit'
// Field (5) 'GasLimit'
hh.PutUint64(e.GasLimit)
// Field (5) 'BuilderIndex'
// Field (6) 'BuilderIndex'
hh.PutUint64(uint64(e.BuilderIndex))
// Field (6) 'Slot'
// Field (7) 'Slot'
hh.PutUint64(uint64(e.Slot))
// Field (7) 'Value'
// Field (8) 'Value'
hh.PutUint64(uint64(e.Value))
// Field (8) 'BlobKzgCommitmentsRoot'
// Field (9) 'ExecutionPayment'
hh.PutUint64(uint64(e.ExecutionPayment))
// Field (10) 'BlobKzgCommitmentsRoot'
if size := len(e.BlobKzgCommitmentsRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitmentsRoot", size, 32)
return
@@ -216,7 +245,7 @@ func (s *SignedExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err er
func (s *SignedExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 276 {
if size != 316 {
return ssz.ErrSize
}
@@ -224,22 +253,22 @@ func (s *SignedExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
if s.Message == nil {
s.Message = new(ExecutionPayloadBid)
}
if err = s.Message.UnmarshalSSZ(buf[0:180]); err != nil {
if err = s.Message.UnmarshalSSZ(buf[0:220]); err != nil {
return err
}
// Field (1) 'Signature'
if cap(s.Signature) == 0 {
s.Signature = make([]byte, 0, len(buf[180:276]))
s.Signature = make([]byte, 0, len(buf[220:316]))
}
s.Signature = append(s.Signature, buf[180:276]...)
s.Signature = append(s.Signature, buf[220:316]...)
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the SignedExecutionPayloadBid object
func (s *SignedExecutionPayloadBid) SizeSSZ() (size int) {
size = 276
size = 316
return
}
@@ -713,7 +742,7 @@ func (b *BeaconBlockBodyGloas) MarshalSSZ() ([]byte, error) {
// MarshalSSZTo ssz marshals the BeaconBlockBodyGloas object to a target array
func (b *BeaconBlockBodyGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
offset := int(664)
offset := int(704)
// Field (0) 'RandaoReveal'
if size := len(b.RandaoReveal); size != 96 {
@@ -885,7 +914,7 @@ func (b *BeaconBlockBodyGloas) MarshalSSZTo(buf []byte) (dst []byte, err error)
func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size < 664 {
if size < 704 {
return ssz.ErrSize
}
@@ -917,7 +946,7 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
return ssz.ErrOffset
}
if o3 != 664 {
if o3 != 704 {
return ssz.ErrInvalidVariableOffset
}
@@ -958,12 +987,12 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
if b.SignedExecutionPayloadBid == nil {
b.SignedExecutionPayloadBid = new(SignedExecutionPayloadBid)
}
if err = b.SignedExecutionPayloadBid.UnmarshalSSZ(buf[384:660]); err != nil {
if err = b.SignedExecutionPayloadBid.UnmarshalSSZ(buf[384:700]); err != nil {
return err
}
// Offset (11) 'PayloadAttestations'
if o11 = ssz.ReadOffset(buf[660:664]); o11 > size || o9 > o11 {
if o11 = ssz.ReadOffset(buf[700:704]); o11 > size || o9 > o11 {
return ssz.ErrOffset
}
@@ -1105,7 +1134,7 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
// SizeSSZ returns the ssz encoded size in bytes for the BeaconBlockBodyGloas object
func (b *BeaconBlockBodyGloas) SizeSSZ() (size int) {
size = 664
size = 704
// Field (3) 'ProposerSlashings'
size += len(b.ProposerSlashings) * 416
@@ -1408,7 +1437,7 @@ func (b *BeaconStateGloas) MarshalSSZ() ([]byte, error) {
// MarshalSSZTo ssz marshals the BeaconStateGloas object to a target array
func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
offset := int(2741821)
offset := int(2741861)
// Field (0) 'GenesisTime'
dst = ssz.MarshalUint64(dst, b.GenesisTime)
@@ -1795,7 +1824,7 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size < 2741821 {
if size < 2741861 {
return ssz.ErrSize
}
@@ -1853,7 +1882,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
return ssz.ErrOffset
}
if o7 != 2741821 {
if o7 != 2741861 {
return ssz.ErrInvalidVariableOffset
}
@@ -1963,65 +1992,65 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
if b.LatestExecutionPayloadBid == nil {
b.LatestExecutionPayloadBid = new(ExecutionPayloadBid)
}
if err = b.LatestExecutionPayloadBid.UnmarshalSSZ(buf[2736629:2736809]); err != nil {
if err = b.LatestExecutionPayloadBid.UnmarshalSSZ(buf[2736629:2736849]); err != nil {
return err
}
// Field (25) 'NextWithdrawalIndex'
b.NextWithdrawalIndex = ssz.UnmarshallUint64(buf[2736809:2736817])
b.NextWithdrawalIndex = ssz.UnmarshallUint64(buf[2736849:2736857])
// Field (26) 'NextWithdrawalValidatorIndex'
b.NextWithdrawalValidatorIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[2736817:2736825]))
b.NextWithdrawalValidatorIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[2736857:2736865]))
// Offset (27) 'HistoricalSummaries'
if o27 = ssz.ReadOffset(buf[2736825:2736829]); o27 > size || o21 > o27 {
if o27 = ssz.ReadOffset(buf[2736865:2736869]); o27 > size || o21 > o27 {
return ssz.ErrOffset
}
// Field (28) 'DepositRequestsStartIndex'
b.DepositRequestsStartIndex = ssz.UnmarshallUint64(buf[2736829:2736837])
b.DepositRequestsStartIndex = ssz.UnmarshallUint64(buf[2736869:2736877])
// Field (29) 'DepositBalanceToConsume'
b.DepositBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736837:2736845]))
b.DepositBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736877:2736885]))
// Field (30) 'ExitBalanceToConsume'
b.ExitBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736845:2736853]))
b.ExitBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736885:2736893]))
// Field (31) 'EarliestExitEpoch'
b.EarliestExitEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736853:2736861]))
b.EarliestExitEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736893:2736901]))
// Field (32) 'ConsolidationBalanceToConsume'
b.ConsolidationBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736861:2736869]))
b.ConsolidationBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736901:2736909]))
// Field (33) 'EarliestConsolidationEpoch'
b.EarliestConsolidationEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736869:2736877]))
b.EarliestConsolidationEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736909:2736917]))
// Offset (34) 'PendingDeposits'
if o34 = ssz.ReadOffset(buf[2736877:2736881]); o34 > size || o27 > o34 {
if o34 = ssz.ReadOffset(buf[2736917:2736921]); o34 > size || o27 > o34 {
return ssz.ErrOffset
}
// Offset (35) 'PendingPartialWithdrawals'
if o35 = ssz.ReadOffset(buf[2736881:2736885]); o35 > size || o34 > o35 {
if o35 = ssz.ReadOffset(buf[2736921:2736925]); o35 > size || o34 > o35 {
return ssz.ErrOffset
}
// Offset (36) 'PendingConsolidations'
if o36 = ssz.ReadOffset(buf[2736885:2736889]); o36 > size || o35 > o36 {
if o36 = ssz.ReadOffset(buf[2736925:2736929]); o36 > size || o35 > o36 {
return ssz.ErrOffset
}
// Field (37) 'ProposerLookahead'
b.ProposerLookahead = ssz.ExtendUint64(b.ProposerLookahead, 64)
for ii := 0; ii < 64; ii++ {
b.ProposerLookahead[ii] = ssz.UnmarshallUint64(buf[2736889:2737401][ii*8 : (ii+1)*8])
b.ProposerLookahead[ii] = ssz.UnmarshallUint64(buf[2736929:2737441][ii*8 : (ii+1)*8])
}
// Field (38) 'ExecutionPayloadAvailability'
if cap(b.ExecutionPayloadAvailability) == 0 {
b.ExecutionPayloadAvailability = make([]byte, 0, len(buf[2737401:2738425]))
b.ExecutionPayloadAvailability = make([]byte, 0, len(buf[2737441:2738465]))
}
b.ExecutionPayloadAvailability = append(b.ExecutionPayloadAvailability, buf[2737401:2738425]...)
b.ExecutionPayloadAvailability = append(b.ExecutionPayloadAvailability, buf[2737441:2738465]...)
// Field (39) 'BuilderPendingPayments'
b.BuilderPendingPayments = make([]*BuilderPendingPayment, 64)
@@ -2029,27 +2058,27 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
if b.BuilderPendingPayments[ii] == nil {
b.BuilderPendingPayments[ii] = new(BuilderPendingPayment)
}
if err = b.BuilderPendingPayments[ii].UnmarshalSSZ(buf[2738425:2741753][ii*52 : (ii+1)*52]); err != nil {
if err = b.BuilderPendingPayments[ii].UnmarshalSSZ(buf[2738465:2741793][ii*52 : (ii+1)*52]); err != nil {
return err
}
}
// Offset (40) 'BuilderPendingWithdrawals'
if o40 = ssz.ReadOffset(buf[2741753:2741757]); o40 > size || o36 > o40 {
if o40 = ssz.ReadOffset(buf[2741793:2741797]); o40 > size || o36 > o40 {
return ssz.ErrOffset
}
// Field (41) 'LatestBlockHash'
if cap(b.LatestBlockHash) == 0 {
b.LatestBlockHash = make([]byte, 0, len(buf[2741757:2741789]))
b.LatestBlockHash = make([]byte, 0, len(buf[2741797:2741829]))
}
b.LatestBlockHash = append(b.LatestBlockHash, buf[2741757:2741789]...)
b.LatestBlockHash = append(b.LatestBlockHash, buf[2741797:2741829]...)
// Field (42) 'LatestWithdrawalsRoot'
if cap(b.LatestWithdrawalsRoot) == 0 {
b.LatestWithdrawalsRoot = make([]byte, 0, len(buf[2741789:2741821]))
b.LatestWithdrawalsRoot = make([]byte, 0, len(buf[2741829:2741861]))
}
b.LatestWithdrawalsRoot = append(b.LatestWithdrawalsRoot, buf[2741789:2741821]...)
b.LatestWithdrawalsRoot = append(b.LatestWithdrawalsRoot, buf[2741829:2741861]...)
// Field (7) 'HistoricalRoots'
{
@@ -2247,7 +2276,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
// SizeSSZ returns the ssz encoded size in bytes for the BeaconStateGloas object
func (b *BeaconStateGloas) SizeSSZ() (size int) {
size = 2741821
size = 2741861
// Field (7) 'HistoricalRoots'
size += len(b.HistoricalRoots) * 32

@@ -200,6 +200,7 @@ go_test(
"fulu__sanity__blocks_test.go",
"fulu__sanity__slots_test.go",
"fulu__ssz_static__ssz_static_test.go",
"gloas__operations__execution_payload_test.go",
"gloas__ssz_static__ssz_static_test.go",
"phase0__epoch_processing__effective_balance_updates_test.go",
"phase0__epoch_processing__epoch_processing_test.go",
@@ -278,6 +279,7 @@ go_test(
"//testing/spectest/shared/fulu/rewards:go_default_library",
"//testing/spectest/shared/fulu/sanity:go_default_library",
"//testing/spectest/shared/fulu/ssz_static:go_default_library",
"//testing/spectest/shared/gloas/operations:go_default_library",
"//testing/spectest/shared/gloas/ssz_static:go_default_library",
"//testing/spectest/shared/phase0/epoch_processing:go_default_library",
"//testing/spectest/shared/phase0/finality:go_default_library",

@@ -0,0 +1,11 @@
package mainnet

import (
	"testing"

	"github.com/OffchainLabs/prysm/v7/testing/spectest/shared/gloas/operations"
)

func TestMainnet_Gloas_Operations_ExecutionPayloadEnvelope(t *testing.T) {
	operations.RunExecutionPayloadTest(t, "mainnet")
}

@@ -0,0 +1,29 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"execution_payload.go",
"helpers.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/testing/spectest/shared/gloas/operations",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/spectest/utils:go_default_library",
"//testing/util:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_google_go_cmp//cmp:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_google_protobuf//testing/protocmp:go_default_library",
],
)

@@ -0,0 +1,119 @@
package operations

import (
	"context"
	"os"
	"path"
	"strings"
	"testing"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
	enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
	"github.com/OffchainLabs/prysm/v7/testing/require"
	"github.com/OffchainLabs/prysm/v7/testing/spectest/utils"
	"github.com/OffchainLabs/prysm/v7/testing/util"
	"github.com/bazelbuild/rules_go/go/tools/bazel"
	"github.com/golang/snappy"
	"github.com/google/go-cmp/cmp"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/testing/protocmp"
)

type ExecutionConfig struct {
	Valid bool `json:"execution_valid"`
}

func sszToSignedExecutionPayloadEnvelope(b []byte) (interfaces.ROSignedExecutionPayloadEnvelope, error) {
	envelope := &enginev1.SignedExecutionPayloadEnvelope{}
	if err := envelope.UnmarshalSSZ(b); err != nil {
		return nil, err
	}
	return blocks.WrappedROSignedExecutionPayloadEnvelope(envelope)
}

func RunExecutionPayloadTest(t *testing.T, config string) {
	require.NoError(t, utils.SetConfig(t, config))
	testFolders, testsFolderPath := utils.TestFolders(t, config, "gloas", "operations/execution_payload/pyspec_tests")
	if len(testFolders) == 0 {
		t.Fatalf("No test folders found for %s/%s/%s", config, "gloas", "operations/execution_payload/pyspec_tests")
	}
	for _, folder := range testFolders {
		t.Run(folder.Name(), func(t *testing.T) {
			helpers.ClearCache()

			// Check if signed_envelope.ssz_snappy exists, skip if not
			_, err := bazel.Runfile(path.Join(testsFolderPath, folder.Name(), "signed_envelope.ssz_snappy"))
			if err != nil && strings.Contains(err.Error(), "could not locate file") {
				t.Skipf("Skipping test %s: signed_envelope.ssz_snappy not found", folder.Name())
				return
			}

			// Read the signed execution payload envelope
			envelopeFile, err := util.BazelFileBytes(testsFolderPath, folder.Name(), "signed_envelope.ssz_snappy")
			require.NoError(t, err)
			envelopeSSZ, err := snappy.Decode(nil /* dst */, envelopeFile)
			require.NoError(t, err, "Failed to decompress envelope")
			signedEnvelope, err := sszToSignedExecutionPayloadEnvelope(envelopeSSZ)
			require.NoError(t, err, "Failed to unmarshal signed envelope")

			preBeaconStateFile, err := util.BazelFileBytes(testsFolderPath, folder.Name(), "pre.ssz_snappy")
			require.NoError(t, err)
			preBeaconStateSSZ, err := snappy.Decode(nil /* dst */, preBeaconStateFile)
			require.NoError(t, err, "Failed to decompress")
			preBeaconState, err := sszToState(preBeaconStateSSZ)
			require.NoError(t, err)

			postSSZFilepath, err := bazel.Runfile(path.Join(testsFolderPath, folder.Name(), "post.ssz_snappy"))
			postSSZExists := true
			if err != nil && strings.Contains(err.Error(), "could not locate file") {
				postSSZExists = false
			} else {
				require.NoError(t, err)
			}

			file, err := util.BazelFileBytes(testsFolderPath, folder.Name(), "execution.yaml")
			require.NoError(t, err)
			config := &ExecutionConfig{}
			require.NoError(t, utils.UnmarshalYaml(file, config), "Failed to Unmarshal")
			if !config.Valid {
				t.Skip("Skipping invalid execution engine test as it's never supported")
			}

			err = gloas.ProcessExecutionPayload(context.Background(), preBeaconState, signedEnvelope)
			if postSSZExists {
				require.NoError(t, err)
				comparePostState(t, postSSZFilepath, preBeaconState)
			} else if config.Valid {
				// Note: This doesn't test anything worthwhile. It essentially tests
				// that *any* error has occurred, not any specific error.
				if err == nil {
					t.Fatal("Did not fail when expected")
				}
				t.Logf("Expected failure; failure reason = %v", err)
				return
			}
		})
	}
}

func comparePostState(t *testing.T, postSSZFilepath string, want state.BeaconState) {
	postBeaconStateFile, err := os.ReadFile(postSSZFilepath) // #nosec G304
	require.NoError(t, err)
	postBeaconStateSSZ, err := snappy.Decode(nil /* dst */, postBeaconStateFile)
	require.NoError(t, err, "Failed to decompress")
	postBeaconState, err := sszToState(postBeaconStateSSZ)
	require.NoError(t, err)
	postBeaconStatePb, ok := postBeaconState.ToProtoUnsafe().(proto.Message)
	require.Equal(t, true, ok, "post beacon state did not return a proto.Message")
	pbState, ok := want.ToProtoUnsafe().(proto.Message)
	require.Equal(t, true, ok, "beacon state did not return a proto.Message")
	if !proto.Equal(postBeaconStatePb, pbState) {
		diff := cmp.Diff(pbState, postBeaconStatePb, protocmp.Transform())
		t.Fatalf("Post state does not match expected state, diff: %s", diff)
	}
}
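
For readers who want to exercise the same path outside the spectest harness, the helpers above compose as shown below. This is a minimal sketch under the same imports as the runner, not part of this PR.

```go
// Illustrative only: decode raw SSZ bytes for a Gloas pre-state and a signed
// execution payload envelope, then run the state transition under test.
// ProcessExecutionPayload mutates the state in place and returns an error.
func processFromSSZ(stateSSZ, envelopeSSZ []byte) error {
	preState, err := sszToState(stateSSZ)
	if err != nil {
		return err
	}
	signedEnvelope, err := sszToSignedExecutionPayloadEnvelope(envelopeSSZ)
	if err != nil {
		return err
	}
	return gloas.ProcessExecutionPayload(context.Background(), preState, signedEnvelope)
}
```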

@@ -0,0 +1,15 @@
package operations

import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
	state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)

func sszToState(b []byte) (state.BeaconState, error) {
	base := &ethpb.BeaconStateGloas{}
	if err := base.UnmarshalSSZ(b); err != nil {
		return nil, err
	}
	return state_native.InitializeFromProtoGloas(base)
}

Some files were not shown because too many files have changed in this diff