Compare commits

...

27 Commits

Author SHA1 Message Date
james-prysm
0d9e4aaaab adding some head tolerance 2026-01-20 11:59:27 -06:00
james-prysm
361cf6f617 adding in optimistic check and timeout 2026-01-20 11:21:46 -06:00
Luca | Serenita
055c6eb784 fix: typo in AggregateDueBPS (#16194)

**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

This PR fixes a typo that caused a wrong variable name to be
returned by the Beacon API `/eth/v1/config/spec` endpoint:

```
curl http://127.0.0.1:49183/eth/v1/config/spec
{"data":{"AGGREGRATE_DUE_BPS":"6667", [...]
```

I discovered the discrepancy while testing the change to these "BPS"
values in the Vero VC, which checks spec values against the ones it ships
with.

**Which issue(s) does this PR fix?**

N/A

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-01-20 15:16:13 +00:00
terence
d33389fb54 gloas: add new pending payment processing (#15655)
This PR implements
[process_builder_pending_payments](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#new-process_builder_pending_payments)
and spec tests.
2026-01-16 21:19:06 +00:00
Maxim Evtush
ce72deb3c0 Fix authentication bypass for direct /v2/validator/* endpoints (#16226)
This PR fixes a security vulnerability where authenticated endpoints
could be accessed without authorization by using direct
`/v2/validator/*` paths instead of `/api/v2/validator/*`.

The `AuthTokenHandler` middleware only checked for authentication on
requests containing `/api/v2/validator/` or `/eth/v1` prefixes, but the
same handlers are also registered for direct `/v2/validator/*` routes.
This allowed attackers to bypass authentication by simply removing the
`/api` prefix from the URL.
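A minimal sketch of the hardened check, assuming a plain `net/http` middleware; `requiresAuth`, the token scheme, and the wiring are illustrative, not Prysm's actual `AuthTokenHandler`:

```go
package middleware

import (
	"net/http"
	"strings"
)

// requiresAuth must cover both route forms: checking only the
// "/api"-prefixed paths is what allowed the bypass.
func requiresAuth(path string) bool {
	return strings.HasPrefix(path, "/api/v2/validator/") ||
		strings.HasPrefix(path, "/v2/validator/") ||
		strings.HasPrefix(path, "/eth/v1")
}

// authMiddleware rejects unauthenticated requests to protected paths.
func authMiddleware(next http.Handler, token string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if requiresAuth(r.URL.Path) && r.Header.Get("Authorization") != "Bearer "+token {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```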

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-01-16 19:43:27 +00:00
Manu NALEPA
ec48e6340c Stop batching of KZG verification for data column sidecars incoming via gossip (#16240)
**What type of PR is this?**
Optimisation

**What does this PR do? Why is it needed?**
This is an alternate take of:
- https://github.com/OffchainLabs/prysm/pull/16220


**Test configuration:**
- Using the `--disable-get-blobs-v2` and `--supernode` flags
- On [VPS 3000 G11](https://www.netcup.com/en/server/vps)

**4H average**
| Impl. | CPU usage | Sidecar gossip verif. dur. | DA waiting time | Chain service proc. time | Total |
|--------|--------|--------|--------|--------|--------|
| `develop` | 132% | 185 ms | 82.2 ms | **457 ms** | 539 ms |
| https://github.com/OffchainLabs/prysm/pull/16220 | 144% | 76.5 ms | 21.7 ms | 473 ms | **495 ms** |
| This PR | **117%** | **26 ms** | **16.3 ms** | 479 ms | **495 ms** |

 
**Before this PR:**
<img width="950" height="1296" alt="image"
src="https://github.com/user-attachments/assets/1fb45282-a9c8-4543-adb3-39b04b79eab2"
/>

**With this PR:**
<img width="950" height="1301" alt="image"
src="https://github.com/user-attachments/assets/993feb49-ef38-4052-9cb4-aebe93456eba"
/>

Metrics:
- `beacon_data_column_sidecar_gossip_verification_milliseconds`
- `da_waited_time_milliseconds`

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-16 18:53:45 +00:00
Preston Van Loon
a135a336c3 Fix issue: Prevent makeslice panic from invalid Count values (#16227)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Add defensive checks to prevent a panic from large Count values that could
result from unsigned integer underflow (a sketch follows the list):

1. In `batch.blockRequest()` and `batch.blobRequest()`: return Count=0 when
end <= begin, preventing the underflow at the source.

2. In `SendBeaconBlocksByRangeRequest()`: cap the slice capacity to
MaxRequestBlock before allocation to prevent a panic even if upstream code
produces invalid values.
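A hedged sketch of both guards; the function and constant names mirror the description above but are not the exact Prysm code:

```go
package rpc

// count guards the underflow at the source: end <= begin yields 0
// instead of wrapping around to a huge uint64.
func count(begin, end uint64) uint64 {
	if end <= begin {
		return 0
	}
	return end - begin
}

// allocBlocks caps the allocation so an invalid count from upstream code
// can no longer trigger a makeslice panic.
func allocBlocks(count, maxRequestBlock uint64) []any {
	if count > maxRequestBlock {
		count = maxRequestBlock
	}
	return make([]any, 0, count)
}
```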


**Which issue(s) does this PR fix?**

Fixes #16223

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-16 14:25:04 +00:00
Manu NALEPA
5f189f002e Remove unused delay parameter from fetchOriginDataColumnSidecars function. (#16262)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Remove unused delay parameter from `fetchOriginDataColumnSidecars`
function.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-16 14:04:42 +00:00
willian.eth
bca6166e82 Add shell completion for beacon-chain and validator CLI (#16245)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Introduces a `completion` subcommand to `beacon-chain` and `validator`
that outputs shell completion scripts. Supports Bash, Zsh, and Fish
shells.

```bash
# Load completions in current session
source <(beacon-chain completion bash)

# Persist for future sessions
beacon-chain completion zsh > "${fpath[1]}/_beacon-chain"
validator completion fish > ~/.config/fish/completions/validator.fish
```

Once loaded, users can press TAB to complete subcommands, nested
commands, and flags. Flag completion supports prefix matching (e.g.,
typing `--exec<TAB>` suggests `--execution-endpoint`,
`--execution-headers`).

**Which issue(s) does this PR fix?**

Fixes #16244

**Other notes for review**

The implementation adds three files to the existing `cmd` package:
- `completion.go` - Defines `CompletionCommand()` returning a
`*cli.Command` with `bash`, `zsh`, `fish` subcommands
- `completion_scripts.go` - Contains the shell script templates
- `completion_test.go` - Unit tests for command structure and script
content

Changes to `beacon-chain` and `validator`:
- Import `cmd.CompletionCommand("binary-name")` in the Commands slice
- Set `EnableBashCompletion: true` on the cli.App to activate
urfave/cli's `--generate-bash-completion` hidden flag

The shell scripts call the binary with `--generate-bash-completion`
appended to get context-aware suggestions. This means completions
automatically reflect the current binary's flags and commands.
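As a rough illustration of the wiring described above, here is a minimal urfave/cli v2 program with a `completion bash` subcommand; the emitted one-liner is a simplified stand-in for the PR's actual script templates:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

// completionCommand returns a `completion` command with per-shell
// subcommands, mirroring the structure described above.
func completionCommand(binary string) *cli.Command {
	return &cli.Command{
		Name:  "completion",
		Usage: "Output shell completion scripts",
		Subcommands: []*cli.Command{
			{
				Name: "bash",
				Action: func(c *cli.Context) error {
					// Delegate suggestions to the binary itself via the
					// hidden --generate-bash-completion flag.
					fmt.Printf("complete -C '%s --generate-bash-completion' %s\n", binary, binary)
					return nil
				},
			},
			// zsh and fish subcommands follow the same pattern.
		},
	}
}

func main() {
	app := &cli.App{
		Name:                 "beacon-chain",
		EnableBashCompletion: true, // activates urfave/cli's hidden completion flag
		Commands:             []*cli.Command{completionCommand("beacon-chain")},
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```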

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

Signed-off-by: Willian Paixao <willian@ufpa.br>
2026-01-15 20:07:11 +00:00
Bastin
b6818853b4 state-diff small changes (#16260)
**What does this PR do?**
Small touch-ups on the state diff code.
2026-01-15 18:47:29 +00:00
satushh
5a56bfcf98 Print commitments instead of indices (#16258)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Print commitments instead of indices in the `missingCommitError` function

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-15 15:37:32 +00:00
terence
a08f185170 gloas: add new execution payload bid processing (#15638)
This PR implements
[process_execution_payload_bid](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#new-process_execution_payload_bid)
and spec tests
2026-01-15 10:52:19 +00:00
Preston Van Loon
15b1d68249 CI: Add gazelle update-repos check (#16257)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

This won't pass until #16252 merges

**Which issue(s) does this PR fix?**

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-15 03:47:34 +00:00
Victor Farazdagi
885d9cc478 fix: use input genesis.json timestamp in prysmctl (#16239)
**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

- When no `--genesis-time` is provided, we default to `now()` instead of
using the `timestamp` from the `--geth-genesis-json-in` input file (see the
sketch below)
- This results in inconsistencies, especially if the input file is not
overwritten via `--geth-genesis-json-out` (say, when the generator is used
to produce only the `genesis.ssz` file, as described in the original
issue)
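A sketch of the fixed precedence, assuming a simplified geth genesis struct (the real `genesis.json` encodes `timestamp` as a hex string); names are illustrative, not prysmctl's actual code:

```go
package genesis

import (
	"encoding/json"
	"os"
	"time"
)

type gethGenesis struct {
	Timestamp uint64 `json:"timestamp"` // simplified; geth uses a hex string
}

// resolveGenesisTime prefers an explicit --genesis-time value, then the
// input file's timestamp, and only falls back to now() when both are absent.
func resolveGenesisTime(flagValue uint64, genesisJSONPath string) (uint64, error) {
	if flagValue != 0 {
		return flagValue, nil
	}
	data, err := os.ReadFile(genesisJSONPath)
	if err != nil {
		return 0, err
	}
	var g gethGenesis
	if err := json.Unmarshal(data, &g); err != nil {
		return 0, err
	}
	if g.Timestamp != 0 {
		return g.Timestamp, nil
	}
	return uint64(time.Now().Unix()), nil
}
```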

**Which issue(s) does this PR fix?**

Fixes #16002

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-15 03:47:06 +00:00
Victor Farazdagi
511248213c refactor: remove gometalinter refs (#16229)
**What type of PR is this?**

Refactor/Cleanup

**What does this PR do? Why is it needed?**

- the [gometalinter](https://github.com/alecthomas/gometalinter) project is
deprecated (and has been for a long time)
- it was removed from the Prysm project in #2100
- although unused, `gometalinter` references still existed in the Bazel
config. Since `gometalinter.json` was removed in PR-2100, `bazel run
//:gometalinter` errors unless the config file is re-created, so this was
dead code.

**Which issue(s) does this PR fix?**

No issue, simple cleanup

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-15 03:46:13 +00:00
Preston Van Loon
1a936e2ffa Update go-ethereum to v1.16.8 (security fix release) (#16252)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Updating our internal geth dependency after today's release.

**Which issue(s) does this PR fix?**

**Other notes for review**

No known security issues in Prysm. 

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-14 20:48:44 +00:00
Galoretka
6027518ad5 fix: stop SlotIntervalTicker goroutine leaks (#16241)
`SlotIntervalTicker` was created in `prepareForkChoiceAtts` and in the
fork-choice attestation processing routine without ever calling `Done()`
when the service context was cancelled. Once the consuming goroutine
exits on `ctx.Done()`, the ticker keeps running and eventually blocks on
sending to its channel, leaving a leaked goroutine behind.

This change wires the lifetime of the `SlotIntervalTicker` to the
corresponding service contexts by calling `ticker.Done()` on
`ctx.Done()` in both call sites. This keeps the behaviour of the
routines unchanged while ensuring the ticker goroutines exit cleanly on
shutdown, consistent with how regular `SlotTicker` is handled elsewhere
in the codebase.
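A self-contained sketch of the leak and the fix, assuming `SlotIntervalTicker` exposes `C()` and `Done()` as used below; the stand-in `intervalTicker` is defined here only for illustration:

```go
package ticker

import (
	"context"
	"time"
)

// intervalTicker mimics the assumed SlotIntervalTicker surface: an
// unbuffered C() channel fed by a goroutine, stopped via Done().
type intervalTicker struct {
	c    chan time.Time
	done chan struct{}
}

func newIntervalTicker(d time.Duration) *intervalTicker {
	t := &intervalTicker{c: make(chan time.Time), done: make(chan struct{})}
	go func() {
		tick := time.NewTicker(d)
		defer tick.Stop()
		for {
			select {
			case <-t.done:
				return
			case now := <-tick.C:
				select {
				case t.c <- now:
				case <-t.done: // without Done(), the send above blocks forever
					return
				}
			}
		}
	}()
	return t
}

func (t *intervalTicker) C() <-chan time.Time { return t.c }
func (t *intervalTicker) Done()               { close(t.done) }

// consume mirrors the fixed call sites: the ticker's lifetime is wired to
// the service context so its goroutine exits cleanly on shutdown.
func consume(ctx context.Context, t *intervalTicker) {
	for {
		select {
		case <-ctx.Done():
			t.Done() // the fix: release the ticker goroutine
			return
		case <-t.C():
			// process the slot interval as before
		}
	}
}
```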
2026-01-14 17:14:17 +00:00
Manu NALEPA
e4a6bc7065 Add metrics monitoring blob count per block (#16254)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
Add a metric monitoring the blob count per block for blocks received via
gossip

<img width="954" height="659" alt="image"
src="https://github.com/user-attachments/assets/ae9ff9ed-06ae-473b-bb4e-f162cf17702b"
/>

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-14 13:18:59 +00:00
satushh
a2982f0807 Avoid unnecessary heap allocation (#16251)
**What type of PR is this?**

Micro optimisation

**What does this PR do? Why is it needed?**
Avoids an unnecessary heap allocation.

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-14 10:08:47 +00:00
james-prysm
73e9d6e0ce adding cache for attestation data so we don't call it multiple times (#16236)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Post-Electra, attestation data requests within a slot are identical because
the committee index is always 0, so we can save some API calls by caching
the attestation data per slot.
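A hedged sketch of the per-slot cache; these names are illustrative, not the PR's actual validator-client code:

```go
package attcache

import "sync"

type Slot uint64

type AttestationData struct {
	Slot Slot // post-Electra the committee index is always 0, so a
	// single response serves every duty in the slot
}

type slotCache struct {
	mu   sync.Mutex
	slot Slot
	data *AttestationData
}

// Get returns the cached attestation data for slot, fetching it at most
// once per slot via the supplied fetch function.
func (c *slotCache) Get(slot Slot, fetch func(Slot) (*AttestationData, error)) (*AttestationData, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.data != nil && c.slot == slot {
		return c.data, nil
	}
	data, err := fetch(slot)
	if err != nil {
		return nil, err
	}
	c.slot, c.data = slot, data
	return data, nil
}
```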

**Which issue(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16228

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-14 02:30:35 +00:00
Potuz
2e43d50364 Use dependent root when validating data column (#16250)
This PR uses the head state to validate data columns in more places than
we currently do. When the parent state is from the previous epoch and is
the head (for example at slot 0), we use the head state directly instead
of replaying slots and doing an epoch transition.

Another change is that instead of replaying until the parent state in
the case of a head miss, we only replay until the target checkpoint
state, which is more likely to be a checkpoint state in the epoch
boundary cache.
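A rough sketch of the state-selection order described above, with illustrative names rather than Prysm's actual stategen API:

```go
package validation

type BeaconState interface{ Slot() uint64 }

// pickBaseState prefers the head state when the parent is the head, even
// when the parent state is from the previous epoch, avoiding slot replay
// and an epoch transition. On a head miss it replays only to the target
// checkpoint state, which is likely in the epoch boundary cache.
func pickBaseState(parentIsHead bool, head BeaconState, targetCheckpointState func() BeaconState) BeaconState {
	if parentIsHead {
		return head
	}
	return targetCheckpointState()
}
```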

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 18:19:41 +00:00
Nina
71e7b526d2 docs: fix broken links (#15856)

**What type of PR is this?**

 Documentation

**What does this PR do? Why is it needed?**

Fixes broken and outdated links in the documentation, following code
standards and avoiding possible future problems with redirects



**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-01-13 16:55:50 +00:00
james-prysm
ea8baab7b0 fulu e2e (#15640)

**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

- upgrading go-ethereum to 1.16.7
- enabling fulu e2e
- added new e2e field params and a build option
- removes github.com/MariusVanDerWijden/FuzzyVM
v0.0.0-20240516070431-7828990cad7d and
github.com/MariusVanDerWijden/tx-fuzz v1.4.0

What changed:
- e2e slots per epoch increased from 6 to 8 to match the minimum preset, a
33% increase in run time (needed because field params only have minimum
presets and the proposer look-ahead feature uses them, so a mismatch fails)
- reduced presubmit epochs from 18 to 10 and only run electra -> fulu
- moved the bellatrix -> fulu post-merge test to post-submit

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2026-01-13 02:29:51 +00:00
Potuz
37c5178fa8 Migrate to cold with state diffs (#16049)
This PR adds the logic to migrate to cold storage when the database has the
hdiff feature. The main difference is that the boundary states must have
the right slot, so they need to be advanced and aren't necessarily the
post-state of a given beacon block root.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:56:18 +00:00
Potuz
13f8e7b47f Hashtree from source (#16216)
An attempt at #14524 without using the syso files

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 17:25:14 +00:00
Manu NALEPA
124eadd56e Earliest available slot at node start: Use the justified checkpoint. (#16230)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
**Before this PR:**
When starting the node with the `--[semi-]supernode` flag with an
already existing DB, the new value of the earliest available slot was
set to the slot of the latest finalized checkpoint.

==> Between the latest finalized checkpoint and the slot where Prysm
actually starts to sync after the reboot (the justified checkpoint), Prysm
advertises a higher `cgc` than it should.

**With this PR:**
If, at node start, the node needs to increase its `cgc`, then it uses
the latest justified checkpoint (+ 1) for the new `eas`.
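A small sketch of the new rule, with assumed names:

```go
package custody

type Slot uint64

// newEarliestAvailableSlot returns the eas to advertise at node start: if
// the cgc must increase, data columns are only guaranteed from the slot
// after the latest justified checkpoint, where syncing resumes.
func newEarliestAvailableSlot(justifiedSlot, currentEAS Slot, cgcIncreased bool) Slot {
	if !cgcIncreased {
		return currentEAS
	}
	return justifiedSlot + 1
}
```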

**Which issue(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16066

**Example of a test case:**
1. Start the node with an empty DB, without any validator connected.
```
[2026-01-09 13:38:21.77] DEBUG db: Custody info earliestAvailableSlot=2145952 groupCount=4
```

2. Try: 
```
curl http://localhost:3500/eth/v1/beacon/blobs/2145952 | jq
{
  "message": "Not found: the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--semi-supernode` flag to ensure this call to succeed",
  "code": 404
}
```

==> This is expected, since `cgc=4 < 64`

3. After a few epochs, add a few validators (< 64):
```
[2026-01-09 13:43:21.77] DEBUG db: Custody info earliestAvailableSlot=2146066 groupCount=10
```

4. Try:
```
curl http://localhost:3500/eth/v1/beacon/blobs/2146066 | jq
{
  "message": "Not found: the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--semi-supernode` flag to ensure this call to succeed",
  "code": 404
}
```

==> This is expected, since `cgc=10 < 64`


5. After a few epochs, restart the node:
```
[2026-01-09 13:46:44.09] DEBUG db: Custody info earliestAvailableSlot=2146066 groupCount=10
```

==> OK (No change)

6. Restart the node with the `--semi-supernode` flag.
```
[2026-01-09 13:49:26.14] DEBUG db: Custody info earliestAvailableSlot=2146049 groupCount=64
```

The `eas` goes backward, which is expected, since the node restarts
syncing from the latest justified checkpoint, which in this case is lower
than the slot where we added validators in step 3.

Try:
```
curl http://localhost:3500/eth/v1/beacon/blobs/2146049 | jq
==> OK
```




The whole `eas/cgc` advertisement management should probably be rethought,
for example by using what the node actually has in its DB to decide which
`eas/cgc` should be advertised.
(With particular attention to the full-node case until
https://github.com/OffchainLabs/prysm/issues/16065 is fixed.)

However, this PR fixes the linked issue, so it's a good stopgap until a
deeper redesign is done.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-12 14:25:12 +00:00
Potuz
76420d9428 Update Spectests to v1.7.0.alpha-1 (#16246) 2026-01-11 12:54:01 +00:00
176 changed files with 6449 additions and 1583 deletions

View File

@@ -1,5 +1,4 @@
load("@bazel_gazelle//:def.bzl", "gazelle")
load("@com_github_atlassian_bazel_tools//gometalinter:def.bzl", "gometalinter")
load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
load("@io_bazel_rules_go//go:def.bzl", "nogo")
@@ -55,15 +54,6 @@ alias(
visibility = ["//visibility:public"],
)
gometalinter(
name = "gometalinter",
config = "//:.gometalinter.json",
paths = [
"./...",
],
prefix = prefix,
)
goimports(
name = "goimports",
display_diffs = True,

View File

@@ -72,7 +72,7 @@ Do NOT add new `go_repository` to the WORKSPACE file. All dependencies should li
To enable conditional compilation and custom configuration for tests (where compiled code has more
debug info, while not being completely optimized), we rely on Go's build tags/constraints mechanism
(see official docs on [build constraints](https://golang.org/pkg/go/build/#hdr-Build_Constraints)).
(see official docs on [build constraints](https://pkg.go.dev/go/build#hdr-Build_Constraints)).
Therefore, whenever using `go test`, do not forget to pass in extra build tag, eg:
```bash

View File

@@ -9,7 +9,7 @@ This README details how to setup Prysm for interop testing for usage with other
## Installation & Setup
1. Install [Bazel](https://docs.bazel.build/versions/master/install.html) **(Recommended)**
1. Install [Bazel](https://bazel.build/install) **(Recommended)**
2. `git clone https://github.com/OffchainLabs/prysm && cd prysm`
3. `bazel build //cmd/...`

View File

@@ -15,7 +15,7 @@
## 📖 Overview
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com).
This is the core repository for Prysm, a [Golang](https://go.dev/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com).
See the [Changelog](https://github.com/OffchainLabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
@@ -23,7 +23,7 @@ See the [Changelog](https://github.com/OffchainLabs/prysm/releases) for details
## 🚀 Getting Started
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the **[official documentation portal](https://docs.prylabs.network)**.
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the **[official documentation portal](https://prysm.offchainlabs.com/docs/)**.
💬 **Need help?** Join our **[Discord Community](https://discord.gg/prysm)** for support.
@@ -51,7 +51,7 @@ Prysm maintains two permanent branches:
### 🛠 Contribution Guide
Want to get involved? Check out our **[Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/)** to learn more!
Want to get involved? Check out our **[Contribution Guide](https://prysm.offchainlabs.com/docs/contribute/contribution-guidelines/)** to learn more!
---

View File

@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.7.0-alpha.0"
consensus_spec_version = "v1.7.0-alpha.1"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-b+rJOuVqq+Dy53quPcNYcQwPFoMU7Wp7tdUVe7n0g8w=",
"minimal": "sha256-qxRIxtjPxVsVCY90WsBJKhk0027XDSmhjnRvRN14V1c=",
"mainnet": "sha256-NsuOQG3LzeiEE1TrWuvQ6vu6BboHv7h7f/RTS0pWkCs=",
"general": "sha256-j5R3jA7Oo4OSDMTvpMuD+8RomaCByeFSwtfkq6fL0Zg=",
"minimal": "sha256-tdTqByoyswOS4r6OxFmo70y2BP7w1TgEok+gf4cbxB0=",
"mainnet": "sha256-5gB4dt6SnSDKzdBc06VedId3NkgvSYyv9n9FRxWKwYI=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-hwNdUBgdBrkk6pWIpNYbzbwswUuOu6AMD2exN8uv+QQ=",
integrity = "sha256-J+43DrK1pF658kTXTwMS6zGf4KDjvas++m8w2a8swpg=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -423,10 +423,6 @@ load("@prysm//testing/endtoend:deps.bzl", "e2e_deps")
e2e_deps()
load("@com_github_atlassian_bazel_tools//gometalinter:deps.bzl", "gometalinter_dependencies")
gometalinter_dependencies()
load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")
gazelle_dependencies(go_sdk = "go_sdk")

View File

@@ -74,7 +74,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -174,7 +173,6 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -13,7 +13,7 @@ go_library(
deps = [
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_c_kzg_4844_v2//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -221,6 +221,19 @@ var (
Buckets: []float64{1, 2, 4, 8, 16, 32},
},
)
commitmentCount = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "commitment_count_max_21",
Help: "The number of blob KZG commitments per block.",
Buckets: []float64{1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21},
},
)
maxBlobsPerBlock = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "max_blobs_per_block",
Help: "The maximum number of blobs allowed in a block.",
},
)
)
// reportSlotMetrics reports slot related metrics.

View File

@@ -94,6 +94,7 @@ func (s *Service) spawnProcessAttestationsRoutine() {
for {
select {
case <-s.ctx.Done():
ticker.Done()
return
case slotInterval := <-ticker.C():
if slotInterval.Interval > 0 {

View File

@@ -16,6 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/slasher/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -258,32 +259,55 @@ func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityChecker, blo
}
func (s *Service) reportPostBlockProcessing(
block interfaces.SignedBeaconBlock,
signedBlock interfaces.SignedBeaconBlock,
blockRoot [32]byte,
receivedTime time.Time,
daWaitedTime time.Duration,
) {
block := signedBlock.Block()
if block == nil {
log.WithField("blockRoot", blockRoot).Error("Nil block")
return
}
// Reports on block and fork choice metrics.
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
finalized := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
reportSlotMetrics(block.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
reportSlotMetrics(block.Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
// Log block sync status.
cp = s.cfg.ForkChoiceStore.JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, s.genesisTime, daWaitedTime); err != nil {
if err := logBlockSyncStatus(block, blockRoot, justified, finalized, receivedTime, s.genesisTime, daWaitedTime); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data
if err := logPayload(block.Block()); err != nil {
if err := logPayload(block); err != nil {
log.WithError(err).Error("Unable to log debug block payload data")
}
// Log state transition data.
if err := logStateTransitionData(block.Block()); err != nil {
if err := logStateTransitionData(block); err != nil {
log.WithError(err).Error("Unable to log state transition data")
}
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
chainServiceProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
body := block.Body()
if body == nil {
log.WithField("blockRoot", blockRoot).Error("Nil block body")
return
}
commitments, err := body.BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Unable to get blob KZG commitments")
}
commitmentCount.Observe(float64(len(commitments)))
maxBlobsPerBlock.Set(float64(params.BeaconConfig().MaxBlobsPerBlock(block.Slot())))
}
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
@@ -30,7 +29,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -291,19 +289,6 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "failed to initialize blockchain service")
}
if !params.FuluEnabled() {
return nil
}
earliestAvailableSlot, custodySubnetCount, err := s.updateCustodyInfoInDB(saved.Slot())
if err != nil {
return errors.Wrap(err, "could not get and save custody group count")
}
if _, _, err := s.cfg.P2P.UpdateCustodyInfo(earliestAvailableSlot, custodySubnetCount); err != nil {
return errors.Wrap(err, "update custody info")
}
return nil
}
@@ -468,73 +453,6 @@ func (s *Service) removeStartupState() {
s.cfg.FinalizedStateAtStartUp = nil
}
// UpdateCustodyInfoInDB updates the custody information in the database.
// It returns the (potentially updated) custody group count and the earliest available slot.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSupernode := flags.Get().Supernode
isSemiSupernode := flags.Get().SemiSupernode
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
wasSupernode, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
if err != nil {
return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
}
// Compute the target custody group count based on current flag configuration.
targetCustodyGroupCount := custodyRequirement
// Supernode: custody all groups (either currently set or previously enabled)
if isSupernode {
targetCustodyGroupCount = cfg.NumberOfCustodyGroups
}
// Semi-supernode: custody minimum needed for reconstruction, or custody requirement if higher
if isSemiSupernode {
semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
if err != nil {
return 0, 0, errors.Wrap(err, "minimum custody group count")
}
targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
}
// Safely compute the fulu fork slot.
fuluForkSlot, err := fuluForkSlot()
if err != nil {
return 0, 0, errors.Wrap(err, "fulu fork slot")
}
// If slot is before the fulu fork slot, then use the earliest stored slot as the reference slot.
if slot < fuluForkSlot {
slot, err = s.cfg.BeaconDB.EarliestSlot(s.ctx)
if err != nil {
return 0, 0, errors.Wrap(err, "earliest slot")
}
}
earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}
if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,
"target": cfg.NumberOfCustodyGroups,
}).Info("Supernode mode enabled. Will custody all data columns going forward.")
}
if wasSupernode && !isSupernode {
log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
}
return earliestAvailableSlot, actualCustodyGroupCount, nil
}
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {
currentTime := prysmTime.Now()
if currentTime.After(genesisTime) {
@@ -551,19 +469,3 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
go slots.CountdownToGenesis(ctx, genesisTime, uint64(gState.NumValidators()), gRoot)
}
func fuluForkSlot() (primitives.Slot, error) {
cfg := params.BeaconConfig()
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
if err != nil {
return 0, errors.Wrap(err, "epoch start")
}
return forkFuluSlot, nil
}

View File

@@ -23,11 +23,9 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -596,218 +594,3 @@ func TestNotifyIndex(t *testing.T) {
t.Errorf("Notifier channel did not receive the index")
}
}
func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
t.Run("CGC increases before fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("CGC increases after fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("Supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// Try to downgrade by removing flag
gFlags.Supernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should still be supernode
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
})
t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Try to downgrade by removing flag
gFlags.SemiSupernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// UpdateCustodyInfo should prevent downgrade - custody count should remain at 64
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, semiSupernodeCustody, actualCgc) // Still 64 due to downgrade prevention by UpdateCustodyInfo
})
t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Start with semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Upgrade to full supernode
gFlags.SemiSupernode = false
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should upgrade to full supernode
upgradeSlot := slot + 2
actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
require.NoError(t, err)
require.Equal(t, upgradeSlot, actualEas) // Earliest slot updates when upgrading
require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to 128
})
t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Mock a high custody requirement (simulating many validators)
// We need to override the custody requirement calculation
// For this test, we'll verify the logic by checking if custodyRequirement > 64
// Since custodyRequirement in minimalTestService is 4, we can't test the high case here
// This would require a different test setup with actual validators
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
// With low validator requirements (4), should use semi-supernode minimum (64)
require.Equal(t, semiSupernodeCustody, actualCgc)
})
}

View File

@@ -26,7 +26,6 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -213,12 +212,7 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
if features.Get().LowValcountSweep {
bound := min(uint64(st.NumValidators()), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex += primitives.ValidatorIndex(bound)
} else {
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1

View File

@@ -0,0 +1,54 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"bid.go",
"pending_payment.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"bid_test.go",
"pending_payment_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/validator-client:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -0,0 +1,193 @@
package gloas
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// ProcessExecutionPayloadBid processes a signed execution payload bid in the Gloas fork.
// Spec v1.7.0-alpha.0 (pseudocode):
// process_execution_payload_bid(state: BeaconState, block: BeaconBlock):
//
// signed_bid = block.body.signed_execution_payload_bid
// bid = signed_bid.message
// builder_index = bid.builder_index
// amount = bid.value
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// assert amount == 0
// assert signed_bid.signature == G2_POINT_AT_INFINITY
// else:
// assert is_active_builder(state, builder_index)
// assert can_builder_cover_bid(state, builder_index, amount)
// assert verify_execution_payload_bid_signature(state, signed_bid)
// assert bid.slot == block.slot
// assert bid.parent_block_hash == state.latest_block_hash
// assert bid.parent_block_root == block.parent_root
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state))
// if amount > 0:
// state.builder_pending_payments[...] = BuilderPendingPayment(weight=0, withdrawal=BuilderPendingWithdrawal(fee_recipient=bid.fee_recipient, amount=amount, builder_index=builder_index))
// state.latest_execution_payload_bid = bid
func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) error {
signedBid, err := block.Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "failed to get signed execution payload bid")
}
wrappedBid, err := blocks.WrappedROSignedExecutionPayloadBid(signedBid)
if err != nil {
return errors.Wrap(err, "failed to wrap signed bid")
}
bid, err := wrappedBid.Bid()
if err != nil {
return errors.Wrap(err, "failed to get bid from wrapped bid")
}
builderIndex := bid.BuilderIndex()
amount := bid.Value()
if builderIndex == params.BeaconConfig().BuilderIndexSelfBuild {
if amount != 0 {
return fmt.Errorf("self-build amount must be zero, got %d", amount)
}
if wrappedBid.Signature() != common.InfiniteSignature {
return errors.New("self-build signature must be point at infinity")
}
} else {
ok, err := st.IsActiveBuilder(builderIndex)
if err != nil {
return errors.Wrap(err, "builder active check failed")
}
if !ok {
return fmt.Errorf("builder %d is not active", builderIndex)
}
ok, err = st.CanBuilderCoverBid(builderIndex, amount)
if err != nil {
return errors.Wrap(err, "builder balance check failed")
}
if !ok {
return fmt.Errorf("builder %d cannot cover bid amount %d", builderIndex, amount)
}
if err := validatePayloadBidSignature(st, wrappedBid); err != nil {
return errors.Wrap(err, "bid signature validation failed")
}
}
if err := validateBidConsistency(st, bid, block); err != nil {
return errors.Wrap(err, "bid consistency validation failed")
}
if amount > 0 {
feeRecipient := bid.FeeRecipient()
pendingPayment := &ethpb.BuilderPendingPayment{
Weight: 0,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: feeRecipient[:],
Amount: amount,
BuilderIndex: builderIndex,
},
}
slotIndex := params.BeaconConfig().SlotsPerEpoch + (bid.Slot() % params.BeaconConfig().SlotsPerEpoch)
if err := st.SetBuilderPendingPayment(slotIndex, pendingPayment); err != nil {
return errors.Wrap(err, "failed to set pending payment")
}
}
if err := st.SetExecutionPayloadBid(bid); err != nil {
return errors.Wrap(err, "failed to cache execution payload bid")
}
return nil
}
// validateBidConsistency checks that the bid is consistent with the current beacon state.
func validateBidConsistency(st state.BeaconState, bid interfaces.ROExecutionPayloadBid, block interfaces.ReadOnlyBeaconBlock) error {
if bid.Slot() != block.Slot() {
return fmt.Errorf("bid slot %d does not match block slot %d", bid.Slot(), block.Slot())
}
latestBlockHash, err := st.LatestBlockHash()
if err != nil {
return errors.Wrap(err, "failed to get latest block hash")
}
if bid.ParentBlockHash() != latestBlockHash {
return fmt.Errorf("bid parent block hash mismatch: got %x, expected %x",
bid.ParentBlockHash(), latestBlockHash)
}
if bid.ParentBlockRoot() != block.ParentRoot() {
return fmt.Errorf("bid parent block root mismatch: got %x, expected %x",
bid.ParentBlockRoot(), block.ParentRoot())
}
randaoMix, err := helpers.RandaoMix(st, slots.ToEpoch(st.Slot()))
if err != nil {
return errors.Wrap(err, "failed to get randao mix")
}
if bid.PrevRandao() != [32]byte(randaoMix) {
return fmt.Errorf("bid prev randao mismatch: got %x, expected %x", bid.PrevRandao(), randaoMix)
}
return nil
}
// validatePayloadBidSignature verifies the BLS signature on a signed execution payload bid.
// It validates that the signature was created by the builder specified in the bid
// using the appropriate domain for the beacon builder.
func validatePayloadBidSignature(st state.ReadOnlyBeaconState, signedBid interfaces.ROSignedExecutionPayloadBid) error {
bid, err := signedBid.Bid()
if err != nil {
return errors.Wrap(err, "failed to get bid")
}
pubkey, err := st.BuilderPubkey(bid.BuilderIndex())
if err != nil {
return errors.Wrap(err, "failed to get builder pubkey")
}
publicKey, err := bls.PublicKeyFromBytes(pubkey[:])
if err != nil {
return errors.Wrap(err, "invalid builder public key")
}
signatureBytes := signedBid.Signature()
signature, err := bls.SignatureFromBytes(signatureBytes[:])
if err != nil {
return errors.Wrap(err, "invalid signature format")
}
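// The signing domain is derived at the epoch of the bid's slot, using the beacon builder domain type.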
currentEpoch := slots.ToEpoch(bid.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return errors.Wrap(err, "failed to compute signing domain")
}
signingRoot, err := signedBid.SigningRoot(domain)
if err != nil {
return errors.Wrap(err, "failed to compute signing root")
}
if !signature.Verify(publicKey, signingRoot[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}

View File

@@ -0,0 +1,633 @@
package gloas
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
validatorpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1/validator-client"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
fastssz "github.com/prysmaticlabs/fastssz"
"google.golang.org/protobuf/proto"
)
type stubBlockBody struct {
signedBid *ethpb.SignedExecutionPayloadBid
}
func (s stubBlockBody) Version() int { return version.Gloas }
func (s stubBlockBody) RandaoReveal() [96]byte { return [96]byte{} }
func (s stubBlockBody) Eth1Data() *ethpb.Eth1Data { return nil }
func (s stubBlockBody) Graffiti() [32]byte { return [32]byte{} }
func (s stubBlockBody) ProposerSlashings() []*ethpb.ProposerSlashing { return nil }
func (s stubBlockBody) AttesterSlashings() []ethpb.AttSlashing { return nil }
func (s stubBlockBody) Attestations() []ethpb.Att { return nil }
func (s stubBlockBody) Deposits() []*ethpb.Deposit { return nil }
func (s stubBlockBody) VoluntaryExits() []*ethpb.SignedVoluntaryExit { return nil }
func (s stubBlockBody) SyncAggregate() (*ethpb.SyncAggregate, error) { return nil, nil }
func (s stubBlockBody) IsNil() bool { return s.signedBid == nil }
func (s stubBlockBody) HashTreeRoot() ([32]byte, error) { return [32]byte{}, nil }
func (s stubBlockBody) Proto() (proto.Message, error) { return nil, nil }
func (s stubBlockBody) Execution() (interfaces.ExecutionData, error) { return nil, nil }
func (s stubBlockBody) BLSToExecutionChanges() ([]*ethpb.SignedBLSToExecutionChange, error) {
return nil, nil
}
func (s stubBlockBody) BlobKzgCommitments() ([][]byte, error) { return nil, nil }
func (s stubBlockBody) ExecutionRequests() (*enginev1.ExecutionRequests, error) {
return nil, nil
}
func (s stubBlockBody) PayloadAttestations() ([]*ethpb.PayloadAttestation, error) {
return nil, nil
}
func (s stubBlockBody) SignedExecutionPayloadBid() (*ethpb.SignedExecutionPayloadBid, error) {
return s.signedBid, nil
}
func (s stubBlockBody) MarshalSSZ() ([]byte, error) { return nil, nil }
func (s stubBlockBody) MarshalSSZTo([]byte) ([]byte, error) { return nil, nil }
func (s stubBlockBody) UnmarshalSSZ([]byte) error { return nil }
func (s stubBlockBody) SizeSSZ() int { return 0 }
type stubBlock struct {
slot primitives.Slot
proposer primitives.ValidatorIndex
parentRoot [32]byte
body stubBlockBody
v int
}
var (
_ interfaces.ReadOnlyBeaconBlockBody = (*stubBlockBody)(nil)
_ interfaces.ReadOnlyBeaconBlock = (*stubBlock)(nil)
)
func (s stubBlock) Slot() primitives.Slot { return s.slot }
func (s stubBlock) ProposerIndex() primitives.ValidatorIndex { return s.proposer }
func (s stubBlock) ParentRoot() [32]byte { return s.parentRoot }
func (s stubBlock) StateRoot() [32]byte { return [32]byte{} }
func (s stubBlock) Body() interfaces.ReadOnlyBeaconBlockBody { return s.body }
func (s stubBlock) IsNil() bool { return false }
func (s stubBlock) IsBlinded() bool { return false }
func (s stubBlock) HashTreeRoot() ([32]byte, error) { return [32]byte{}, nil }
func (s stubBlock) Proto() (proto.Message, error) { return nil, nil }
func (s stubBlock) MarshalSSZ() ([]byte, error) { return nil, nil }
func (s stubBlock) MarshalSSZTo([]byte) ([]byte, error) { return nil, nil }
func (s stubBlock) UnmarshalSSZ([]byte) error { return nil }
func (s stubBlock) SizeSSZ() int { return 0 }
func (s stubBlock) Version() int { return s.v }
func (s stubBlock) AsSignRequestObject() (validatorpb.SignRequestObject, error) {
return nil, nil
}
func (s stubBlock) HashTreeRootWith(*fastssz.Hasher) error { return nil }
func buildGloasState(t *testing.T, slot primitives.Slot, proposerIdx primitives.ValidatorIndex, builderIdx primitives.BuilderIndex, balance uint64, randao [32]byte, latestHash [32]byte, builderPubkey [48]byte) *state_native.BeaconState {
t.Helper()
cfg := params.BeaconConfig()
blockRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
stateRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
for i := range blockRoots {
blockRoots[i] = bytes.Repeat([]byte{0xAA}, 32)
stateRoots[i] = bytes.Repeat([]byte{0xBB}, 32)
}
randaoMixes := make([][]byte, cfg.EpochsPerHistoricalVector)
for i := range randaoMixes {
randaoMixes[i] = randao[:]
}
withdrawalCreds := make([]byte, 32)
withdrawalCreds[0] = cfg.BuilderWithdrawalPrefixByte
validatorCount := int(proposerIdx) + 1
validators := make([]*ethpb.Validator, validatorCount)
balances := make([]uint64, validatorCount)
for i := range validatorCount {
validators[i] = &ethpb.Validator{
PublicKey: builderPubkey[:],
WithdrawalCredentials: withdrawalCreds,
EffectiveBalance: balance,
Slashed: false,
ActivationEligibilityEpoch: 0,
ActivationEpoch: 0,
ExitEpoch: cfg.FarFutureEpoch,
WithdrawableEpoch: cfg.FarFutureEpoch,
}
balances[i] = balance
}
payments := make([]*ethpb.BuilderPendingPayment, cfg.SlotsPerEpoch*2)
for i := range payments {
payments[i] = &ethpb.BuilderPendingPayment{Withdrawal: &ethpb.BuilderPendingWithdrawal{}}
}
var builders []*ethpb.Builder
if builderIdx != params.BeaconConfig().BuilderIndexSelfBuild {
builderCount := int(builderIdx) + 1
builders = make([]*ethpb.Builder, builderCount)
builders[builderCount-1] = &ethpb.Builder{
Pubkey: builderPubkey[:],
Version: []byte{0},
ExecutionAddress: bytes.Repeat([]byte{0x01}, 20),
Balance: primitives.Gwei(balance),
DepositEpoch: 0,
WithdrawableEpoch: cfg.FarFutureEpoch,
}
}
stProto := &ethpb.BeaconStateGloas{
Slot: slot,
GenesisValidatorsRoot: bytes.Repeat([]byte{0x11}, 32),
Fork: &ethpb.Fork{
CurrentVersion: bytes.Repeat([]byte{0x22}, 4),
PreviousVersion: bytes.Repeat([]byte{0x22}, 4),
Epoch: 0,
},
BlockRoots: blockRoots,
StateRoots: stateRoots,
RandaoMixes: randaoMixes,
Validators: validators,
Balances: balances,
LatestBlockHash: latestHash[:],
BuilderPendingPayments: payments,
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{},
Builders: builders,
FinalizedCheckpoint: &ethpb.Checkpoint{
Epoch: 1,
},
}
st, err := state_native.InitializeFromProtoGloas(stProto)
require.NoError(t, err)
return st.(*state_native.BeaconState)
}
func signBid(t *testing.T, sk common.SecretKey, bid *ethpb.ExecutionPayloadBid, fork *ethpb.Fork, genesisRoot [32]byte) [96]byte {
t.Helper()
epoch := slots.ToEpoch(primitives.Slot(bid.Slot))
domain, err := signing.Domain(fork, epoch, params.BeaconConfig().DomainBeaconBuilder, genesisRoot[:])
require.NoError(t, err)
root, err := signing.ComputeSigningRoot(bid, domain)
require.NoError(t, err)
sig := sk.Sign(root[:]).Marshal()
var out [96]byte
copy(out[:], sig)
return out
}
func TestProcessExecutionPayloadBid_SelfBuildSuccess(t *testing.T) {
slot := primitives.Slot(12)
proposerIdx := primitives.ValidatorIndex(0)
builderIdx := params.BeaconConfig().BuilderIndexSelfBuild
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
pubKey := [48]byte{}
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: common.InfiniteSignature[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
require.NoError(t, ProcessExecutionPayloadBid(state, block))
stateProto, ok := state.ToProto().(*ethpb.BeaconStateGloas)
require.Equal(t, true, ok)
slotIndex := params.BeaconConfig().SlotsPerEpoch + (slot % params.BeaconConfig().SlotsPerEpoch)
require.Equal(t, primitives.Gwei(0), stateProto.BuilderPendingPayments[slotIndex].Withdrawal.Amount)
}
func TestProcessExecutionPayloadBid_SelfBuildNonZeroAmountFails(t *testing.T) {
slot := primitives.Slot(2)
proposerIdx := primitives.ValidatorIndex(0)
builderIdx := params.BeaconConfig().BuilderIndexSelfBuild
randao := [32]byte{}
latestHash := [32]byte{1}
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, [48]byte{})
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: common.InfiniteSignature[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err := ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "self-build amount must be zero", err)
}
func TestProcessExecutionPayloadBid_PendingPaymentAndCacheBid(t *testing.T) {
slot := primitives.Slot(8)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
pub := sk.PublicKey().Marshal()
var pubKey [48]byte
copy(pubKey[:], pub)
balance := params.BeaconConfig().MinActivationBalance + 1_000_000
state := buildGloasState(t, slot, proposerIdx, builderIdx, balance, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 500_000,
ExecutionPayment: 1,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: sig[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx, // not self-build
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
require.NoError(t, ProcessExecutionPayloadBid(state, block))
stateProto, ok := state.ToProto().(*ethpb.BeaconStateGloas)
require.Equal(t, true, ok)
slotIndex := params.BeaconConfig().SlotsPerEpoch + (slot % params.BeaconConfig().SlotsPerEpoch)
require.Equal(t, primitives.Gwei(500_000), stateProto.BuilderPendingPayments[slotIndex].Withdrawal.Amount)
require.NotNil(t, stateProto.LatestExecutionPayloadBid)
require.Equal(t, primitives.BuilderIndex(1), stateProto.LatestExecutionPayloadBid.BuilderIndex)
require.Equal(t, primitives.Gwei(500_000), stateProto.LatestExecutionPayloadBid.Value)
}
func TestProcessExecutionPayloadBid_BuilderNotActive(t *testing.T) {
slot := primitives.Slot(4)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0x01}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0x02}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
// Make builder inactive by setting withdrawable_epoch.
stateProto := state.ToProto().(*ethpb.BeaconStateGloas)
stateProto.Builders[int(builderIdx)].WithdrawableEpoch = 0
stateIface, err := state_native.InitializeFromProtoGloas(stateProto)
require.NoError(t, err)
state = stateIface.(*state_native.BeaconState)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x03}, 32),
BlockHash: bytes.Repeat([]byte{0x04}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x05}, 32),
FeeRecipient: bytes.Repeat([]byte{0x06}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "is not active", err)
}
func TestProcessExecutionPayloadBid_CannotCoverBid(t *testing.T) {
slot := primitives.Slot(5)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0x0A}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0x0B}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+10, randao, latestHash, pubKey)
stateProto := state.ToProto().(*ethpb.BeaconStateGloas)
// Add pending balances to push below required balance.
stateProto.BuilderPendingWithdrawals = []*ethpb.BuilderPendingWithdrawal{
{Amount: 15, BuilderIndex: builderIdx},
}
stateProto.BuilderPendingPayments = []*ethpb.BuilderPendingPayment{
{Withdrawal: &ethpb.BuilderPendingWithdrawal{Amount: 20, BuilderIndex: builderIdx}},
}
stateIface, err := state_native.InitializeFromProtoGloas(stateProto)
require.NoError(t, err)
state = stateIface.(*state_native.BeaconState)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 25,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "cannot cover bid amount", err)
}
func TestProcessExecutionPayloadBid_InvalidSignature(t *testing.T) {
slot := primitives.Slot(6)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
// Use an invalid signature.
invalidSig := [96]byte{1}
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: invalidSig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "bid signature validation failed", err)
}
func TestProcessExecutionPayloadBid_SlotMismatch(t *testing.T) {
slot := primitives.Slot(10)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot + 1, // mismatch
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "bid slot", err)
}
func TestProcessExecutionPayloadBid_ParentHashMismatch(t *testing.T) {
slot := primitives.Slot(11)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: bytes.Repeat([]byte{0x11}, 32), // mismatch
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "parent block hash mismatch", err)
}
func TestProcessExecutionPayloadBid_ParentRootMismatch(t *testing.T) {
slot := primitives.Slot(12)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
parentRoot := bytes.Repeat([]byte{0x22}, 32)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: parentRoot,
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bytes.Repeat([]byte{0x99}, 32)), // mismatch
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "parent block root mismatch", err)
}
func TestProcessExecutionPayloadBid_PrevRandaoMismatch(t *testing.T) {
slot := primitives.Slot(13)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: bytes.Repeat([]byte{0x01}, 32), // mismatch
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "prev randao mismatch", err)
}

View File

@@ -0,0 +1,76 @@
package gloas
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
// ProcessBuilderPendingPayments processes the builder pending payments from the previous epoch.
// Spec v1.7.0-alpha.0 (pseudocode):
// def process_builder_pending_payments(state: BeaconState) -> None:
//
// quorum = get_builder_payment_quorum_threshold(state)
// for payment in state.builder_pending_payments[:SLOTS_PER_EPOCH]:
// if payment.weight >= quorum:
// state.builder_pending_withdrawals.append(payment.withdrawal)
//
// old_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:]
// new_payments = [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
// state.builder_pending_payments = old_payments + new_payments
func ProcessBuilderPendingPayments(state state.BeaconState) error {
quorum, err := builderQuorumThreshold(state)
if err != nil {
return errors.Wrap(err, "could not compute builder payment quorum threshold")
}
payments, err := state.BuilderPendingPayments()
if err != nil {
return errors.Wrap(err, "could not get builder pending payments")
}
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
var withdrawals []*ethpb.BuilderPendingWithdrawal
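// Only the first SLOTS_PER_EPOCH entries, which cover the previous epoch, are scanned; payments whose weight reached quorum graduate into pending withdrawals.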
for _, payment := range payments[:slotsPerEpoch] {
if quorum > payment.Weight {
continue
}
withdrawals = append(withdrawals, payment.Withdrawal)
}
if err := state.AppendBuilderPendingWithdrawals(withdrawals); err != nil {
return errors.Wrap(err, "could not append builder pending withdrawals")
}
if err := state.RotateBuilderPendingPayments(); err != nil {
return errors.Wrap(err, "could not rotate builder pending payments")
}
return nil
}
// builderQuorumThreshold calculates the quorum threshold for builder payments.
// Spec v1.7.0-alpha.0 (pseudocode):
// def get_builder_payment_quorum_threshold(state: BeaconState) -> uint64:
//
// per_slot_balance = get_total_active_balance(state) // SLOTS_PER_EPOCH
// quorum = per_slot_balance * BUILDER_PAYMENT_THRESHOLD_NUMERATOR
// return uint64(quorum // BUILDER_PAYMENT_THRESHOLD_DENOMINATOR)
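// As a purely illustrative example with hypothetical numbers: with 32 slots per epoch,
// a total active balance of 3_200_000 Gwei, and a 6000/10000 threshold, the per-slot
// balance is 100_000 Gwei and the quorum is 100_000 * 6000 / 10000 = 60_000 Gwei.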
func builderQuorumThreshold(state state.ReadOnlyBeaconState) (primitives.Gwei, error) {
activeBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return 0, errors.Wrap(err, "could not get total active balance")
}
cfg := params.BeaconConfig()
slotsPerEpoch := uint64(cfg.SlotsPerEpoch)
numerator := cfg.BuilderPaymentThresholdNumerator
denominator := cfg.BuilderPaymentThresholdDenominator
activeBalancePerSlot := activeBalance / slotsPerEpoch
quorum := (activeBalancePerSlot * numerator) / denominator
return primitives.Gwei(quorum), nil
}

View File

@@ -0,0 +1,119 @@
package gloas
import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestBuilderQuorumThreshold(t *testing.T) {
helpers.ClearCache()
cfg := params.BeaconConfig()
validators := []*ethpb.Validator{
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
}
st, err := state_native.InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{Validators: validators})
require.NoError(t, err)
got, err := builderQuorumThreshold(st)
require.NoError(t, err)
total := uint64(len(validators)) * cfg.MaxEffectiveBalance
perSlot := total / uint64(cfg.SlotsPerEpoch)
want := (perSlot * cfg.BuilderPaymentThresholdNumerator) / cfg.BuilderPaymentThresholdDenominator
require.Equal(t, primitives.Gwei(want), got)
}
func TestProcessBuilderPendingPayments(t *testing.T) {
helpers.ClearCache()
cfg := params.BeaconConfig()
buildPayments := func(weights ...primitives.Gwei) []*ethpb.BuilderPendingPayment {
p := make([]*ethpb.BuilderPendingPayment, 2*int(cfg.SlotsPerEpoch))
for i := range p {
p[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{FeeRecipient: make([]byte, 20)},
}
}
for i, w := range weights {
p[i].Weight = w
p[i].Withdrawal.Amount = 1
}
return p
}
validators := []*ethpb.Validator{
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
}
pbSt, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
total := uint64(len(validators)) * cfg.MaxEffectiveBalance
perSlot := total / uint64(cfg.SlotsPerEpoch)
quorum := (perSlot * cfg.BuilderPaymentThresholdNumerator) / cfg.BuilderPaymentThresholdDenominator
slotsPerEpoch := int(cfg.SlotsPerEpoch)
t.Run("append qualifying withdrawals", func(t *testing.T) {
payments := buildPayments(primitives.Gwei(quorum+1), primitives.Gwei(quorum+2))
st := &testProcessState{BeaconState: pbSt, payments: payments}
require.NoError(t, ProcessBuilderPendingPayments(st))
require.Equal(t, 2, len(st.withdrawals))
require.Equal(t, payments[0].Withdrawal, st.withdrawals[0])
require.Equal(t, payments[1].Withdrawal, st.withdrawals[1])
require.Equal(t, 2*slotsPerEpoch, len(st.payments))
for i := slotsPerEpoch; i < 2*slotsPerEpoch; i++ {
require.Equal(t, primitives.Gwei(0), st.payments[i].Weight)
require.Equal(t, primitives.Gwei(0), st.payments[i].Withdrawal.Amount)
require.Equal(t, 20, len(st.payments[i].Withdrawal.FeeRecipient))
}
})
t.Run("no withdrawals when below quorum", func(t *testing.T) {
payments := buildPayments(primitives.Gwei(quorum - 1))
st := &testProcessState{BeaconState: pbSt, payments: payments}
require.NoError(t, ProcessBuilderPendingPayments(st))
require.Equal(t, 0, len(st.withdrawals))
})
}
type testProcessState struct {
state.BeaconState
payments []*ethpb.BuilderPendingPayment
withdrawals []*ethpb.BuilderPendingWithdrawal
}
func (t *testProcessState) BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error) {
return t.payments, nil
}
func (t *testProcessState) AppendBuilderPendingWithdrawals(withdrawals []*ethpb.BuilderPendingWithdrawal) error {
t.withdrawals = append(t.withdrawals, withdrawals...)
return nil
}
func (t *testProcessState) RotateBuilderPendingPayments() error {
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
rotated := slices.Clone(t.payments[slotsPerEpoch:])
for range slotsPerEpoch {
rotated = append(rotated, &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
})
}
t.payments = rotated
return nil
}

View File

@@ -89,6 +89,7 @@ type NoHeadAccessDatabase interface {
SaveBlocks(ctx context.Context, blocks []interfaces.ReadOnlySignedBeaconBlock) error
SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error
SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) error
SlotByBlockRoot(context.Context, [32]byte) (primitives.Slot, error)
// State related methods.
SaveState(ctx context.Context, state state.ReadOnlyBeaconState, blockRoot [32]byte) error
SaveStates(ctx context.Context, states []state.ReadOnlyBeaconState, blockRoots [][32]byte) error
@@ -96,6 +97,7 @@ type NoHeadAccessDatabase interface {
DeleteStates(ctx context.Context, blockRoots [][32]byte) error
SaveStateSummary(ctx context.Context, summary *ethpb.StateSummary) error
SaveStateSummaries(ctx context.Context, summaries []*ethpb.StateSummary) error
SlotInDiffTree(primitives.Slot) (uint64, int, error)
// Checkpoint operations.
SaveJustifiedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error
SaveFinalizedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error

View File

@@ -32,6 +32,7 @@ go_library(
"state_diff_helpers.go",
"state_summary.go",
"state_summary_cache.go",
"testing_helpers.go",
"utils.go",
"validated_checkpoint.go",
"wss.go",

View File

@@ -1053,6 +1053,10 @@ func (s *Store) getStateUsingStateDiff(ctx context.Context, blockRoot [32]byte)
return nil, err
}
if uint64(slot) < s.getOffset() {
return nil, ErrSlotBeforeOffset
}
st, err := s.stateByDiff(ctx, slot)
if err != nil {
return nil, err
@@ -1070,6 +1074,10 @@ func (s *Store) hasStateUsingStateDiff(ctx context.Context, blockRoot [32]byte)
return false, err
}
if uint64(slot) < s.getOffset() {
return false, ErrSlotBeforeOffset
}
stateLvl := computeLevel(s.getOffset(), slot)
return stateLvl != -1, nil
}

View File

@@ -23,6 +23,16 @@ const (
The data at level 0 is saved every 2**exponent[0] slots and always contains a full state snapshot that is used as a base for the delta saved at other levels.
*/
// SlotInDiffTree returns the diff-tree offset and the level at which the given slot is saved.
// A level of -1 means the slot is not a saving point; ErrSlotBeforeOffset is returned when the slot precedes the offset.
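// For example, assuming the default state-diff exponents used in the tests, a slot 2**21 slots past the offset maps to level 0 (a full snapshot), while a slot before the offset yields ErrSlotBeforeOffset.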
func (s *Store) SlotInDiffTree(slot primitives.Slot) (uint64, int, error) {
offset := s.getOffset()
if uint64(slot) < offset {
return 0, -1, ErrSlotBeforeOffset
}
return offset, computeLevel(offset, slot), nil
}
// saveStateByDiff takes a state and decides between saving a full state snapshot or a diff.
func (s *Store) saveStateByDiff(ctx context.Context, st state.ReadOnlyBeaconState) error {
_, span := trace.StartSpan(ctx, "BeaconDB.saveStateByDiff")
@@ -33,13 +43,10 @@ func (s *Store) saveStateByDiff(ctx context.Context, st state.ReadOnlyBeaconStat
}
slot := st.Slot()
offset, lvl, err := s.SlotInDiffTree(slot)
if err != nil {
return errors.Wrap(err, "could not determine if slot is in diff tree")
}
if lvl == -1 {
return nil
}

View File

@@ -25,7 +25,7 @@ func newStateDiffCache(s *Store) (*stateDiffCache, error) {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get([]byte("offset"))
offsetBytes := bucket.Get(offsetKey)
if offsetBytes == nil {
return errors.New("state diff cache: offset not found")
}

View File

@@ -19,7 +19,7 @@ import (
var (
offsetKey = []byte("offset")
ErrSlotBeforeOffset = errors.New("slot is before root offset")
ErrSlotBeforeOffset = errors.New("slot is before state-diff root offset")
)
func makeKeyForStateDiffTree(level int, slot uint64) []byte {
@@ -73,6 +73,9 @@ func (s *Store) getAnchorState(offset uint64, lvl int, slot primitives.Slot) (an
// computeLevel computes the level in the diff tree. It returns -1 if the slot is not a saving point in the tree.
func computeLevel(offset uint64, slot primitives.Slot) int {
if uint64(slot) < offset {
return -1
}
rel := uint64(slot) - offset
for i, exp := range flags.Get().StateDiffExponents {
if exp < 2 || exp >= 64 {

View File

@@ -43,8 +43,12 @@ func TestStateDiff_ComputeLevel(t *testing.T) {
offset := db.getOffset()
// should be -1. slot < offset
lvl := computeLevel(10, primitives.Slot(9))
require.Equal(t, -1, lvl)
// 2 ** 21
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(21)))
require.Equal(t, 0, lvl)
// 2 ** 21 * 3

View File

@@ -1395,6 +1395,23 @@ func TestStore_CanSaveRetrieveStateUsingStateDiff(t *testing.T) {
require.IsNil(t, readSt)
})
t.Run("slot before offset", func(t *testing.T) {
db := setupDB(t)
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 10)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 9, Root: r[:]}
err = db.SaveStateSummary(t.Context(), ss)
require.NoError(t, err)
st, err := db.getStateUsingStateDiff(t.Context(), r)
require.ErrorIs(t, err, ErrSlotBeforeOffset)
require.IsNil(t, st)
})
t.Run("Full state snapshot", func(t *testing.T) {
t.Run("using state summary", func(t *testing.T) {
for v := range version.All() {
@@ -1627,4 +1644,21 @@ func TestStore_HasStateUsingStateDiff(t *testing.T) {
}
})
t.Run("slot before offset", func(t *testing.T) {
db := setupDB(t)
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 10)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'B'})
ss := &ethpb.StateSummary{Slot: 0, Root: r[:]}
err = db.SaveStateSummary(t.Context(), ss)
require.NoError(t, err)
hasState, err := db.hasStateUsingStateDiff(t.Context(), r)
require.ErrorIs(t, err, ErrSlotBeforeOffset)
require.Equal(t, false, hasState)
})
}

View File

@@ -0,0 +1,37 @@
package kv
import (
"encoding/binary"
"testing"
"go.etcd.io/bbolt"
)
// InitStateDiffCacheForTesting initializes the state diff cache with the given offset.
// This is intended for testing purposes when setting up state diff after database creation.
// This file is only compiled when the "testing" build tag is set.
func (s *Store) InitStateDiffCacheForTesting(t testing.TB, offset uint64) error {
// First, set the offset in the database.
err := s.db.Update(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := make([]byte, 8)
binary.LittleEndian.PutUint64(offsetBytes, offset)
return bucket.Put([]byte("offset"), offsetBytes)
})
if err != nil {
return err
}
// Then create the state diff cache.
sdCache, err := newStateDiffCache(s)
if err != nil {
return err
}
s.stateDiffCache = sdCache
return nil
}

View File

@@ -49,6 +49,7 @@ func (s *Service) prepareForkChoiceAtts() {
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
ticker.Done()
return
}
}

View File

@@ -132,7 +132,7 @@ func TestGetSpec(t *testing.T) {
config.MinSyncCommitteeParticipants = 71
config.ProposerReorgCutoffBPS = primitives.BP(121)
config.AttestationDueBPS = primitives.BP(122)
config.AggregateDueBPS = primitives.BP(123)
config.ContributionDueBPS = primitives.BP(124)
config.TerminalBlockHash = common.HexToHash("TerminalBlockHash")
config.TerminalBlockHashActivationEpoch = 72
@@ -168,6 +168,10 @@ func TestGetSpec(t *testing.T) {
config.BlobsidecarSubnetCount = 101
config.BlobsidecarSubnetCountElectra = 102
config.SyncMessageDueBPS = 103
config.BuilderWithdrawalPrefixByte = byte('b')
config.BuilderIndexSelfBuild = primitives.BuilderIndex(125)
config.BuilderPaymentThresholdNumerator = 104
config.BuilderPaymentThresholdDenominator = 105
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -190,6 +194,9 @@ func TestGetSpec(t *testing.T) {
var daap [4]byte
copy(daap[:], []byte{'0', '0', '0', '7'})
config.DomainAggregateAndProof = daap
var dbb [4]byte
copy(dbb[:], []byte{'0', '0', '0', '8'})
config.DomainBeaconBuilder = dbb
var dam [4]byte
copy(dam[:], []byte{'1', '0', '0', '0'})
config.DomainApplicationMask = dam
@@ -205,7 +212,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 180, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -419,8 +426,14 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x0a000000", v)
case "DOMAIN_APPLICATION_BUILDER":
assert.Equal(t, "0x00000001", v)
case "DOMAIN_BEACON_BUILDER":
assert.Equal(t, "0x30303038", v)
case "DOMAIN_BLOB_SIDECAR":
assert.Equal(t, "0x00000000", v)
case "BUILDER_WITHDRAWAL_PREFIX":
assert.Equal(t, "0x62", v)
case "BUILDER_INDEX_SELF_BUILD":
assert.Equal(t, "125", v)
case "TRANSITION_TOTAL_DIFFICULTY":
assert.Equal(t, "0", v)
case "TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":
@@ -457,7 +470,7 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "121", v)
case "ATTESTATION_DUE_BPS":
assert.Equal(t, "122", v)
case "AGGREGRATE_DUE_BPS":
case "AGGREGATE_DUE_BPS":
assert.Equal(t, "123", v)
case "CONTRIBUTION_DUE_BPS":
assert.Equal(t, "124", v)
@@ -577,6 +590,10 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "102", v)
case "SYNC_MESSAGE_DUE_BPS":
assert.Equal(t, "103", v)
case "BUILDER_PAYMENT_THRESHOLD_NUMERATOR":
assert.Equal(t, "104", v)
case "BUILDER_PAYMENT_THRESHOLD_DENOMINATOR":
assert.Equal(t, "105", v)
case "BLOB_SCHEDULE":
blobSchedule, ok := v.([]any)
assert.Equal(t, true, ok)

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"error.go",
"interfaces.go",
"interfaces_gloas.go",
"prometheus.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/state",

View File

@@ -66,6 +66,7 @@ type ReadOnlyBeaconState interface {
ReadOnlyDeposits
ReadOnlyConsolidations
ReadOnlyProposerLookahead
readOnlyGloasFields
ToProtoUnsafe() any
ToProto() any
GenesisTime() time.Time
@@ -101,6 +102,7 @@ type WriteOnlyBeaconState interface {
WriteOnlyWithdrawals
WriteOnlyDeposits
WriteOnlyProposerLookahead
writeOnlyGloasFields
SetGenesisTime(val time.Time) error
SetGenesisValidatorsRoot(val []byte) error
SetSlot(val primitives.Slot) error

View File

@@ -0,0 +1,22 @@
package state
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
type writeOnlyGloasFields interface {
SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error
SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error
RotateBuilderPendingPayments() error
AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal) error
}
type readOnlyGloasFields interface {
BuilderPubkey(primitives.BuilderIndex) ([48]byte, error)
IsActiveBuilder(primitives.BuilderIndex) (bool, error)
CanBuilderCoverBid(primitives.BuilderIndex, primitives.Gwei) (bool, error)
LatestBlockHash() ([32]byte, error)
BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error)
}

View File

@@ -14,6 +14,7 @@ go_library(
"getters_deposits.go",
"getters_eth1.go",
"getters_exit.go",
"getters_gloas.go",
"getters_misc.go",
"getters_participation.go",
"getters_payload_header.go",
@@ -36,6 +37,7 @@ go_library(
"setters_deposit_requests.go",
"setters_deposits.go",
"setters_eth1.go",
"setters_gloas.go",
"setters_misc.go",
"setters_participation.go",
"setters_payload_header.go",
@@ -97,6 +99,7 @@ go_test(
"getters_deposit_requests_test.go",
"getters_deposits_test.go",
"getters_exit_test.go",
"getters_gloas_test.go",
"getters_participation_test.go",
"getters_setters_lookahead_test.go",
"getters_test.go",
@@ -114,6 +117,7 @@ go_test(
"setters_deposit_requests_test.go",
"setters_deposits_test.go",
"setters_eth1_test.go",
"setters_gloas_test.go",
"setters_misc_test.go",
"setters_participation_test.go",
"setters_payload_header_test.go",

View File

@@ -0,0 +1,149 @@
package state_native
import (
"fmt"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
// LatestBlockHash returns the hash of the latest execution block.
func (b *BeaconState) LatestBlockHash() ([32]byte, error) {
if b.version < version.Gloas {
return [32]byte{}, errNotSupported("LatestBlockHash", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
if b.latestBlockHash == nil {
return [32]byte{}, nil
}
return [32]byte(b.latestBlockHash), nil
}
// BuilderPubkey returns the builder pubkey at the provided index.
func (b *BeaconState) BuilderPubkey(builderIndex primitives.BuilderIndex) ([fieldparams.BLSPubkeyLength]byte, error) {
if b.version < version.Gloas {
return [fieldparams.BLSPubkeyLength]byte{}, errNotSupported("BuilderPubkey", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
builder, err := b.builderAtIndex(builderIndex)
if err != nil {
return [fieldparams.BLSPubkeyLength]byte{}, err
}
var pk [fieldparams.BLSPubkeyLength]byte
copy(pk[:], builder.Pubkey)
return pk, nil
}
// IsActiveBuilder returns true if the builder's deposit has been finalized and the builder has not initiated an exit.
// Spec v1.7.0-alpha.0 (pseudocode):
// def is_active_builder(state: BeaconState, builder_index: BuilderIndex) -> bool:
//
// builder = state.builders[builder_index]
// return (
// builder.deposit_epoch < state.finalized_checkpoint.epoch
// and builder.withdrawable_epoch == FAR_FUTURE_EPOCH
// )
func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("IsActiveBuilder", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
builder, err := b.builderAtIndex(builderIndex)
if err != nil {
return false, err
}
finalizedEpoch := b.finalizedCheckpoint.Epoch
return builder.DepositEpoch < finalizedEpoch && builder.WithdrawableEpoch == params.BeaconConfig().FarFutureEpoch, nil
}
// CanBuilderCoverBid returns true if the builder has enough balance to cover the given bid amount.
// Spec v1.7.0-alpha.0 (pseudocode):
// def can_builder_cover_bid(state: BeaconState, builder_index: BuilderIndex, bid_amount: Gwei) -> bool:
//
// builder_balance = state.builders[builder_index].balance
// pending_withdrawals_amount = get_pending_balance_to_withdraw_for_builder(state, builder_index)
// min_balance = MIN_DEPOSIT_AMOUNT + pending_withdrawals_amount
// if builder_balance < min_balance:
// return False
// return builder_balance - min_balance >= bid_amount
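// For example (mirroring the package tests): a builder holding MIN_DEPOSIT_AMOUNT + 50 Gwei
// with 25 Gwei of pending withdrawals and payments has a 25 Gwei margin, so a 20 Gwei bid
// is covered while a 30 Gwei bid is not.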
func (b *BeaconState) CanBuilderCoverBid(builderIndex primitives.BuilderIndex, bidAmount primitives.Gwei) (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("CanBuilderCoverBid", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
builder, err := b.builderAtIndex(builderIndex)
if err != nil {
return false, err
}
pendingBalanceToWithdraw := b.builderPendingBalanceToWithdraw(builderIndex)
minBalance := params.BeaconConfig().MinDepositAmount + pendingBalanceToWithdraw
balance := uint64(builder.Balance)
if balance < minBalance {
return false, nil
}
return balance-minBalance >= uint64(bidAmount), nil
}
// builderAtIndex intentionally returns the underlying pointer without copying.
func (b *BeaconState) builderAtIndex(builderIndex primitives.BuilderIndex) (*ethpb.Builder, error) {
idx := uint64(builderIndex)
if idx >= uint64(len(b.builders)) {
return nil, fmt.Errorf("builder index %d out of range (len=%d)", builderIndex, len(b.builders))
}
builder := b.builders[idx]
if builder == nil {
return nil, fmt.Errorf("builder at index %d is nil", builderIndex)
}
return builder, nil
}
// builderPendingBalanceToWithdraw mirrors get_pending_balance_to_withdraw_for_builder in the spec,
// summing both pending withdrawals and pending payments for a builder.
func (b *BeaconState) builderPendingBalanceToWithdraw(builderIndex primitives.BuilderIndex) uint64 {
var total uint64
for _, withdrawal := range b.builderPendingWithdrawals {
if withdrawal.BuilderIndex == builderIndex {
total += uint64(withdrawal.Amount)
}
}
for _, payment := range b.builderPendingPayments {
if payment.Withdrawal.BuilderIndex == builderIndex {
total += uint64(payment.Withdrawal.Amount)
}
}
return total
}
// BuilderPendingPayments returns a copy of the builder pending payments.
func (b *BeaconState) BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error) {
if b.version < version.Gloas {
return nil, errNotSupported("BuilderPendingPayments", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
return b.builderPendingPaymentsVal(), nil
}

View File

@@ -0,0 +1,168 @@
package state_native_test
import (
"bytes"
"testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func TestLatestBlockHash(t *testing.T) {
t.Run("returns error before gloas", func(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
_, err := st.LatestBlockHash()
require.ErrorContains(t, "is not supported", err)
})
t.Run("returns zero hash when unset", func(t *testing.T) {
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{})
require.NoError(t, err)
got, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, [32]byte{}, got)
})
t.Run("returns configured hash", func(t *testing.T) {
hashBytes := bytes.Repeat([]byte{0xAB}, 32)
var want [32]byte
copy(want[:], hashBytes)
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: hashBytes,
})
require.NoError(t, err)
got, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, want, got)
})
}
func TestBuilderPubkey(t *testing.T) {
t.Run("returns error before gloas", func(t *testing.T) {
stIface, _ := util.DeterministicGenesisState(t, 1)
native, ok := stIface.(*state_native.BeaconState)
require.Equal(t, true, ok)
_, err := native.BuilderPubkey(0)
require.ErrorContains(t, "is not supported", err)
})
t.Run("returns pubkey copy", func(t *testing.T) {
pubkey := bytes.Repeat([]byte{0xAA}, 48)
stIface, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Pubkey: pubkey,
Balance: 42,
DepositEpoch: 3,
WithdrawableEpoch: 4,
},
},
})
require.NoError(t, err)
gotPk, err := stIface.BuilderPubkey(0)
require.NoError(t, err)
var wantPk [48]byte
copy(wantPk[:], pubkey)
require.Equal(t, wantPk, gotPk)
// Mutate original to ensure copy.
pubkey[0] = 0
require.Equal(t, byte(0xAA), gotPk[0])
})
t.Run("out of range returns error", func(t *testing.T) {
stIface, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{},
})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
_, err = st.BuilderPubkey(1)
require.ErrorContains(t, "out of range", err)
})
}
func TestBuilderHelpers(t *testing.T) {
t.Run("is active builder", func(t *testing.T) {
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Balance: 10,
DepositEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
},
},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1},
})
require.NoError(t, err)
active, err := st.IsActiveBuilder(0)
require.NoError(t, err)
require.Equal(t, true, active)
// Not active when withdrawable epoch is set.
stProto := &ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Balance: 10,
DepositEpoch: 0,
WithdrawableEpoch: 1,
},
},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 2},
}
stInactive, err := state_native.InitializeFromProtoGloas(stProto)
require.NoError(t, err)
active, err = stInactive.IsActiveBuilder(0)
require.NoError(t, err)
require.Equal(t, false, active)
})
t.Run("can builder cover bid", func(t *testing.T) {
stIface, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Balance: primitives.Gwei(params.BeaconConfig().MinDepositAmount + 50),
DepositEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
},
},
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{
{Amount: 10, BuilderIndex: 0},
},
BuilderPendingPayments: []*ethpb.BuilderPendingPayment{
{Withdrawal: &ethpb.BuilderPendingWithdrawal{Amount: 15, BuilderIndex: 0}},
},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1},
})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
ok, err := st.CanBuilderCoverBid(0, 20)
require.NoError(t, err)
require.Equal(t, true, ok)
ok, err = st.CanBuilderCoverBid(0, 30)
require.NoError(t, err)
require.Equal(t, false, ok)
})
}
func TestBuilderPendingPayments_UnsupportedVersion(t *testing.T) {
stIface, err := state_native.InitializeFromProtoElectra(&ethpb.BeaconStateElectra{})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
_, err = st.BuilderPendingPayments()
require.ErrorContains(t, "BuilderPendingPayments", err)
}

View File

@@ -725,3 +725,13 @@ func ProtobufBeaconStateFulu(s any) (*ethpb.BeaconStateFulu, error) {
}
return pbState, nil
}
// ProtobufBeaconStateGloas transforms an input into beacon state Gloas in the form of protobuf.
// Error is returned if the input is not type protobuf beacon state.
func ProtobufBeaconStateGloas(s any) (*ethpb.BeaconStateGloas, error) {
pbState, ok := s.(*ethpb.BeaconStateGloas)
if !ok {
return nil, errors.New("input is not type pb.BeaconStateGloas")
}
return pbState, nil
}

View File

@@ -0,0 +1,126 @@
package state_native
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
// RotateBuilderPendingPayments rotates the queue by dropping SLOTS_PER_EPOCH payments from the
// front and appending SLOTS_PER_EPOCH empty payments to the end.
// This implements: state.builder_pending_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:] + [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
func (b *BeaconState) RotateBuilderPendingPayments() error {
if b.version < version.Gloas {
return errNotSupported("RotateBuilderPendingPayments", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
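// Shift the second epoch's payments into the first half, then reset the second half to empty payments; equivalent to the spec's slice concatenation above.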
copy(b.builderPendingPayments[:slotsPerEpoch], b.builderPendingPayments[slotsPerEpoch:2*slotsPerEpoch])
for i := slotsPerEpoch; i < primitives.Slot(len(b.builderPendingPayments)); i++ {
b.builderPendingPayments[i] = emptyPayment()
}
b.markFieldAsDirty(types.BuilderPendingPayments)
b.rebuildTrie[types.BuilderPendingPayments] = true
return nil
}
// AppendBuilderPendingWithdrawals appends builder pending withdrawals to the beacon state.
// If the state's pending withdrawals slice is shared with another state, it is copied first so other references are not mutated.
func (b *BeaconState) AppendBuilderPendingWithdrawals(withdrawals []*ethpb.BuilderPendingWithdrawal) error {
if b.version < version.Gloas {
return errNotSupported("AppendBuilderPendingWithdrawals", b.version)
}
if len(withdrawals) == 0 {
return nil
}
b.lock.Lock()
defer b.lock.Unlock()
pendingWithdrawals := b.builderPendingWithdrawals
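// Copy-on-write: if another state shares this slice, clone it before appending so the shared copy is left untouched.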
if b.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs() > 1 {
pendingWithdrawals = make([]*ethpb.BuilderPendingWithdrawal, 0, len(b.builderPendingWithdrawals)+len(withdrawals))
pendingWithdrawals = append(pendingWithdrawals, b.builderPendingWithdrawals...)
b.sharedFieldReferences[types.BuilderPendingWithdrawals].MinusRef()
b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1)
}
b.builderPendingWithdrawals = append(pendingWithdrawals, withdrawals...)
b.markFieldAsDirty(types.BuilderPendingWithdrawals)
return nil
}
func emptyPayment() *ethpb.BuilderPendingPayment {
return &ethpb.BuilderPendingPayment{
Weight: 0,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: 0,
BuilderIndex: 0,
},
}
}
// SetExecutionPayloadBid sets the latest execution payload bid in the state.
func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error {
if b.version < version.Gloas {
return errNotSupported("SetExecutionPayloadBid", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
parentBlockHash := h.ParentBlockHash()
parentBlockRoot := h.ParentBlockRoot()
blockHash := h.BlockHash()
randao := h.PrevRandao()
blobKzgCommitmentsRoot := h.BlobKzgCommitmentsRoot()
feeRecipient := h.FeeRecipient()
b.latestExecutionPayloadBid = &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: parentBlockRoot[:],
BlockHash: blockHash[:],
PrevRandao: randao[:],
GasLimit: h.GasLimit(),
BuilderIndex: h.BuilderIndex(),
Slot: h.Slot(),
Value: h.Value(),
ExecutionPayment: h.ExecutionPayment(),
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot[:],
FeeRecipient: feeRecipient[:],
}
b.markFieldAsDirty(types.LatestExecutionPayloadBid)
return nil
}
// SetBuilderPendingPayment sets a builder pending payment at the specified index.
func (b *BeaconState) SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error {
if b.version < version.Gloas {
return errNotSupported("SetBuilderPendingPayment", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
if uint64(index) >= uint64(len(b.builderPendingPayments)) {
return fmt.Errorf("builder pending payments index %d out of range (len=%d)", index, len(b.builderPendingPayments))
}
b.builderPendingPayments[index] = ethpb.CopyBuilderPendingPayment(payment)
b.markFieldAsDirty(types.BuilderPendingPayments)
return nil
}

View File

@@ -0,0 +1,249 @@
package state_native
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
type testExecutionPayloadBid struct {
parentBlockHash [32]byte
parentBlockRoot [32]byte
blockHash [32]byte
prevRandao [32]byte
blobKzgCommitmentsRoot [32]byte
feeRecipient [20]byte
gasLimit uint64
builderIndex primitives.BuilderIndex
slot primitives.Slot
value primitives.Gwei
executionPayment primitives.Gwei
}
func (t testExecutionPayloadBid) ParentBlockHash() [32]byte { return t.parentBlockHash }
func (t testExecutionPayloadBid) ParentBlockRoot() [32]byte { return t.parentBlockRoot }
func (t testExecutionPayloadBid) PrevRandao() [32]byte { return t.prevRandao }
func (t testExecutionPayloadBid) BlockHash() [32]byte { return t.blockHash }
func (t testExecutionPayloadBid) GasLimit() uint64 { return t.gasLimit }
func (t testExecutionPayloadBid) BuilderIndex() primitives.BuilderIndex {
return t.builderIndex
}
func (t testExecutionPayloadBid) Slot() primitives.Slot { return t.slot }
func (t testExecutionPayloadBid) Value() primitives.Gwei { return t.value }
func (t testExecutionPayloadBid) ExecutionPayment() primitives.Gwei {
return t.executionPayment
}
func (t testExecutionPayloadBid) BlobKzgCommitmentsRoot() [32]byte { return t.blobKzgCommitmentsRoot }
func (t testExecutionPayloadBid) FeeRecipient() [20]byte { return t.feeRecipient }
func (t testExecutionPayloadBid) IsNil() bool { return false }
func TestSetExecutionPayloadBid(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.SetExecutionPayloadBid(testExecutionPayloadBid{})
require.ErrorContains(t, "is not supported", err)
})
t.Run("sets bid and marks dirty", func(t *testing.T) {
var (
parentBlockHash = [32]byte(bytes.Repeat([]byte{0xAB}, 32))
parentBlockRoot = [32]byte(bytes.Repeat([]byte{0xCD}, 32))
blockHash = [32]byte(bytes.Repeat([]byte{0xEF}, 32))
prevRandao = [32]byte(bytes.Repeat([]byte{0x11}, 32))
blobRoot = [32]byte(bytes.Repeat([]byte{0x22}, 32))
feeRecipient [20]byte
)
copy(feeRecipient[:], bytes.Repeat([]byte{0x33}, len(feeRecipient)))
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
}
bid := testExecutionPayloadBid{
parentBlockHash: parentBlockHash,
parentBlockRoot: parentBlockRoot,
blockHash: blockHash,
prevRandao: prevRandao,
blobKzgCommitmentsRoot: blobRoot,
feeRecipient: feeRecipient,
gasLimit: 123,
builderIndex: 7,
slot: 9,
value: 11,
executionPayment: 22,
}
require.NoError(t, st.SetExecutionPayloadBid(bid))
require.NotNil(t, st.latestExecutionPayloadBid)
require.DeepEqual(t, parentBlockHash[:], st.latestExecutionPayloadBid.ParentBlockHash)
require.DeepEqual(t, parentBlockRoot[:], st.latestExecutionPayloadBid.ParentBlockRoot)
require.DeepEqual(t, blockHash[:], st.latestExecutionPayloadBid.BlockHash)
require.DeepEqual(t, prevRandao[:], st.latestExecutionPayloadBid.PrevRandao)
require.DeepEqual(t, blobRoot[:], st.latestExecutionPayloadBid.BlobKzgCommitmentsRoot)
require.DeepEqual(t, feeRecipient[:], st.latestExecutionPayloadBid.FeeRecipient)
require.Equal(t, uint64(123), st.latestExecutionPayloadBid.GasLimit)
require.Equal(t, primitives.BuilderIndex(7), st.latestExecutionPayloadBid.BuilderIndex)
require.Equal(t, primitives.Slot(9), st.latestExecutionPayloadBid.Slot)
require.Equal(t, primitives.Gwei(11), st.latestExecutionPayloadBid.Value)
require.Equal(t, primitives.Gwei(22), st.latestExecutionPayloadBid.ExecutionPayment)
require.Equal(t, true, st.dirtyFields[types.LatestExecutionPayloadBid])
})
}
func TestSetBuilderPendingPayment(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.SetBuilderPendingPayment(0, &ethpb.BuilderPendingPayment{})
require.ErrorContains(t, "is not supported", err)
})
t.Run("sets copy and marks dirty", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, 2),
}
payment := &ethpb.BuilderPendingPayment{
Weight: 2,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
Amount: 99,
BuilderIndex: 1,
},
}
require.NoError(t, st.SetBuilderPendingPayment(1, payment))
require.DeepEqual(t, payment, st.builderPendingPayments[1])
require.Equal(t, true, st.dirtyFields[types.BuilderPendingPayments])
// Mutating the original should not affect the state copy.
payment.Withdrawal.Amount = 12345
require.Equal(t, primitives.Gwei(99), st.builderPendingPayments[1].Withdrawal.Amount)
})
t.Run("returns error on out of range index", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, 1),
}
err := st.SetBuilderPendingPayment(2, &ethpb.BuilderPendingPayment{})
require.ErrorContains(t, "out of range", err)
require.Equal(t, false, st.dirtyFields[types.BuilderPendingPayments])
})
}
func TestRotateBuilderPendingPayments(t *testing.T) {
totalPayments := 2 * params.BeaconConfig().SlotsPerEpoch
payments := make([]*ethpb.BuilderPendingPayment, totalPayments)
for i := range payments {
idx := uint64(i)
payments[i] = &ethpb.BuilderPendingPayment{
Weight: primitives.Gwei(idx * 100e9),
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: primitives.Gwei(idx * 1e9),
BuilderIndex: primitives.BuilderIndex(idx + 100),
},
}
}
statePb, err := InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
BuilderPendingPayments: payments,
})
require.NoError(t, err)
st, ok := statePb.(*BeaconState)
require.Equal(t, true, ok)
oldPayments, err := st.BuilderPendingPayments()
require.NoError(t, err)
require.NoError(t, st.RotateBuilderPendingPayments())
newPayments, err := st.BuilderPendingPayments()
require.NoError(t, err)
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
for i := range slotsPerEpoch {
require.DeepEqual(t, oldPayments[slotsPerEpoch+i], newPayments[i])
}
for i := slotsPerEpoch; i < 2*slotsPerEpoch; i++ {
payment := newPayments[i]
require.Equal(t, primitives.Gwei(0), payment.Weight)
require.Equal(t, 20, len(payment.Withdrawal.FeeRecipient))
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
require.Equal(t, primitives.BuilderIndex(0), payment.Withdrawal.BuilderIndex)
}
}
func TestRotateBuilderPendingPayments_UnsupportedVersion(t *testing.T) {
st := &BeaconState{version: version.Electra}
err := st.RotateBuilderPendingPayments()
require.ErrorContains(t, "RotateBuilderPendingPayments", err)
}
func TestAppendBuilderPendingWithdrawal_CopyOnWrite(t *testing.T) {
wd := &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: 1,
BuilderIndex: 2,
}
statePb, err := InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{wd},
})
require.NoError(t, err)
st, ok := statePb.(*BeaconState)
require.Equal(t, true, ok)
copied := st.Copy().(*BeaconState)
require.Equal(t, uint(2), st.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
appended := &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: 4,
BuilderIndex: 5,
}
require.NoError(t, copied.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{appended}))
require.Equal(t, 1, len(st.builderPendingWithdrawals))
require.Equal(t, 2, len(copied.builderPendingWithdrawals))
require.DeepEqual(t, wd, copied.builderPendingWithdrawals[0])
require.DeepEqual(t, appended, copied.builderPendingWithdrawals[1])
require.DeepEqual(t, wd, st.builderPendingWithdrawals[0])
require.Equal(t, uint(1), st.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
require.Equal(t, uint(1), copied.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
}
func TestAppendBuilderPendingWithdrawals(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: stateutil.NewRef(1),
},
builderPendingWithdrawals: make([]*ethpb.BuilderPendingWithdrawal, 0),
}
first := &ethpb.BuilderPendingWithdrawal{Amount: 1}
second := &ethpb.BuilderPendingWithdrawal{Amount: 2}
require.NoError(t, st.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{first, second}))
require.Equal(t, 2, len(st.builderPendingWithdrawals))
require.DeepEqual(t, first, st.builderPendingWithdrawals[0])
require.DeepEqual(t, second, st.builderPendingWithdrawals[1])
require.Equal(t, true, st.dirtyFields[types.BuilderPendingWithdrawals])
}
func TestAppendBuilderPendingWithdrawals_UnsupportedVersion(t *testing.T) {
st := &BeaconState{version: version.Electra}
err := st.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{{}})
require.ErrorContains(t, "AppendBuilderPendingWithdrawals", err)
}

View File
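TestAppendBuilderPendingWithdrawal_CopyOnWrite above exercises the reference-counted copy-on-write in AppendBuilderPendingWithdrawals. A self-contained sketch of that pattern, with a simplified `ref` type standing in for `stateutil.Reference`:

```go
package main

import "fmt"

type ref struct{ n uint } // simplified stand-in for stateutil.Reference

// appendCOW reallocates before appending when the backing slice is shared,
// so the other holder keeps seeing the original contents.
func appendCOW(s []int, r *ref, items ...int) ([]int, *ref) {
	if r.n > 1 {
		fresh := make([]int, 0, len(s)+len(items))
		fresh = append(fresh, s...)
		r.n-- // drop this state's reference to the shared slice
		r = &ref{n: 1}
		s = fresh
	}
	return append(s, items...), r
}

func main() {
	shared := &ref{n: 2} // two states share one backing slice
	a := []int{1}
	b, br := appendCOW(a, shared, 4)
	fmt.Println(a, b, shared.n, br.n) // [1] [1 4] 1 1
}
```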

@@ -29,6 +29,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/sync/backfill/coverage:go_default_library",
"//cache/lru:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -68,11 +69,14 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/testing:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/blocks/testing:go_default_library",

View File

@@ -5,10 +5,13 @@ import (
"encoding/hex"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -25,6 +28,10 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
s.migrationLock.Lock()
defer s.migrationLock.Unlock()
if features.Get().EnableStateDiff {
return s.migrateToColdHdiff(ctx, fRoot)
}
s.finalizedInfo.lock.RLock()
oldFSlot := s.finalizedInfo.slot
s.finalizedInfo.lock.RUnlock()
@@ -90,21 +97,8 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
}
}
}
if s.beaconDB.HasState(ctx, aRoot) {
// If you are migrating a state and its already part of the hot state cache saved to the db,
// you can just remove it from the hot state cache as it becomes redundant.
s.saveHotStateDB.lock.Lock()
roots := s.saveHotStateDB.blockRootsOfSavedStates
for i := range roots {
if aRoot == roots[i] {
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
// Break here is ok.
break
}
}
s.saveHotStateDB.lock.Unlock()
s.migrateHotToCold(aRoot)
continue
}
@@ -129,3 +123,103 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
return nil
}
// migrateToColdHdiff saves the state-diffs for slots that are in the state diff tree after finalization
func (s *State) migrateToColdHdiff(ctx context.Context, fRoot [32]byte) error {
s.finalizedInfo.lock.RLock()
oldFSlot := s.finalizedInfo.slot
s.finalizedInfo.lock.RUnlock()
fSlot, err := s.beaconDB.SlotByBlockRoot(ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get slot by block root")
}
for slot := oldFSlot; slot < fSlot; slot++ {
if ctx.Err() != nil {
return ctx.Err()
}
_, lvl, err := s.beaconDB.SlotInDiffTree(slot)
if err != nil {
log.WithError(err).Errorf("could not determine if slot %d is in diff tree", slot)
continue
}
if lvl == -1 {
continue
}
// The state needs to be saved.
// Try the epoch boundary cache first.
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
if err != nil {
log.WithError(err).Errorf("could not get epoch boundary state for slot %d", slot)
cached = nil
exists = false
}
var aRoot [32]byte
var aState state.BeaconState
if exists {
aRoot = cached.root
aState = cached.state
} else {
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
if err != nil {
return err
}
// Since the block is finalized, the DB should contain exactly one block at this slot.
// Error out otherwise.
if len(roots) != 1 {
return errUnknownBlock
}
aRoot = roots[0]
// Unlike the legacy MigrateToCold, we always need to fetch the state, even if it
// already exists in the DB as part of the hot state DB, because we must advance it
// to the state diff tree slots.
aState, err = s.StateByRoot(ctx, aRoot)
if err != nil {
return err
}
}
if s.beaconDB.HasState(ctx, aRoot) {
s.migrateHotToCold(aRoot)
continue
}
// Advance the state to the target slot.
if aState.Slot() < slot {
aState, err = transition.ProcessSlots(ctx, aState, slot)
if err != nil {
return errors.Wrapf(err, "could not process slots to slot %d", slot)
}
}
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
return err
}
log.WithFields(
logrus.Fields{
"slot": aState.Slot(),
"root": fmt.Sprintf("%#x", aRoot),
}).Info("Saved state in DB")
}
// Update finalized info in memory.
fInfo, ok, err := s.epochBoundaryStateCache.getByBlockRoot(fRoot)
if err != nil {
return err
}
if ok {
s.SaveFinalizedState(fSlot, fRoot, fInfo.state)
}
return nil
}
func (s *State) migrateHotToCold(aRoot [32]byte) {
// If the state being migrated is already saved to the DB via the hot state cache,
// it can simply be removed from the cache, as it becomes redundant.
s.saveHotStateDB.lock.Lock()
roots := s.saveHotStateDB.blockRootsOfSavedStates
for i := range roots {
if aRoot == roots[i] {
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
// There shouldn't be duplicate roots in `blockRootsOfSavedStates`,
// so breaking here is safe.
break
}
}
s.saveHotStateDB.lock.Unlock()
}

View File

@@ -4,8 +4,11 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/kv"
testDB "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/features"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -224,3 +227,170 @@ func TestMigrateToCold_ParallelCalls(t *testing.T) {
assert.DeepEqual(t, [][32]byte{r7}, service.saveHotStateDB.blockRootsOfSavedStates, "Did not remove all saved hot state roots")
require.LogsContain(t, hook, "Saved state in DB")
}
// =========================================================================
// Tests for migrateToColdHdiff (state diff migration)
// =========================================================================
// setStateDiffExponents sets state diff exponents for testing.
// Uses exponents [6, 5] which means:
// - Level 0: Every 2^6 = 64 slots (full snapshot)
// - Level 1: Every 2^5 = 32 slots (diff)
func setStateDiffExponents() {
globalFlags := flags.GlobalFlags{
StateDiffExponents: []int{6, 5},
}
flags.Init(&globalFlags)
}
// TestMigrateToColdHdiff_CanUpdateFinalizedInfo verifies that the migration
// correctly updates finalized info when migrating to slots not in the diff tree.
func TestMigrateToColdHdiff_CanUpdateFinalizedInfo(t *testing.T) {
ctx := t.Context()
// Set exponents and create DB first (without EnableStateDiff flag).
setStateDiffExponents()
beaconDB := testDB.SetupDB(t)
// Initialize the state diff cache via the method on *kv.Store (not in interface).
require.NoError(t, beaconDB.(*kv.Store).InitStateDiffCacheForTesting(t, 0))
// Now enable the feature flag.
resetCfg := features.InitWithReset(&features.Flags{EnableStateDiff: true})
defer resetCfg()
service := New(beaconDB, doublylinkedtree.New())
beaconState, _ := util.DeterministicGenesisState(t, 32)
genesisStateRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
util.SaveBlock(t, ctx, beaconDB, genesis)
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, gRoot))
// Put genesis state in epoch boundary cache so migrateToColdHdiff doesn't need to retrieve from DB.
require.NoError(t, service.epochBoundaryStateCache.put(gRoot, beaconState))
// Set initial finalized info at genesis.
service.finalizedInfo = &finalizedInfo{
slot: 0,
root: gRoot,
state: beaconState,
}
// Create finalized block at slot 10 (not in diff tree, so no intermediate states saved).
finalizedState := beaconState.Copy()
require.NoError(t, finalizedState.SetSlot(10))
b := util.NewBeaconBlock()
b.Block.Slot = 10
fRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b)
require.NoError(t, service.epochBoundaryStateCache.put(fRoot, finalizedState))
require.NoError(t, service.MigrateToCold(ctx, fRoot))
// Verify finalized info is updated.
assert.Equal(t, primitives.Slot(10), service.finalizedInfo.slot)
assert.DeepEqual(t, fRoot, service.finalizedInfo.root)
expectedHTR, err := finalizedState.HashTreeRoot(ctx)
require.NoError(t, err)
actualHTR, err := service.finalizedInfo.state.HashTreeRoot(ctx)
require.NoError(t, err)
assert.DeepEqual(t, expectedHTR, actualHTR)
}
// TestMigrateToColdHdiff_SkipsSlotsNotInDiffTree verifies that the migration
// skips slots that are not in the diff tree.
func TestMigrateToColdHdiff_SkipsSlotsNotInDiffTree(t *testing.T) {
hook := logTest.NewGlobal()
ctx := t.Context()
// Set exponents and create DB first (without EnableStateDiff flag).
setStateDiffExponents()
beaconDB := testDB.SetupDB(t)
// Initialize the state diff cache via the method on *kv.Store (not in interface).
require.NoError(t, beaconDB.(*kv.Store).InitStateDiffCacheForTesting(t, 0))
// Now enable the feature flag.
resetCfg := features.InitWithReset(&features.Flags{EnableStateDiff: true})
defer resetCfg()
service := New(beaconDB, doublylinkedtree.New())
beaconState, pks := util.DeterministicGenesisState(t, 32)
genesisStateRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
util.SaveBlock(t, ctx, beaconDB, genesis)
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, gRoot))
// Start from slot 1 to avoid slot 0, which is in the diff tree.
service.finalizedInfo = &finalizedInfo{
slot: 1,
root: gRoot,
state: beaconState,
}
// Reset the log hook to ignore setup logs.
hook.Reset()
// Create a block at slot 20 (NOT in diff tree with exponents [6,5]).
b20, err := util.GenerateFullBlock(beaconState, pks, util.DefaultBlockGenConfig(), 20)
require.NoError(t, err)
r20, err := b20.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b20)
require.NoError(t, beaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: 20, Root: r20[:]}))
// Put finalized state in cache.
finalizedState := beaconState.Copy()
require.NoError(t, finalizedState.SetSlot(20))
require.NoError(t, service.epochBoundaryStateCache.put(r20, finalizedState))
require.NoError(t, service.MigrateToCold(ctx, r20))
// Verify NO states were saved during migration (slots 1-19 are not in diff tree).
assert.LogsDoNotContain(t, hook, "Saved state in DB")
}
// TestMigrateToColdHdiff_NoOpWhenFinalizedSlotNotAdvanced verifies that
// migration is a no-op when the finalized slot has not advanced.
func TestMigrateToColdHdiff_NoOpWhenFinalizedSlotNotAdvanced(t *testing.T) {
ctx := t.Context()
// Set exponents and create DB first (without EnableStateDiff flag).
setStateDiffExponents()
beaconDB := testDB.SetupDB(t)
// Initialize the state diff cache via the method on *kv.Store (not in interface).
require.NoError(t, beaconDB.(*kv.Store).InitStateDiffCacheForTesting(t, 0))
// Now enable the feature flag.
resetCfg := features.InitWithReset(&features.Flags{EnableStateDiff: true})
defer resetCfg()
service := New(beaconDB, doublylinkedtree.New())
beaconState, _ := util.DeterministicGenesisState(t, 32)
genesisStateRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesis := blocks.NewGenesisBlock(genesisStateRoot[:])
util.SaveBlock(t, ctx, beaconDB, genesis)
gRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, gRoot))
// Set finalized info already at slot 50.
finalizedState := beaconState.Copy()
require.NoError(t, finalizedState.SetSlot(50))
service.finalizedInfo = &finalizedInfo{
slot: 50,
root: gRoot,
state: finalizedState,
}
// Create block at same slot 50.
b := util.NewBeaconBlock()
b.Block.Slot = 50
fRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b)
require.NoError(t, service.epochBoundaryStateCache.put(fRoot, finalizedState))
// Migration should be a no-op (finalized slot not advancing).
require.NoError(t, service.MigrateToCold(ctx, fRoot))
}

View File
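setStateDiffExponents above documents the tree layout: with exponents [6, 5], level 0 holds a full snapshot every 64 slots, level 1 holds a diff every 32 slots, and slots matching neither are skipped. A hypothetical `slotLevel` helper illustrating that classification, assuming levels are checked coarsest-first and -1 means "not in the tree" (as the migration loop's `lvl == -1` skip suggests):

```go
package main

import "fmt"

// slotLevel is a hypothetical re-derivation of the level result of
// SlotInDiffTree: the first (coarsest) exponent whose period divides
// the slot wins, and -1 means the slot is not in the diff tree.
func slotLevel(slot uint64, exponents []int) int {
	for lvl, exp := range exponents {
		if slot%(uint64(1)<<exp) == 0 {
			return lvl
		}
	}
	return -1
}

func main() {
	exps := []int{6, 5}
	for _, s := range []uint64{0, 20, 32, 64, 96} {
		fmt.Printf("slot %d -> level %d\n", s, slotLevel(s, exps))
	}
	// slots 0 and 64 -> level 0; 32 and 96 -> level 1; 20 -> -1 (skipped)
}
```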

@@ -238,14 +238,9 @@ func recomputeRootFromLayerVariable(idx int, item [32]byte, layers [][]*[32]byte
// AddInMixin describes a method from which a length mixin is added to the
// provided root.
func AddInMixin(root [32]byte, length uint64) ([32]byte, error) {
rootBuf := new(bytes.Buffer)
if err := binary.Write(rootBuf, binary.LittleEndian, length); err != nil {
return [32]byte{}, errors.Wrap(err, "could not marshal eth1data votes length")
}
// We need to mix in the length of the slice.
rootBufRoot := make([]byte, 32)
copy(rootBufRoot, rootBuf.Bytes())
return ssz.MixInLength(root, rootBufRoot), nil
var rootBufRoot [32]byte
binary.LittleEndian.PutUint64(rootBufRoot[:], length)
return ssz.MixInLength(root, rootBufRoot[:]), nil
}
// Merkleize 32-byte leaves into a Merkle trie for its adequate depth, returning

View File
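The rewrite of AddInMixin above replaces the bytes.Buffer round trip with a direct little-endian write into a zeroed 32-byte leaf. A sketch of the resulting mix-in, with `sha256.Sum256` standing in for the hasher behind `ssz.MixInLength`:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// mixInLength serializes the length as a little-endian uint64 into the
// first 8 bytes of a zeroed 32-byte leaf, then hashes it with the root.
func mixInLength(root [32]byte, length uint64) [32]byte {
	var lenLeaf [32]byte
	binary.LittleEndian.PutUint64(lenLeaf[:], length)
	return sha256.Sum256(append(root[:], lenLeaf[:]...))
}

func main() {
	var root [32]byte
	fmt.Printf("%#x\n", mixInLength(root, 5))
}
```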

@@ -148,7 +148,7 @@ func (b batch) ensureParent(expected [32]byte) error {
func (b batch) blockRequest() *eth.BeaconBlocksByRangeRequest {
return &eth.BeaconBlocksByRangeRequest{
StartSlot: b.begin,
Count: uint64(b.end - b.begin),
Count: uint64(b.end.FlooredSubSlot(b.begin)),
Step: 1,
}
}
@@ -156,7 +156,7 @@ func (b batch) blockRequest() *eth.BeaconBlocksByRangeRequest {
func (b batch) blobRequest() *eth.BlobSidecarsByRangeRequest {
return &eth.BlobSidecarsByRangeRequest{
StartSlot: b.begin,
Count: uint64(b.end - b.begin),
Count: uint64(b.end.FlooredSubSlot(b.begin)),
}
}

View File
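Replacing `b.end - b.begin` with `b.end.FlooredSubSlot(b.begin)` guards the request count against unsigned underflow when `end < begin`. A plausible sketch of that floored subtraction, consistent with the count-of-zero expectations in the tests that follow:

```go
package main

import "fmt"

type Slot uint64

// FlooredSubSlot is a sketch of underflow-safe slot subtraction: when the
// subtrahend exceeds the receiver, the result floors at zero instead of
// wrapping around to a huge uint64.
func (s Slot) FlooredSubSlot(other Slot) Slot {
	if other > s {
		return 0
	}
	return s - other
}

func main() {
	fmt.Println(Slot(200).FlooredSubSlot(100)) // 100
	fmt.Println(Slot(100).FlooredSubSlot(200)) // 0, not 2^64-100
}
```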

@@ -10,6 +10,93 @@ import (
"github.com/pkg/errors"
)
func TestBlockRequest(t *testing.T) {
cases := []struct {
name string
begin primitives.Slot
end primitives.Slot
expectedCount uint64
}{
{
name: "normal case",
begin: 100,
end: 200,
expectedCount: 100,
},
{
name: "end equals begin",
begin: 100,
end: 100,
expectedCount: 0,
},
{
name: "end less than begin (would underflow without check)",
begin: 200,
end: 100,
expectedCount: 0,
},
{
name: "zero values",
begin: 0,
end: 0,
expectedCount: 0,
},
{
name: "single slot",
begin: 0,
end: 1,
expectedCount: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
b := batch{begin: tc.begin, end: tc.end}
req := b.blockRequest()
require.Equal(t, tc.expectedCount, req.Count)
require.Equal(t, tc.begin, req.StartSlot)
require.Equal(t, uint64(1), req.Step)
})
}
}
func TestBlobRequest(t *testing.T) {
cases := []struct {
name string
begin primitives.Slot
end primitives.Slot
expectedCount uint64
}{
{
name: "normal case",
begin: 100,
end: 200,
expectedCount: 100,
},
{
name: "end equals begin",
begin: 100,
end: 100,
expectedCount: 0,
},
{
name: "end less than begin (would underflow without check)",
begin: 200,
end: 100,
expectedCount: 0,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
b := batch{begin: tc.begin, end: tc.end}
req := b.blobRequest()
require.Equal(t, tc.expectedCount, req.Count)
require.Equal(t, tc.begin, req.StartSlot)
})
}
}
func TestSortBatchDesc(t *testing.T) {
orderIn := []primitives.Slot{100, 10000, 1}
orderOut := []primitives.Slot{10000, 100, 1}

View File

@@ -4,9 +4,6 @@ import (
"context"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing"
@@ -56,32 +53,6 @@ func (s *Service) verifierRoutine() {
}
}
// A routine that runs in the background to perform batch
// KZG verifications by draining the channel and processing all pending requests.
func (s *Service) kzgVerifierRoutine() {
for {
kzgBatch := make([]*kzgVerifier, 0, 1)
select {
case <-s.ctx.Done():
return
case kzg := <-s.kzgChan:
kzgBatch = append(kzgBatch, kzg)
}
for {
select {
case <-s.ctx.Done():
return
case kzg := <-s.kzgChan:
kzgBatch = append(kzgBatch, kzg)
continue
default:
verifyKzgBatch(kzgBatch)
}
break
}
}
}
func (s *Service) validateWithBatchVerifier(ctx context.Context, message string, set *bls.SignatureBatch) (pubsub.ValidationResult, error) {
_, span := trace.StartSpan(ctx, "sync.validateWithBatchVerifier")
defer span.End()
@@ -154,71 +125,3 @@ func performBatchAggregation(aggSet *bls.SignatureBatch) (*bls.SignatureBatch, e
}
return aggSet, nil
}
func (s *Service) validateWithKzgBatchVerifier(ctx context.Context, dataColumns []blocks.RODataColumn) (pubsub.ValidationResult, error) {
_, span := trace.StartSpan(ctx, "sync.validateWithKzgBatchVerifier")
defer span.End()
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
resChan := make(chan error, 1)
verificationSet := &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
select {
case s.kzgChan <- verificationSet:
case <-ctx.Done():
return pubsub.ValidationIgnore, ctx.Err()
}
select {
case <-ctx.Done():
return pubsub.ValidationIgnore, ctx.Err() // parent context canceled, give up
case err := <-resChan:
if err != nil {
log.WithError(err).Trace("Could not perform batch verification")
tracing.AnnotateError(span, err)
return s.validateUnbatchedColumnsKzg(ctx, dataColumns)
}
}
return pubsub.ValidationAccept, nil
}
func (s *Service) validateUnbatchedColumnsKzg(ctx context.Context, columns []blocks.RODataColumn) (pubsub.ValidationResult, error) {
_, span := trace.StartSpan(ctx, "sync.validateUnbatchedColumnsKzg")
defer span.End()
start := time.Now()
if err := peerdas.VerifyDataColumnsSidecarKZGProofs(columns); err != nil {
err = errors.Wrap(err, "could not verify")
tracing.AnnotateError(span, err)
return pubsub.ValidationReject, err
}
verification.DataColumnBatchKZGVerificationHistogram.WithLabelValues("fallback").Observe(float64(time.Since(start).Milliseconds()))
return pubsub.ValidationAccept, nil
}
func verifyKzgBatch(kzgBatch []*kzgVerifier) {
if len(kzgBatch) == 0 {
return
}
allDataColumns := make([]blocks.RODataColumn, 0, len(kzgBatch))
for _, kzgVerifier := range kzgBatch {
allDataColumns = append(allDataColumns, kzgVerifier.dataColumns...)
}
var verificationErr error
start := time.Now()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(allDataColumns)
if err != nil {
verificationErr = errors.Wrap(err, "batch KZG verification failed")
} else {
verification.DataColumnBatchKZGVerificationHistogram.WithLabelValues("batch").Observe(float64(time.Since(start).Milliseconds()))
}
// Send the same result to all verifiers in the batch
for _, verifier := range kzgBatch {
verifier.resChan <- verificationErr
}
}

View File
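For reference, the drain-the-channel batching that the removed kzgVerifierRoutine implemented, in miniature: block for the first request, then greedily collect whatever else is already queued, and verify the batch as one unit:

```go
package main

import "fmt"

// drainBatch blocks for one item, then drains anything already queued
// without blocking, mirroring the select/default loop that was removed.
func drainBatch(ch chan int) []int {
	batch := []int{<-ch}
	for {
		select {
		case v := <-ch:
			batch = append(batch, v)
		default:
			return batch // channel momentarily empty: process the batch
		}
	}
}

func main() {
	ch := make(chan int, 8)
	for i := 1; i <= 3; i++ {
		ch <- i
	}
	fmt.Println(drainBatch(ch)) // [1 2 3]
}
```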

@@ -10,20 +10,72 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var nilFinalizedStateError = errors.New("finalized state is nil")
func (s *Service) maintainCustodyInfo() {
func (s *Service) maintainCustodyInfo() error {
// Rationale of slot choice:
// - If syncing with an empty DB from genesis, then justifiedSlot = finalizedSlot = 0,
// and the node starts to sync from slot 0 ==> Using justifiedSlot is correct.
// - If syncing with an empty DB from a checkpoint, then justifiedSlot = finalizedSlot = checkpointSlot,
// and the node starts to sync from checkpointSlot ==> Using justifiedSlot is correct.
// - If syncing with a non-empty DB, then justifiedSlot > finalizedSlot,
// and the node starts to sync from justifiedSlot + 1 ==> Using justifiedSlot + 1 is correct.
const interval = 1 * time.Minute
finalizedCheckpoint, err := s.cfg.beaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "finalized checkpoint")
}
if finalizedCheckpoint == nil {
return errors.New("finalized checkpoint is nil")
}
finalizedSlot, err := slots.EpochStart(finalizedCheckpoint.Epoch)
if err != nil {
return errors.Wrap(err, "epoch start for finalized slot")
}
justifiedCheckpoint, err := s.cfg.beaconDB.JustifiedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "justified checkpoint")
}
if justifiedCheckpoint == nil {
return errors.New("justified checkpoint is nil")
}
justifiedSlot, err := slots.EpochStart(justifiedCheckpoint.Epoch)
if err != nil {
return errors.Wrap(err, "epoch start for justified slot")
}
slot := justifiedSlot
if justifiedSlot > finalizedSlot {
slot++
}
earliestAvailableSlot, custodySubnetCount, err := s.updateCustodyInfoInDB(slot)
if err != nil {
return errors.Wrap(err, "could not get and save custody group count")
}
if _, _, err := s.cfg.p2p.UpdateCustodyInfo(earliestAvailableSlot, custodySubnetCount); err != nil {
return errors.Wrap(err, "update custody info")
}
async.RunEvery(s.ctx, interval, func() {
if err := s.updateCustodyInfoIfNeeded(); err != nil {
log.WithError(err).Error("Failed to update custody info")
}
})
return nil
}
func (s *Service) updateCustodyInfoIfNeeded() error {

View File
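The rationale comment in maintainCustodyInfo reduces to a small decision: use the justified slot, plus one only when justification has advanced past finalization. A sketch of that choice:

```go
package main

import "fmt"

// referenceSlot mirrors the slot choice in maintainCustodyInfo: the
// justified slot when the checkpoints coincide (fresh genesis or
// checkpoint sync), justified slot + 1 when justification has advanced
// past finalization (resuming with a non-empty DB).
func referenceSlot(justifiedSlot, finalizedSlot uint64) uint64 {
	if justifiedSlot > finalizedSlot {
		return justifiedSlot + 1
	}
	return justifiedSlot
}

func main() {
	fmt.Println(referenceSlot(0, 0))   // 0: syncing from genesis or a checkpoint
	fmt.Println(referenceSlot(96, 64)) // 97: resuming with a non-empty DB
}
```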

@@ -668,7 +668,7 @@ func populateBlock(bw *blocks.BlockWithROSidecars, blobs []blocks.ROBlob, req *p
func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) error {
missStr := make([]string, 0, len(missing))
for k := range missing {
for _, k := range missing {
missStr = append(missStr, fmt.Sprintf("%#x", k))
}
return errors.Wrapf(errMissingBlobsForBlockCommitments,

View File
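The one-line fix above addresses a classic Go range pitfall: with a single loop variable, `range` over a `[][]byte` yields indices, so the error message formatted 0x0, 0x1, ... instead of the missing commitments. Illustrated:

```go
package main

import "fmt"

func main() {
	missing := [][]byte{{0xaa, 0xbb}, {0xcc, 0xdd}}
	for k := range missing { // buggy: k is the index
		fmt.Printf("%#x ", k) // 0x0 0x1
	}
	fmt.Println()
	for _, k := range missing { // fixed: k is the byte slice
		fmt.Printf("%#x ", k) // 0xaabb 0xccdd
	}
	fmt.Println()
}
```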

@@ -226,8 +226,6 @@ func (s *Service) Start() {
// fetchOriginSidecars fetches origin sidecars
func (s *Service) fetchOriginSidecars(peers []peer.ID) error {
const delay = 10 * time.Second // The delay between each attempt to fetch origin data column sidecars
blockRoot, err := s.cfg.DB.OriginCheckpointBlockRoot(s.ctx)
if errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
return nil
@@ -260,7 +258,7 @@ func (s *Service) fetchOriginSidecars(peers []peer.ID) error {
blockVersion := roBlock.Version()
if blockVersion >= version.Fulu {
if err := s.fetchOriginDataColumnSidecars(roBlock, delay); err != nil {
if err := s.fetchOriginDataColumnSidecars(roBlock); err != nil {
return errors.Wrap(err, "fetch origin columns")
}
return nil
@@ -414,7 +412,7 @@ func (s *Service) fetchOriginBlobSidecars(pids []peer.ID, rob blocks.ROBlock) er
return fmt.Errorf("no connected peer able to provide blobs for checkpoint sync block %#x", r)
}
func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock, delay time.Duration) error {
func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock) error {
const (
errorMessage = "Failed to fetch origin data column sidecars"
warningIteration = 10
@@ -501,7 +499,6 @@ func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock, delay ti
log := log.WithFields(logrus.Fields{
"attempt": attempt,
"missingIndices": helpers.SortedPrettySliceFromMap(missingIndicesByRoot[root]),
"delay": delay,
})
logFunc := log.Debug

View File

@@ -687,10 +687,7 @@ func TestFetchOriginColumns(t *testing.T) {
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
const (
delay = 0
blobCount = 1
)
const blobCount = 1
t.Run("block has no commitments", func(t *testing.T) {
service := new(Service)
@@ -702,7 +699,7 @@ func TestFetchOriginColumns(t *testing.T) {
roBlock, err := blocks.NewROBlock(signedBlock)
require.NoError(t, err)
err = service.fetchOriginDataColumnSidecars(roBlock, delay)
err = service.fetchOriginDataColumnSidecars(roBlock)
require.NoError(t, err)
})
@@ -724,7 +721,7 @@ func TestFetchOriginColumns(t *testing.T) {
err := storage.Save(verifiedSidecars)
require.NoError(t, err)
err = service.fetchOriginDataColumnSidecars(roBlock, delay)
err = service.fetchOriginDataColumnSidecars(roBlock)
require.NoError(t, err)
})
@@ -829,7 +826,7 @@ func TestFetchOriginColumns(t *testing.T) {
attempt++
})
err = service.fetchOriginDataColumnSidecars(roBlock, delay)
err = service.fetchOriginDataColumnSidecars(roBlock)
require.NoError(t, err)
// Check all corresponding sidecars are saved in the store.

View File

@@ -1,339 +1,14 @@
package sync
import (
"context"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
pubsub "github.com/libp2p/go-libp2p-pubsub"
)
func TestValidateWithKzgBatchVerifier(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
tests := []struct {
name string
dataColumns []blocks.RODataColumn
expectedResult pubsub.ValidationResult
expectError bool
}{
{
name: "single valid data column",
dataColumns: createValidTestDataColumns(t, 1),
expectedResult: pubsub.ValidationAccept,
expectError: false,
},
{
name: "multiple valid data columns",
dataColumns: createValidTestDataColumns(t, 3),
expectedResult: pubsub.ValidationAccept,
expectError: false,
},
{
name: "single invalid data column",
dataColumns: createInvalidTestDataColumns(t, 1),
expectedResult: pubsub.ValidationReject,
expectError: true,
},
{
name: "empty data column slice",
dataColumns: []blocks.RODataColumn{},
expectedResult: pubsub.ValidationAccept,
expectError: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
result, err := service.validateWithKzgBatchVerifier(ctx, tt.dataColumns)
require.Equal(t, tt.expectedResult, result)
if tt.expectError {
assert.NotNil(t, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestVerifierRoutine(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("processes single request", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
dataColumns := createValidTestDataColumns(t, 1)
resChan := make(chan error, 1)
service.kzgChan <- &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
select {
case err := <-resChan:
require.NoError(t, err)
case <-time.After(time.Second):
t.Fatal("timeout waiting for verification result")
}
})
t.Run("batches multiple requests", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
const numRequests = 5
resChans := make([]chan error, numRequests)
for i := range numRequests {
dataColumns := createValidTestDataColumns(t, 1)
resChan := make(chan error, 1)
resChans[i] = resChan
service.kzgChan <- &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
}
for i := range numRequests {
select {
case err := <-resChans[i]:
require.NoError(t, err)
case <-time.After(time.Second):
t.Fatalf("timeout waiting for verification result %d", i)
}
}
})
t.Run("context cancellation stops routine", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
routineDone := make(chan struct{})
go func() {
service.kzgVerifierRoutine()
close(routineDone)
}()
cancel()
select {
case <-routineDone:
case <-time.After(time.Second):
t.Fatal("timeout waiting for routine to exit")
}
})
}
func TestVerifyKzgBatch(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("all valid data columns succeed", func(t *testing.T) {
dataColumns := createValidTestDataColumns(t, 3)
resChan := make(chan error, 1)
kzgVerifiers := []*kzgVerifier{{dataColumns: dataColumns, resChan: resChan}}
verifyKzgBatch(kzgVerifiers)
select {
case err := <-resChan:
require.NoError(t, err)
case <-time.After(time.Second):
t.Fatal("timeout waiting for batch verification")
}
})
t.Run("invalid proofs fail entire batch", func(t *testing.T) {
validColumns := createValidTestDataColumns(t, 1)
invalidColumns := createInvalidTestDataColumns(t, 1)
allColumns := append(validColumns, invalidColumns...)
resChan := make(chan error, 1)
kzgVerifiers := []*kzgVerifier{{dataColumns: allColumns, resChan: resChan}}
verifyKzgBatch(kzgVerifiers)
select {
case err := <-resChan:
assert.NotNil(t, err)
case <-time.After(time.Second):
t.Fatal("timeout waiting for batch verification")
}
})
t.Run("empty batch handling", func(t *testing.T) {
verifyKzgBatch([]*kzgVerifier{})
})
}
func TestKzgBatchVerifierConcurrency(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
const numGoroutines = 10
const numRequestsPerGoroutine = 5
var wg sync.WaitGroup
wg.Add(numGoroutines)
// Multiple goroutines sending verification requests simultaneously
for i := range numGoroutines {
go func(goroutineID int) {
defer wg.Done()
for range numRequestsPerGoroutine {
dataColumns := createValidTestDataColumns(t, 1)
result, err := service.validateWithKzgBatchVerifier(ctx, dataColumns)
require.Equal(t, pubsub.ValidationAccept, result)
require.NoError(t, err)
}
}(i)
}
wg.Wait()
}
func TestKzgBatchVerifierFallback(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("fallback handles mixed valid/invalid batch correctly", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
validColumns := createValidTestDataColumns(t, 1)
invalidColumns := createInvalidTestDataColumns(t, 1)
result, err := service.validateWithKzgBatchVerifier(ctx, validColumns)
require.Equal(t, pubsub.ValidationAccept, result)
require.NoError(t, err)
result, err = service.validateWithKzgBatchVerifier(ctx, invalidColumns)
require.Equal(t, pubsub.ValidationReject, result)
assert.NotNil(t, err)
})
t.Run("empty data columns fallback", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
result, err := service.validateWithKzgBatchVerifier(ctx, []blocks.RODataColumn{})
require.Equal(t, pubsub.ValidationAccept, result)
require.NoError(t, err)
})
}
func TestValidateWithKzgBatchVerifier_DeadlockOnTimeout(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.SecondsPerSlot = 0
params.OverrideBeaconConfig(cfg)
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier),
}
go service.kzgVerifierRoutine()
result, err := service.validateWithKzgBatchVerifier(context.Background(), nil)
require.Equal(t, pubsub.ValidationIgnore, result)
require.ErrorIs(t, err, context.DeadlineExceeded)
done := make(chan struct{})
go func() {
_, _ = service.validateWithKzgBatchVerifier(context.Background(), nil)
close(done)
}()
select {
case <-done:
case <-time.After(500 * time.Millisecond):
t.Fatal("validateWithKzgBatchVerifier blocked")
}
}
func TestValidateWithKzgBatchVerifier_ContextCanceledBeforeSend(t *testing.T) {
cancelledCtx, cancel := context.WithCancel(t.Context())
cancel()
service := &Service{
ctx: context.Background(),
kzgChan: make(chan *kzgVerifier),
}
done := make(chan struct{})
go func() {
result, err := service.validateWithKzgBatchVerifier(cancelledCtx, nil)
require.Equal(t, pubsub.ValidationIgnore, result)
require.ErrorIs(t, err, context.Canceled)
close(done)
}()
select {
case <-done:
case <-time.After(500 * time.Millisecond):
t.Fatal("validateWithKzgBatchVerifier did not return after context cancellation")
}
select {
case <-service.kzgChan:
t.Fatal("verificationSet was sent to kzgChan despite canceled context")
default:
}
}
func createValidTestDataColumns(t *testing.T, count int) []blocks.RODataColumn {
_, roSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, count)
if len(roSidecars) >= count {

View File

@@ -77,8 +77,13 @@ func SendBeaconBlocksByRangeRequest(
}
defer closeStream(stream, log)
// Cap the slice capacity to MaxRequestBlock to prevent panic from invalid Count values.
// This guards against upstream bugs that may produce astronomically large Count values
// (e.g., due to unsigned integer underflow).
sliceCap := min(req.Count, params.MaxRequestBlock(slots.ToEpoch(tor.CurrentSlot())))
// Augment the block processing function if a non-nil block processor is provided.
blocks := make([]interfaces.ReadOnlySignedBeaconBlock, 0, req.Count)
blocks := make([]interfaces.ReadOnlySignedBeaconBlock, 0, sliceCap)
process := func(blk interfaces.ReadOnlySignedBeaconBlock) error {
blocks = append(blocks, blk)
if blockProcessor != nil {

View File
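The cap added above means a corrupted or underflowed `req.Count` can no longer drive a giant preallocation: the capacity hint is clamped to the protocol maximum before `make`. A minimal sketch of the guard:

```go
package main

import "fmt"

// cappedPrealloc clamps a request-supplied count to a protocol maximum
// before using it as a slice capacity hint, so an underflowed count
// (e.g. ^uint64(0)) cannot trigger an enormous allocation.
func cappedPrealloc(count, maxRequest uint64) []int {
	return make([]int, 0, min(count, maxRequest))
}

func main() {
	blocks := cappedPrealloc(^uint64(0), 1024) // underflowed count
	fmt.Println(cap(blocks))                   // 1024
}
```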

@@ -12,6 +12,7 @@ import (
mock "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/kv"
dbTest "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
testingDB "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
@@ -934,6 +935,7 @@ func TestStatusRPCRequest_BadPeerHandshake(t *testing.T) {
r := &Service{
cfg: &config{
p2p: p1,
beaconDB: dbTest.SetupDB(t),
chain: chain,
stateNotifier: chain.StateNotifier(),
initialSync: &mockSync.Sync{IsSyncing: false},

View File

@@ -17,6 +17,7 @@ import (
blockfeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/block"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/operation"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
@@ -33,9 +34,11 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync/backfill/coverage"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v7/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v7/crypto/rand"
"github.com/OffchainLabs/prysm/v7/runtime"
@@ -165,7 +168,6 @@ type Service struct {
syncContributionBitsOverlapLock sync.RWMutex
syncContributionBitsOverlapCache *lru.Cache
signatureChan chan *signatureVerifier
kzgChan chan *kzgVerifier
clockWaiter startup.ClockWaiter
initialSyncComplete chan struct{}
verifierWaiter *verification.InitializerWaiter
@@ -206,10 +208,7 @@ func NewService(ctx context.Context, opts ...Option) *Service {
}
// Initialize signature channel with configured limit
r.signatureChan = make(chan *signatureVerifier, r.cfg.batchVerifierLimit)
// Initialize KZG channel with fixed buffer size of 100.
// This buffer size is designed to handle burst traffic of data column gossip messages:
// - Data columns arrive less frequently than attestations (default batchVerifierLimit=1000)
r.kzgChan = make(chan *kzgVerifier, 100)
// Correctly remove it from our seen pending block map.
// The eviction method always assumes that the mutex is held.
r.slotToPendingBlocks.OnEvicted(func(s string, i any) {
@@ -262,7 +261,6 @@ func (s *Service) Start() {
s.newColumnsVerifier = newDataColumnsVerifierFromInitializer(v)
go s.verifierRoutine()
go s.kzgVerifierRoutine()
go s.startDiscoveryAndSubscriptions()
go s.processDataColumnLogs()
@@ -275,11 +273,6 @@ func (s *Service) Start() {
s.processPendingBlocksQueue()
s.maintainPeerStatuses()
if params.FuluEnabled() {
s.maintainCustodyInfo()
}
s.resyncIfBehind()
// Update sync metrics.
@@ -287,6 +280,15 @@ func (s *Service) Start() {
// Prune data column cache periodically on finalization.
async.RunEvery(s.ctx, 30*time.Second, s.pruneDataColumnCache)
if !params.FuluEnabled() {
return
}
if err := s.maintainCustodyInfo(); err != nil {
log.WithError(err).Error("Failed to maintain custody info")
}
}
// Stop the regular sync service.
@@ -452,6 +454,89 @@ func (s *Service) waitForInitialSync(ctx context.Context) error {
}
}
// updateCustodyInfoInDB updates the custody information in the database.
// It returns the earliest available slot and the (potentially updated) custody group count.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSupernode := flags.Get().Supernode
isSemiSupernode := flags.Get().SemiSupernode
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Persist whether the node is now subscribed to all data subnets, and retrieve
// whether it previously was.
wasSupernode, err := s.cfg.beaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
if err != nil {
return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
}
// Compute the target custody group count based on current flag configuration.
targetCustodyGroupCount := custodyRequirement
// Supernode: custody all groups (either currently set or previously enabled)
if isSupernode {
targetCustodyGroupCount = cfg.NumberOfCustodyGroups
}
// Semi-supernode: custody minimum needed for reconstruction, or custody requirement if higher
if isSemiSupernode {
semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
if err != nil {
return 0, 0, errors.Wrap(err, "minimum custody group count")
}
targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
}
// Safely compute the fulu fork slot.
fuluForkSlot, err := fuluForkSlot()
if err != nil {
return 0, 0, errors.Wrap(err, "fulu fork slot")
}
// If slot is before the fulu fork slot, then use the earliest stored slot as the reference slot.
if slot < fuluForkSlot {
slot, err = s.cfg.beaconDB.EarliestSlot(s.ctx)
if err != nil {
return 0, 0, errors.Wrap(err, "earliest slot")
}
}
earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.beaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}
if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,
"target": cfg.NumberOfCustodyGroups,
}).Info("Supernode mode enabled. Will custody all data columns going forward.")
}
if wasSupernode && !isSupernode {
log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
}
return earliestAvailableSlot, actualCustodyGroupCount, nil
}
func fuluForkSlot() (primitives.Slot, error) {
cfg := params.BeaconConfig()
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
if err != nil {
return 0, errors.Wrap(err, "epoch start")
}
return forkFuluSlot, nil
}
// Checker defines a struct which can verify whether a node is currently
// synchronizing a chain with the rest of peers in the network.
type Checker interface {

View File
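The target custody group count in updateCustodyInfoInDB boils down to three tiers. A sketch, assuming (as the tests below assert for this config) that the minimum needed for reconstruction is half the custody groups:

```go
package main

import "fmt"

// targetCustodyGroups mirrors the tiered target in updateCustodyInfoInDB:
// supernodes custody every group; semi-supernodes custody at least the
// reconstruction minimum (assumed half the groups here); everyone custodies
// at least the base requirement.
func targetCustodyGroups(isSupernode, isSemiSupernode bool, requirement, numGroups uint64) uint64 {
	target := requirement
	if isSupernode {
		target = numGroups
	}
	if isSemiSupernode {
		target = max(requirement, numGroups/2)
	}
	return target
}

func main() {
	fmt.Println(targetCustodyGroups(false, false, 4, 64)) // 4: default node
	fmt.Println(targetCustodyGroups(true, false, 4, 64))  // 64: supernode
	fmt.Println(targetCustodyGroups(false, true, 4, 64))  // 32: semi-supernode
}
```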

@@ -16,6 +16,9 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v7/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
@@ -440,3 +443,224 @@ func TestService_Stop_ConcurrentGoodbyeMessages(t *testing.T) {
require.Equal(t, false, util.WaitTimeout(&wg, 2*time.Second))
}
func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
t.Run("CGC increases before fulu", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
service := Service{cfg: &config{beaconDB: beaconDB}}
err = beaconDB.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("CGC increases after fulu", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
service := Service{cfg: &config{beaconDB: beaconDB}}
err = beaconDB.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("Supernode downgrade prevented", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
service := Service{cfg: &config{beaconDB: beaconDB}}
err = beaconDB.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// Try to downgrade by removing flag
gFlags.Supernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should still be supernode
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
})
t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
service := Service{cfg: &config{beaconDB: beaconDB}}
err = beaconDB.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 32
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 32 groups
// Try to downgrade by removing flag
gFlags.SemiSupernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// UpdateCustodyInfo should prevent downgrade; custody count should remain at 32
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, semiSupernodeCustody, actualCgc) // Still 32 due to downgrade prevention by UpdateCustodyInfo
})
t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
service := Service{cfg: &config{beaconDB: beaconDB}}
err = beaconDB.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Start with semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 32
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 32 groups
// Upgrade to full supernode
gFlags.SemiSupernode = false
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should upgrade to full supernode
upgradeSlot := slot + 2
actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
require.NoError(t, err)
require.Equal(t, upgradeSlot, actualEas) // Earliest slot updates when upgrading
require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to all 64 groups
})
t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
service := Service{cfg: &config{beaconDB: beaconDB}}
err = beaconDB.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Mocking a high custody requirement (simulating many validators) would require
// overriding the custody requirement calculation. The high case is the one where
// custodyRequirement exceeds the semi-supernode minimum (32 here).
// Since custodyRequirement in this test config is 4, we can't test the high case here;
// that would require a different test setup with actual validators.
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 32
// With low validator requirements (4), should use semi-supernode minimum (32)
require.Equal(t, semiSupernodeCustody, actualCgc)
})
}

View File

@@ -144,12 +144,9 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
 	}
 	// [REJECT] The sidecar's column data is valid as verified by `verify_data_column_sidecar_kzg_proofs(sidecar)`.
-	validationResult, err := s.validateWithKzgBatchVerifier(ctx, roDataColumns)
-	if validationResult != pubsub.ValidationAccept {
-		return validationResult, err
+	if err := verifier.SidecarKzgProofVerified(); err != nil {
+		return pubsub.ValidationReject, err
 	}
-	// Mark KZG verification as satisfied since we did it via batch verifier
-	verifier.SatisfyRequirement(verification.RequireSidecarKzgProofVerified)
 	// [IGNORE] The sidecar is the first sidecar for the tuple `(block_header.slot, block_header.proposer_index, sidecar.index)`
 	// with valid header signature, sidecar inclusion proof, and kzg proof.


@@ -71,10 +71,7 @@ func TestValidateDataColumn(t *testing.T) {
 		ctx:                 ctx,
 		newColumnsVerifier:  newDataColumnsVerifier,
 		seenDataColumnCache: newSlotAwareCache(seenDataColumnSize),
-		kzgChan:             make(chan *kzgVerifier, 100),
 	}
-	// Start the KZG verifier routine for batch verification
-	go service.kzgVerifierRoutine()
 	// Encode a `beaconBlock` message instead of expected.
 	buf := new(bytes.Buffer)


@@ -588,6 +588,12 @@ func fcReturnsTargetRoot(root [32]byte) func([32]byte, primitives.Epoch) ([32]by
 	}
 }
+func fcReturnsDependentRoot() func([32]byte, primitives.Epoch) ([32]byte, error) {
+	return func(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
+		return root, nil
+	}
+}
 type mockSignatureCache struct {
 	svCalledForSig map[signatureData]bool
 	svcb           func(sig signatureData) (bool, error)


@@ -1,7 +1,6 @@
 package verification
 import (
-	"bytes"
 	"context"
 	"crypto/sha256"
 	"fmt"
@@ -19,6 +18,7 @@ import (
 	"github.com/OffchainLabs/prysm/v7/runtime/logging"
 	"github.com/OffchainLabs/prysm/v7/time/slots"
 	"github.com/pkg/errors"
+	"github.com/sirupsen/logrus"
 )
var (
@@ -293,55 +293,57 @@ func (dv *RODataColumnsVerifier) ValidProposerSignature(ctx context.Context) (er
 // The returned state is guaranteed to be at the same epoch as the data column's epoch, and have the same randao mix and active
 // validator indices as the data column's parent state advanced to the data column's slot.
 func (dv *RODataColumnsVerifier) getVerifyingState(ctx context.Context, dataColumn blocks.RODataColumn) (state.ReadOnlyBeaconState, error) {
+	dataColumnSlot := dataColumn.Slot()
+	dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
+	if dataColumnEpoch == 0 {
+		return dv.hsp.HeadStateReadOnly(ctx)
+	}
+	parentRoot := dataColumn.ParentRoot()
+	dcDependentRoot, err := dv.fc.DependentRootForEpoch(parentRoot, dataColumnEpoch-1)
+	if err != nil {
+		return nil, err
+	}
 	headRoot, err := dv.hsp.HeadRoot(ctx)
 	if err != nil {
 		return nil, err
 	}
-	parentRoot := dataColumn.ParentRoot()
-	dataColumnSlot := dataColumn.Slot()
-	dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
-	headSlot := dv.hsp.HeadSlot()
-	headEpoch := slots.ToEpoch(headSlot)
-	// Use head if it's the parent
-	if bytes.Equal(parentRoot[:], headRoot) {
-		// If they are in the same epoch, then we can return the head state directly
-		if dataColumnEpoch == headEpoch {
-			return dv.hsp.HeadStateReadOnly(ctx)
-		}
-		// Otherwise, we need to process the head state to the data column's slot
-		headState, err := dv.hsp.HeadState(ctx)
-		if err != nil {
-			return nil, err
-		}
-		return transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, dataColumnSlot)
-	}
-	// If head and data column are in the same epoch and head is compatible with the parent's depdendent root, then use head
-	if dataColumnEpoch == headEpoch {
-		headDependent, err := dv.fc.DependentRootForEpoch(bytesutil.ToBytes32(headRoot), dataColumnEpoch)
-		if err != nil {
-			return nil, err
-		}
-		parentDependent, err := dv.fc.DependentRootForEpoch(parentRoot, dataColumnEpoch)
-		if err != nil {
-			return nil, err
-		}
-		if bytes.Equal(headDependent[:], parentDependent[:]) {
-			return dv.hsp.HeadStateReadOnly(ctx)
-		}
-	}
-	// Otherwise retrieve the parent state and advance it to the data column's slot
-	parentState, err := dv.sr.StateByRoot(ctx, parentRoot)
+	headDependentRoot, err := dv.fc.DependentRootForEpoch(bytesutil.ToBytes32(headRoot), dataColumnEpoch-1)
 	if err != nil {
 		return nil, err
 	}
-	parentEpoch := slots.ToEpoch(parentState.Slot())
-	if dataColumnEpoch == parentEpoch {
-		return parentState, nil
+	if dcDependentRoot == headDependentRoot {
+		headSlot := dv.hsp.HeadSlot()
+		headEpoch := slots.ToEpoch(headSlot)
+		if headEpoch == dataColumnEpoch || headEpoch == dataColumnEpoch-1 {
+			return dv.hsp.HeadStateReadOnly(ctx)
+		}
+		if headEpoch+1 < dataColumnEpoch {
+			headState, err := dv.hsp.HeadState(ctx)
+			if err != nil {
+				return nil, err
+			}
+			return transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, dataColumnSlot)
+		}
 	}
-	return transition.ProcessSlotsUsingNextSlotCache(ctx, parentState, parentRoot[:], dataColumnSlot)
+	logrus.WithFields(logrus.Fields{
+		"slot":       dataColumnSlot,
+		"parentRoot": fmt.Sprintf("%#x", parentRoot),
+		"headRoot":   fmt.Sprintf("%#x", headRoot),
+	}).Debug("Replaying state for data column verification")
+	targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
+	if err != nil {
+		return nil, err
+	}
+	targetState, err := dv.sr.StateByRoot(ctx, targetRoot)
+	if err != nil {
+		return nil, err
+	}
+	targetEpoch := slots.ToEpoch(targetState.Slot())
+	if targetEpoch == dataColumnEpoch || targetEpoch == dataColumnEpoch-1 {
+		return targetState, nil
+	}
+	return transition.ProcessSlotsUsingNextSlotCache(ctx, targetState, parentRoot[:], dataColumnSlot)
 }
func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) (err error) {
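
In short, once the parent's and head's dependent roots match, the new code picks a state on epoch distance alone. A compact restatement of that branch (illustrative only, mirroring the added lines above):

package verification

import primitives "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"

// headStateChoice restates the epoch test from getVerifyingState: with equal
// dependent roots, the head state is usable as-is when it is in the column's
// epoch or the one just before; a head more than one epoch behind must be
// advanced with ProcessSlotsUsingNextSlotCache. Otherwise the caller falls
// through to the target state. columnEpoch > 0 is guaranteed by the early
// return for epoch 0.
func headStateChoice(headEpoch, columnEpoch primitives.Epoch) (useDirect, advance bool) {
	useDirect = headEpoch == columnEpoch || headEpoch == columnEpoch-1
	advance = headEpoch+1 < columnEpoch
	return useDirect, advance
}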


@@ -1,6 +1,7 @@
 package verification
 import (
 	"context"
+	"reflect"
 	"testing"
 	"time"
@@ -9,6 +10,7 @@ import (
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
 	forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
+	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
 	"github.com/OffchainLabs/prysm/v7/config/params"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -281,7 +283,7 @@
 func TestValidProposerSignature(t *testing.T) {
 	const (
-		columnSlot = 0
+		columnSlot = 97
 		blobCount  = 1
 	)
@@ -294,59 +296,83 @@
 	// The signature data does not depend on the data column itself, so we can use the first one.
 	expectedSignatureData := columnToSignatureData(firstColumn)
+	// Create a proper Fulu state for verification.
+	// We need enough validators to cover the proposer index.
+	numValidators := max(uint64(firstColumn.ProposerIndex()+1), 64)
+	fuluState, _ := util.DeterministicGenesisStateFulu(t, numValidators)
+	// Head state provider that returns the fuluState via HeadStateReadOnly path.
+	headStateWithState := &mockHeadStateProvider{
+		headRoot:          parentRoot[:],
+		headSlot:          columnSlot,
+		headStateReadOnly: fuluState,
+	}
+	// Head state provider that will fail (headStateReadOnly is nil).
+	headStateNotFound := &mockHeadStateProvider{
+		headRoot: parentRoot[:],
+		headSlot: columnSlot,
+	}
 	testCases := []struct {
-		isError         bool
-		vscbShouldError bool
-		svcbReturn      bool
-		stateByRooter   StateByRooter
-		vscbError       error
-		svcbError       error
-		name            string
+		name              string
+		isError           bool
+		vscbShouldError   bool
+		svcbReturn        bool
+		stateByRooter     StateByRooter
+		headStateProvider *mockHeadStateProvider
+		vscbError         error
+		svcbError         error
 	}{
 		{
-			name:            "cache hit - success",
-			svcbReturn:      true,
-			svcbError:       nil,
-			vscbShouldError: true,
-			vscbError:       nil,
-			stateByRooter:   &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
-			isError:         false,
+			name:              "cache hit - success",
+			svcbReturn:        true,
+			svcbError:         nil,
+			vscbShouldError:   true,
+			vscbError:         nil,
+			stateByRooter:     &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
+			headStateProvider: headStateWithState,
+			isError:           false,
 		},
 		{
-			name:            "cache hit - error",
-			svcbReturn:      true,
-			svcbError:       errors.New("derp"),
-			vscbShouldError: true,
-			vscbError:       nil,
-			stateByRooter:   &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
-			isError:         true,
+			name:              "cache hit - error",
+			svcbReturn:        true,
+			svcbError:         errors.New("derp"),
+			vscbShouldError:   true,
+			vscbError:         nil,
+			stateByRooter:     &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
+			headStateProvider: headStateWithState,
+			isError:           true,
 		},
 		{
-			name:            "cache miss - success",
-			svcbReturn:      false,
-			svcbError:       nil,
-			vscbShouldError: false,
-			vscbError:       nil,
-			stateByRooter:   sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
-			isError:         false,
+			name:              "cache miss - success",
+			svcbReturn:        false,
+			svcbError:         nil,
+			vscbShouldError:   false,
+			vscbError:         nil,
+			stateByRooter:     sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
+			headStateProvider: headStateWithState,
+			isError:           false,
 		},
 		{
-			name:            "cache miss - state not found",
-			svcbReturn:      false,
-			svcbError:       nil,
-			vscbShouldError: false,
-			vscbError:       nil,
-			stateByRooter:   sbrNotFound(t, expectedSignatureData.Parent),
-			isError:         true,
+			name:              "cache miss - state not found",
+			svcbReturn:        false,
+			svcbError:         nil,
+			vscbShouldError:   false,
+			vscbError:         nil,
+			stateByRooter:     sbrNotFound(t, expectedSignatureData.Parent),
+			headStateProvider: headStateNotFound,
+			isError:           true,
 		},
 		{
-			name:            "cache miss - signature failure",
-			svcbReturn:      false,
-			svcbError:       nil,
-			vscbShouldError: false,
-			vscbError:       errors.New("signature, not so good!"),
-			stateByRooter:   sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
-			isError:         true,
+			name:              "cache miss - signature failure",
+			svcbReturn:        false,
+			svcbError:         nil,
+			vscbShouldError:   false,
+			vscbError:         errors.New("signature, not so good!"),
+			stateByRooter:     sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
+			headStateProvider: headStateWithState,
+			isError:           true,
 		},
 	}
@@ -377,9 +403,10 @@
 			shared: &sharedResources{
 				sc: signatureCache,
 				sr: tc.stateByRooter,
-				hsp: &mockHeadStateProvider{},
+				hsp: tc.headStateProvider,
 				fc: &mockForkchoicer{
-					TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
+					DependentRootForEpochCB: fcReturnsDependentRoot(),
+					TargetRootForEpochCB:    fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
 				},
 			},
 		}
@@ -405,7 +432,7 @@
 func TestDataColumnsSidecarParentSeen(t *testing.T) {
 	const (
-		columnSlot = 0
+		columnSlot = 97
 		blobCount  = 1
 	)
@@ -509,7 +536,7 @@
 func TestDataColumnsSidecarParentValid(t *testing.T) {
 	for _, tc := range testCases {
 		t.Run(tc.name, func(t *testing.T) {
 			const (
-				columnSlot = 0
+				columnSlot = 97
 				blobCount  = 1
 			)
@@ -630,7 +657,7 @@
 func TestDataColumnsSidecarDescendsFromFinalized(t *testing.T) {
 	for _, tc := range testCases {
 		t.Run(tc.name, func(t *testing.T) {
 			const (
-				columnSlot = 0
+				columnSlot = 97
 				blobCount  = 1
 			)
@@ -693,7 +720,7 @@
 func TestDataColumnsSidecarInclusionProven(t *testing.T) {
 	for _, tc := range testCases {
 		t.Run(tc.name, func(t *testing.T) {
 			const (
-				columnSlot = 0
+				columnSlot = 97
 				blobCount  = 1
 			)
@@ -748,7 +775,7 @@
 func TestDataColumnsSidecarKzgProofVerified(t *testing.T) {
 	for _, tc := range testCases {
 		t.Run(tc.name, func(t *testing.T) {
 			const (
-				columnSlot = 0
+				columnSlot = 97
 				blobCount  = 1
 			)
@@ -924,3 +951,135 @@
 	require.NoError(t, err)
 }
+func TestGetVerifyingStateEdgeCases(t *testing.T) {
+	const (
+		columnSlot = 97 // epoch 3
+		blobCount  = 1
+	)
+	parentRoot := [fieldparams.RootLength]byte{}
+	columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
+	// Create a proper Fulu state for verification.
+	numValidators := max(uint64(columns[0].ProposerIndex()+1), 64)
+	fuluState, _ := util.DeterministicGenesisStateFulu(t, numValidators)
+	t.Run("different dependent roots - uses StateByRoot path", func(t *testing.T) {
+		// Parent and head are on different forks with different dependent roots.
+		// This forces the code to use TargetRootForEpoch -> StateByRoot path.
+		signatureCache := &mockSignatureCache{
+			svcb: func(signatureData signatureData) (bool, error) {
+				return false, nil // Cache miss
+			},
+			vscb: func(signatureData signatureData, _ validatorAtIndexer) (err error) {
+				return nil // Signature valid
+			},
+		}
+		// StateByRoot will be called because dependent roots differ
+		stateByRootCalled := false
+		stateByRooter := &mockStateByRooter{
+			sbr: func(_ context.Context, root [32]byte) (state.BeaconState, error) {
+				stateByRootCalled = true
+				return fuluState, nil
+			},
+		}
+		initializer := Initializer{
+			shared: &sharedResources{
+				sc: signatureCache,
+				sr: stateByRooter,
+				hsp: &mockHeadStateProvider{
+					headRoot: []byte{0xff}, // Different from parentRoot
+					headSlot: columnSlot,
+				},
+				fc: &mockForkchoicer{
+					// Return different roots for parent vs head to simulate different forks
+					DependentRootForEpochCB: func(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
+						return root, nil // Returns input, so parent [0...] != head [0xff...]
+					},
+					TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
+				},
+			},
+		}
+		verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
+		err := verifier.ValidProposerSignature(t.Context())
+		require.NoError(t, err)
+		require.Equal(t, true, stateByRootCalled, "StateByRoot should be called when dependent roots differ")
+	})
+	t.Run("same dependent root head far ahead - uses head state with ProcessSlots", func(t *testing.T) {
+		// Parent is ancestor of head on same chain, but head is in epoch 1 while column is in epoch 3.
+		// headEpoch (1) + 1 < dataColumnEpoch (3), so ProcessSlots is called on head state.
+		signatureCache := &mockSignatureCache{
+			svcb: func(signatureData signatureData) (bool, error) {
+				return false, nil // Cache miss
+			},
+			vscb: func(signatureData signatureData, _ validatorAtIndexer) (err error) {
+				return nil // Signature valid
+			},
+		}
+		headStateCalled := false
+		initializer := Initializer{
+			shared: &sharedResources{
+				sc: signatureCache,
+				sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}, // Should not be called
+				hsp: &mockHeadStateProvider{
+					headRoot:          parentRoot[:],    // Same as parent
+					headSlot:          32,               // Epoch 1
+					headState:         fuluState.Copy(), // HeadState (not ReadOnly) for ProcessSlots
+					headStateReadOnly: nil,              // Should not use ReadOnly path
+				},
+				fc: &mockForkchoicer{
+					// Return same root for both to simulate same chain
+					DependentRootForEpochCB: func(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
+						return [32]byte{0xaa}, nil // Same for all inputs
+					},
+					TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
+				},
+			},
+		}
+		// Wrap to detect HeadState call
+		originalHsp := initializer.shared.hsp.(*mockHeadStateProvider)
+		wrappedHsp := &mockHeadStateProvider{
+			headRoot:  originalHsp.headRoot,
+			headSlot:  originalHsp.headSlot,
+			headState: originalHsp.headState,
+		}
+		initializer.shared.hsp = &headStateCallTracker{
+			mockHeadStateProvider: wrappedHsp,
+			headStateCalled:       &headStateCalled,
+		}
+		verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
+		err := verifier.ValidProposerSignature(t.Context())
+		require.NoError(t, err)
+		require.Equal(t, true, headStateCalled, "HeadState should be called when head is far ahead")
+	})
+}
+// headStateCallTracker wraps mockHeadStateProvider to track HeadState calls.
+type headStateCallTracker struct {
+	*mockHeadStateProvider
+	headStateCalled *bool
+}
+func (h *headStateCallTracker) HeadState(ctx context.Context) (state.BeaconState, error) {
+	*h.headStateCalled = true
+	return h.mockHeadStateProvider.HeadState(ctx)
+}
+func (h *headStateCallTracker) HeadRoot(ctx context.Context) ([]byte, error) {
+	return h.mockHeadStateProvider.HeadRoot(ctx)
+}
+func (h *headStateCallTracker) HeadSlot() primitives.Slot {
+	return h.mockHeadStateProvider.HeadSlot()
+}
+func (h *headStateCallTracker) HeadStateReadOnly(ctx context.Context) (state.ReadOnlyBeaconState, error) {
+	return h.mockHeadStateProvider.HeadStateReadOnly(ctx)
+}


@@ -0,0 +1,3 @@
### Ignored
- Small touch-ups on state diff code.


@@ -0,0 +1,2 @@
### Fixed
- Fixed a typo: AggregrateDueBPS -> AggregateDueBPS.


@@ -0,0 +1,3 @@
### Fixed
- Fix `prysmctl testnet generate-genesis` to use the timestamp from `--geth-genesis-json-in` when `--genesis-time` is not explicitly provided.


@@ -0,0 +1,3 @@
### Ignored
- Removed deprecated gometalinter references from Bazel configuration.


@@ -0,0 +1,3 @@
### Fixed
- Prevent authentication bypass on direct `/v2/validator/*` endpoints by enforcing auth checks for non-public routes.
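
A sketch of the route matching this entry implies, with illustrative names only (not Prysm's actual middleware): the direct `/v2/validator/*` paths must hit the same token check as their `/api`-prefixed twins, so stripping the prefix cannot skip auth.

package rpc

import (
	"net/http"
	"strings"
)

// isProtected treats the direct /v2/validator/* routes exactly like the
// /api/v2/validator/* routes, closing the prefix-stripping bypass.
func isProtected(path string) bool {
	return strings.HasPrefix(path, "/api/v2/validator/") || strings.HasPrefix(path, "/v2/validator/")
}

func authMiddleware(next http.Handler, tokenValid func(*http.Request) bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isProtected(r.URL.Path) && !tokenValid(r) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}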


@@ -0,0 +1,3 @@
### Fixed
- Stop `SlotIntervalTicker` goroutine leaks. [#16241](https://github.com/OffchainLabs/prysm/pull/16241)


@@ -0,0 +1,3 @@
### Changed
- Post-Electra, attestation data is now requested once per slot and cached for subsequent requests.
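
Since Electra (EIP-7549) the committee index is fixed to zero in `AttestationData`, so one response per slot serves every duty in that slot. A rough sketch of such a slot-keyed cache, with hypothetical names (not Prysm's actual types):

package validator

import (
	"sync"

	primitives "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)

// attDataCache keeps the single post-Electra attestation data for a slot;
// the first caller fetches from the beacon node, later callers reuse it.
type attDataCache struct {
	mu   sync.Mutex
	slot primitives.Slot
	data *ethpb.AttestationData
}

func (c *attDataCache) get(slot primitives.Slot, fetch func() (*ethpb.AttestationData, error)) (*ethpb.AttestationData, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.data != nil && c.slot == slot {
		return c.data, nil // subsequent request in the same slot: serve cached copy
	}
	d, err := fetch() // first request in the slot: one upstream call
	if err != nil {
		return nil, err
	}
	c.slot, c.data = slot, d
	return d, nil
}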


@@ -0,0 +1,11 @@
### Added
- Added basic Fulu fork transition support for mainnet and minimal e2e tests (the multi scenario is not included)
### Changed
- Updated go-ethereum to 1.16.7
### Removed
- Removed github.com/MariusVanDerWijden/FuzzyVM and github.com/MariusVanDerWijden/tx-fuzz, which were only used for transaction fuzzing in e2e, due to lack of support past 1.16.7


@@ -0,0 +1,3 @@
### Ignored
- Added an optimistic check to the e2e evaluator on synced head, since syncing may be slower post-Fulu.

changelog/manu-eas.md (new file, 3 lines)

@@ -0,0 +1,3 @@
### Fixed
- When adding the `--[semi-]supernode` flag, update the earliest available slot accordingly.


@@ -0,0 +1,3 @@
### Removed
- Batching of KZG verification for data column sidecars arriving via gossip


@@ -0,0 +1,2 @@
### Added
- `commitment_count_in_gossip_processed_blocks` gauge metric to track the number of blob KZG commitments in processed beacon blocks.
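
Prysm's metrics are Prometheus-based, so registering such a gauge presumably looks roughly like the following (the helper name and call site are illustrative, not the actual code):

package sync

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// commitmentCount tracks how many blob KZG commitments the most recently
// processed gossip block carried.
var commitmentCount = promauto.NewGauge(prometheus.GaugeOpts{
	Name: "commitment_count_in_gossip_processed_blocks",
	Help: "Number of blob KZG commitments in processed beacon blocks.",
})

// recordCommitmentCount would be invoked once per block processed from gossip.
func recordCommitmentCount(n int) {
	commitmentCount.Set(float64(n))
}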


@@ -0,0 +1,3 @@
### Removed
- Remove unused `delay` parameter from `fetchOriginDataColumnSidecars` function.


@@ -0,0 +1,3 @@
### Fixed
- Fixed broken and outdated links by pointing them at their current targets


@@ -0,0 +1,2 @@
### Changed
- Use dependent root and target root to verify data column proposer index.


@@ -0,0 +1,3 @@
### Added
- Migrate states to cold storage with the hdiff feature.


@@ -0,0 +1,2 @@
### Added
- Update spectests to v1.7.0-alpha-1.


@@ -0,0 +1,2 @@
### Added
- Add feature flag to use hashtree instead of gohashtree.


@@ -0,0 +1,3 @@
### Changed
- Added some defensive checks to prevent overflows in block batch requests.


@@ -0,0 +1,3 @@
### Ignored
- Gazelle update-repos is now enforced via ./hack/check_gazelle.sh


@@ -0,0 +1,3 @@
### Security
- Update go-ethereum to v1.16.8


@@ -0,0 +1,3 @@
### Changed
- Avoid unnecessary heap allocation when calling MixInLength

changelog/satushh-log.md (new file, 3 lines)

@@ -0,0 +1,3 @@
### Changed
- Log commitments instead of indices in missingCommitError


@@ -0,0 +1,2 @@
### Added
- Add pending payments processing and quorum threshold, plus spectests and state hooks (rotate/append)
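
The quorum threshold is a fixed fraction of per-slot attesting weight, set by the `BUILDER_PAYMENT_THRESHOLD_NUMERATOR`/`DENOMINATOR` values added to the config further down. A rough sketch of the arithmetic only; the authoritative formula is `process_builder_pending_payments` in the consensus specs:

package gloas

// paymentQuorumThreshold sketches the weight a pending builder payment must
// accumulate: numerator/denominator of the active balance attributable to a
// single slot. Names and exact rounding are assumptions, not the spec text.
func paymentQuorumThreshold(totalActiveBalance, slotsPerEpoch, numerator, denominator uint64) uint64 {
	perSlotBalance := totalActiveBalance / slotsPerEpoch
	return perSlotBalance * numerator / denominator
}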


@@ -0,0 +1,3 @@
### Added
- Add Gloas latest execution bid processing


@@ -0,0 +1,3 @@
### Added
- Added shell completion support for `beacon-chain` and `validator` CLI tools.


@@ -3,6 +3,8 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
 go_library(
     name = "go_default_library",
     srcs = [
+        "completion.go",
+        "completion_scripts.go",
         "config.go",
         "defaults.go",
         "flags.go",
@@ -28,6 +30,7 @@ go_test(
     name = "go_default_test",
     size = "small",
     srcs = [
+        "completion_test.go",
         "config_test.go",
         "flags_test.go",
         "helpers_test.go",


@@ -271,9 +271,11 @@ func main() {
 		Commands: []*cli.Command{
 			dbcommands.Commands,
 			jwtcommands.Commands,
+			cmd.CompletionCommand("beacon-chain"),
 		},
-		Flags:  appFlags,
-		Before: before,
+		Flags:                appFlags,
+		Before:               before,
+		EnableBashCompletion: true,
 	}
 	defer func() {

cmd/completion.go (new file, 63 lines)

@@ -0,0 +1,63 @@
package cmd
import (
"fmt"
"github.com/urfave/cli/v2"
)
// CompletionCommand returns the completion command for the given binary name.
// The binaryName parameter should be "beacon-chain" or "validator".
func CompletionCommand(binaryName string) *cli.Command {
return &cli.Command{
Name: "completion",
Category: "completion",
Usage: "Generate shell completion scripts",
Description: fmt.Sprintf(`Generate shell completion scripts for bash, zsh, or fish.
To load completions:
Bash:
$ source <(%[1]s completion bash)
# To load completions for each session, execute once:
$ %[1]s completion bash > /etc/bash_completion.d/%[1]s
Zsh:
# To load completions for each session, execute once:
$ %[1]s completion zsh > "${fpath[1]}/_%[1]s"
# You may need to start a new shell for completions to take effect.
Fish:
$ %[1]s completion fish | source
# To load completions for each session, execute once:
$ %[1]s completion fish > ~/.config/fish/completions/%[1]s.fish
`, binaryName),
Subcommands: []*cli.Command{
{
Name: "bash",
Usage: "Generate bash completion script",
Action: func(_ *cli.Context) error {
fmt.Println(bashCompletionScript(binaryName))
return nil
},
},
{
Name: "zsh",
Usage: "Generate zsh completion script",
Action: func(_ *cli.Context) error {
fmt.Println(zshCompletionScript(binaryName))
return nil
},
},
{
Name: "fish",
Usage: "Generate fish completion script",
Action: func(_ *cli.Context) error {
fmt.Println(fishCompletionScript(binaryName))
return nil
},
},
},
}
}

cmd/completion_scripts.go (new file, 99 lines)

@@ -0,0 +1,99 @@
package cmd
import (
"fmt"
"strings"
)
// bashCompletionScript returns the bash completion script for the given binary.
func bashCompletionScript(binaryName string) string {
// Convert hyphens to underscores for bash function names
funcName := strings.ReplaceAll(binaryName, "-", "_")
return fmt.Sprintf(`#!/bin/bash
_%[1]s_completions() {
local cur prev words cword opts
COMPREPLY=()
# Use bash-completion if available, otherwise set variables directly
if declare -F _init_completion >/dev/null 2>&1; then
_init_completion -n "=:" || return
else
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
words=("${COMP_WORDS[@]}")
cword=$COMP_CWORD
fi
# Build command array for completion - flag must be at the END
local -a requestComp
if [[ "$cur" == "-"* ]]; then
requestComp=("${COMP_WORDS[@]:0:COMP_CWORD}" "$cur" --generate-bash-completion)
else
requestComp=("${COMP_WORDS[@]:0:COMP_CWORD}" --generate-bash-completion)
fi
opts=$("${requestComp[@]}" 2>/dev/null)
COMPREPLY=($(compgen -W "${opts}" -- "${cur}"))
return 0
}
complete -o bashdefault -o default -o nospace -F _%[1]s_completions %[2]s
`, funcName, binaryName)
}
// zshCompletionScript returns the zsh completion script for the given binary.
func zshCompletionScript(binaryName string) string {
// Convert hyphens to underscores for zsh function names
funcName := strings.ReplaceAll(binaryName, "-", "_")
return fmt.Sprintf(`#compdef %[2]s
_%[1]s() {
local curcontext="$curcontext" ret=1
local -a completions
# Build command array with --generate-bash-completion at the END
local -a requestComp
if [[ "${words[CURRENT]}" == -* ]]; then
requestComp=(${words[1,CURRENT]} --generate-bash-completion)
else
requestComp=(${words[1,CURRENT-1]} --generate-bash-completion)
fi
completions=($("${requestComp[@]}" 2>/dev/null))
if [[ ${#completions[@]} -gt 0 ]]; then
_describe -t commands '%[2]s' completions && ret=0
fi
# Fallback to file completion
_files && ret=0
return ret
}
compdef _%[1]s %[2]s
`, funcName, binaryName)
}
// fishCompletionScript returns the fish completion script for the given binary.
func fishCompletionScript(binaryName string) string {
// Convert hyphens to underscores for fish function names
funcName := strings.ReplaceAll(binaryName, "-", "_")
return fmt.Sprintf(`# Fish completion for %[2]s
function __fish_%[1]s_complete
set -l args (commandline -opc)
set -l cur (commandline -ct)
# Build command with --generate-bash-completion at the END
if string match -q -- '-*' "$cur"
%[2]s $args $cur --generate-bash-completion 2>/dev/null
else
%[2]s $args --generate-bash-completion 2>/dev/null
end
end
complete -c %[2]s -f -a "(__fish_%[1]s_complete)"
`, funcName, binaryName)
}

cmd/completion_test.go (new file, 105 lines)

@@ -0,0 +1,105 @@
package cmd
import (
"strings"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/urfave/cli/v2"
)
func TestCompletionCommand(t *testing.T) {
t.Run("creates command with correct name", func(t *testing.T) {
cmd := CompletionCommand("beacon-chain")
require.Equal(t, "completion", cmd.Name)
})
t.Run("has three subcommands", func(t *testing.T) {
cmd := CompletionCommand("beacon-chain")
require.Equal(t, 3, len(cmd.Subcommands))
names := make([]string, len(cmd.Subcommands))
for i, sub := range cmd.Subcommands {
names[i] = sub.Name
}
assert.DeepEqual(t, []string{"bash", "zsh", "fish"}, names)
})
t.Run("description contains binary name", func(t *testing.T) {
cmd := CompletionCommand("validator")
assert.Equal(t, true, strings.Contains(cmd.Description, "validator"))
})
}
func TestBashCompletionScript(t *testing.T) {
script := bashCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(script, "beacon-chain"), "script should contain binary name")
assert.Equal(t, true, strings.Contains(script, "_beacon_chain_completions"), "script should contain function name with underscores")
assert.Equal(t, true, strings.Contains(script, "complete -o bashdefault"), "script should contain complete command")
assert.Equal(t, true, strings.Contains(script, "--generate-bash-completion"), "script should use generate-bash-completion flag")
}
func TestZshCompletionScript(t *testing.T) {
script := zshCompletionScript("validator")
assert.Equal(t, true, strings.Contains(script, "#compdef validator"), "script should contain compdef directive")
assert.Equal(t, true, strings.Contains(script, "_validator"), "script should contain function name")
assert.Equal(t, true, strings.Contains(script, "--generate-bash-completion"), "script should use generate-bash-completion flag")
}
func TestFishCompletionScript(t *testing.T) {
script := fishCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(script, "complete -c beacon-chain"), "script should contain complete command")
assert.Equal(t, true, strings.Contains(script, "__fish_beacon_chain_complete"), "script should contain function name with underscores")
assert.Equal(t, true, strings.Contains(script, "--generate-bash-completion"), "script should use generate-bash-completion flag")
}
func TestScriptFunctionNames(t *testing.T) {
// Test that hyphens are converted to underscores in function names
bashScript := bashCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(bashScript, "_beacon_chain_completions"))
assert.Equal(t, false, strings.Contains(bashScript, "_beacon-chain_completions"))
zshScript := zshCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(zshScript, "_beacon_chain"))
fishScript := fishCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(fishScript, "__fish_beacon_chain_complete"))
}
func TestCompletionSubcommandActions(t *testing.T) {
// Test that Action functions execute without errors
cmd := CompletionCommand("beacon-chain")
tests := []struct {
name string
subcommand string
}{
{"bash action executes", "bash"},
{"zsh action executes", "zsh"},
{"fish action executes", "fish"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var subCmd *cli.Command
for _, sub := range cmd.Subcommands {
if sub.Name == tt.subcommand {
subCmd = sub
break
}
}
require.NotNil(t, subCmd, "subcommand should exist")
require.NotNil(t, subCmd.Action, "subcommand should have an action")
// Action should not return an error; use a real cli.Context
app := &cli.App{}
ctx := cli.NewContext(app, nil, nil)
err := subCmd.Action(ctx)
require.NoError(t, err)
})
}
}


@@ -214,13 +214,6 @@ func setGlobalParams() error {
 func generateGenesis(ctx context.Context) (state.BeaconState, error) {
 	f := &generateGenesisStateFlags
-	if f.GenesisTime == 0 {
-		f.GenesisTime = uint64(time.Now().Unix())
-		log.Info("No genesis time specified, defaulting to now()")
-	}
-	log.Infof("Delaying genesis %v by %v seconds", f.GenesisTime, f.GenesisTimeDelay)
-	f.GenesisTime += f.GenesisTimeDelay
-	log.Infof("Genesis is now %v", f.GenesisTime)
 	v, err := version.FromString(f.ForkName)
 	if err != nil {
@@ -258,7 +251,28 @@ func generateGenesis(ctx context.Context) (state.BeaconState, error) {
 		if err := json.Unmarshal(gbytes, gen); err != nil {
 			return nil, err
 		}
-		// set timestamps for genesis and shanghai fork
+		// Use input file's timestamp (if `--genesis-time` is NOT explicitly provided)
+		if f.GenesisTime == 0 {
+			f.GenesisTime = gen.Timestamp
+			log.Infof("Using genesis time from input file: %d", f.GenesisTime)
+		}
+	}
+	// If still unset (no `--genesis-time` and no input file), default to `now()`
+	if f.GenesisTime == 0 {
+		f.GenesisTime = uint64(time.Now().Unix())
+		log.Info("No genesis time specified, defaulting to now()")
+	}
+	// Apply delay (if any) and expose used genesis time
+	if f.GenesisTimeDelay > 0 {
+		log.Infof("Delaying genesis %d by %d seconds", f.GenesisTime, f.GenesisTimeDelay)
+		f.GenesisTime += f.GenesisTimeDelay
+	}
+	log.Infof("Genesis time is %d", f.GenesisTime)
+	// Set the timestamps for genesis and forks
+	if f.GethGenesisJsonIn != "" {
 		gen.Timestamp = f.GenesisTime
 		genesis := time.Unix(int64(f.GenesisTime), 0)
 		gen.Config.ShanghaiTime = interop.GethShanghaiTime(genesis, params.BeaconConfig())
@@ -301,7 +315,7 @@ func generateGenesis(ctx context.Context) (state.BeaconState, error) {
 	if err != nil {
 		return nil, err
 	}
-	if err := os.WriteFile(f.GethGenesisJsonOut, gbytes, 0600); err != nil {
+	if err := os.WriteFile(f.GethGenesisJsonOut, gbytes, 0o600); err != nil {
 		return nil, errors.Wrapf(err, "failed to write %s", f.GethGenesisJsonOut)
 	}
 }


@@ -125,6 +125,75 @@ func Test_generateGenesis_BaseFeeValidation(t *testing.T) {
 	}
 }
-func writeFile(path string, data []byte) error {
-	return os.WriteFile(path, data, 0644)
+func Test_generateGenesis_TimestampHandling(t *testing.T) {
+	tests := []struct {
+		name             string
+		inputTimestamp   uint64 // timestamp in input genesis.json (0 = no input file)
+		genesisTime      uint64 // --genesis-time (0 = not set)
+		genesisTimeDelay uint64 // --genesis-time-delay
+		wantTimestamp    uint64
+	}{
+		{
+			name:           "uses input file timestamp when no --genesis-time",
+			inputTimestamp: 1700000000,
+			wantTimestamp:  1700000000,
+		},
+		{
+			name:           "explicit --genesis-time overrides input file",
+			inputTimestamp: 1700000000,
+			genesisTime:    1600000000,
+			wantTimestamp:  1600000000,
+		},
+		{
+			name:             "delay applied to input file timestamp",
+			inputTimestamp:   1700000000,
+			genesisTimeDelay: 100,
+			wantTimestamp:    1700000100,
+		},
+		{
+			name:             "delay applied to explicit --genesis-time",
+			inputTimestamp:   1700000000,
+			genesisTime:      1600000000,
+			genesisTimeDelay: 50,
+			wantTimestamp:    1600000050,
+		},
+	}
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			originalFlags := generateGenesisStateFlags
+			defer func() {
+				generateGenesisStateFlags = originalFlags
+			}()
+			generateGenesisStateFlags.NumValidators = 2
+			generateGenesisStateFlags.GenesisTime = tt.genesisTime
+			generateGenesisStateFlags.GenesisTimeDelay = tt.genesisTimeDelay
+			generateGenesisStateFlags.ForkName = version.String(version.Deneb)
+			if tt.inputTimestamp > 0 {
+				genesis := &core.Genesis{
+					Timestamp:  tt.inputTimestamp,
+					BaseFee:    big.NewInt(1000000000),
+					Difficulty: big.NewInt(0),
+					GasLimit:   15000000,
+					Alloc:      types.GenesisAlloc{},
+					Config:     &params.ChainConfig{ChainID: big.NewInt(32382)},
+				}
+				genesisJSON, err := json.Marshal(genesis)
+				require.NoError(t, err)
+				tmpFile := t.TempDir() + "/genesis.json"
+				require.NoError(t, writeFile(tmpFile, genesisJSON))
+				generateGenesisStateFlags.GethGenesisJsonIn = tmpFile
+			}
+			st, err := generateGenesis(context.Background())
+			require.NoError(t, err)
+			assert.Equal(t, int64(tt.wantTimestamp), st.GenesisTime().Unix())
+		})
+	}
+}
+func writeFile(path string, data []byte) error {
+	return os.WriteFile(path, data, 0o644)
 }


@@ -140,8 +140,10 @@ func main() {
 			slashingprotectioncommands.Commands,
 			dbcommands.Commands,
 			web.Commands,
+			cmd.CompletionCommand("validator"),
 		},
-		Flags: appFlags,
+		Flags:                appFlags,
+		EnableBashCompletion: true,
 		Before: func(ctx *cli.Context) error {
 			// Load flags from config file, if specified.
 			if err := cmd.LoadFlagsFromConfig(ctx, appFlags); err != nil {


@@ -70,6 +70,7 @@ type Flags struct {
 	DisableStakinContractCheck   bool // Disables check for deposit contract when proposing blocks
 	IgnoreUnviableAttestations   bool // Ignore attestations whose target state is not viable (avoids lagging-node DoS).
+	EnableHashtree               bool // Enables usage of the hashtree library for hashing
 	EnableVerboseSigVerification bool // EnableVerboseSigVerification specifies whether to verify individual signature if batch verification fails
 	EnableProposerPreprocessing  bool // EnableProposerPreprocessing enables proposer pre-processing of blocks before proposing.
@@ -81,7 +82,6 @@ type Flags struct {
 	SaveInvalidBlob       bool // SaveInvalidBlob saves invalid blob to temp.
 	EnableDiscoveryReboot bool // EnableDiscoveryReboot allows the node to have its local listener to be rebooted in the event of discovery issues.
-	LowValcountSweep      bool // LowValcountSweep bounds withdrawal sweep by validator count.
 	// KeystoreImportDebounceInterval specifies the time duration the validator waits to reload new keys if they have
 	// changed on disk. This feature is for advanced use cases only.
@@ -238,6 +238,10 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
 		logEnabled(enableFullSSZDataLogging)
 		cfg.EnableFullSSZDataLogging = true
 	}
+	if ctx.IsSet(enableHashtree.Name) {
+		logEnabled(enableHashtree)
+		cfg.EnableHashtree = true
+	}
 	cfg.EnableVerboseSigVerification = true
 	if ctx.IsSet(disableVerboseSigVerification.Name) {
 		logEnabled(disableVerboseSigVerification)
@@ -290,11 +294,6 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
 		logEnabled(ignoreUnviableAttestations)
 		cfg.IgnoreUnviableAttestations = true
 	}
-	if ctx.Bool(lowValcountSweep.Name) {
-		logEnabled(lowValcountSweep)
-		cfg.LowValcountSweep = true
-	}
 	if ctx.IsSet(EnableStateDiff.Name) {
 		logEnabled(EnableStateDiff)
 		cfg.EnableStateDiff = true


@@ -133,6 +133,10 @@
 		Name:  "enable-beacon-rest-api",
 		Usage: "(Experimental): Enables of the beacon REST API when querying a beacon node.",
 	}
+	enableHashtree = &cli.BoolFlag{
+		Name:  "enable-hashtree",
+		Usage: "(Experimental): Enables the hashtree hashing library.",
+	}
 	disableVerboseSigVerification = &cli.BoolFlag{
 		Name:  "disable-verbose-sig-verification",
 		Usage: "Disables identifying invalid signatures if batch verification fails when processing block.",
@@ -216,13 +220,6 @@
 		Name:  "ignore-unviable-attestations",
 		Usage: "Ignores attestations whose target state is not viable with respect to the current head (avoid expensive state replay from lagging attesters).",
 	}
-	// lowValcountSweep bounds withdrawal sweep by validator count.
-	lowValcountSweep = &cli.BoolFlag{
-		Name:   "low-valcount-sweep",
-		Usage:  "Uses validator count bound for withdrawal sweep when validator count is less than MaxValidatorsPerWithdrawalsSweep.",
-		Value:  false,
-		Hidden: true,
-	}
 )
 // devModeFlags holds list of flags that are set when development mode is on.
@@ -285,7 +282,7 @@ var BeaconChainFlags = combinedFlags([]cli.Flag{
 	enableExperimentalAttestationPool,
 	forceHeadFlag,
 	blacklistRoots,
-	lowValcountSweep,
+	enableHashtree,
 }, deprecatedBeaconFlags, deprecatedFlags, upcomingDeprecation)
 func combinedFlags(flags ...[]cli.Flag) []cli.Flag {


@@ -28,6 +28,7 @@ go_library(
         "testutils_develop.go", # keep
         "values.go",
     ],
+    embedsrcs = ["testdata/e2e_config.yaml"],
     importpath = "github.com/OffchainLabs/prysm/v7/config/params",
     visibility = ["//visibility:public"],
     deps = [


@@ -56,10 +56,12 @@ type BeaconChainConfig struct {
 	EffectiveBalanceIncrement uint64 `yaml:"EFFECTIVE_BALANCE_INCREMENT" spec:"true"` // EffectiveBalanceIncrement is used for converting the high balance into the low balance for validators.
 	// Initial value constants.
-	BLSWithdrawalPrefixByte         byte `yaml:"BLS_WITHDRAWAL_PREFIX" spec:"true"` // BLSWithdrawalPrefixByte is used for BLS withdrawal and it's the first byte.
-	ETH1AddressWithdrawalPrefixByte byte `yaml:"ETH1_ADDRESS_WITHDRAWAL_PREFIX" spec:"true"` // ETH1AddressWithdrawalPrefixByte is used for withdrawals and it's the first byte.
-	CompoundingWithdrawalPrefixByte byte `yaml:"COMPOUNDING_WITHDRAWAL_PREFIX" spec:"true"` // CompoundingWithdrawalPrefixByteByte is used for compounding withdrawals and it's the first byte.
-	ZeroHash [32]byte // ZeroHash is used to represent a zeroed out 32 byte array.
+	BLSWithdrawalPrefixByte         byte `yaml:"BLS_WITHDRAWAL_PREFIX" spec:"true"` // BLSWithdrawalPrefixByte is used for BLS withdrawal and it's the first byte.
+	ETH1AddressWithdrawalPrefixByte byte `yaml:"ETH1_ADDRESS_WITHDRAWAL_PREFIX" spec:"true"` // ETH1AddressWithdrawalPrefixByte is used for withdrawals and it's the first byte.
+	CompoundingWithdrawalPrefixByte byte `yaml:"COMPOUNDING_WITHDRAWAL_PREFIX" spec:"true"` // CompoundingWithdrawalPrefixByte is used for compounding withdrawals and it's the first byte.
+	BuilderWithdrawalPrefixByte     byte `yaml:"BUILDER_WITHDRAWAL_PREFIX" spec:"true"` // BuilderWithdrawalPrefixByte is used for builder withdrawals and it's the first byte.
+	BuilderIndexSelfBuild           primitives.BuilderIndex `yaml:"BUILDER_INDEX_SELF_BUILD" spec:"true"` // BuilderIndexSelfBuild indicates proposer self-built payloads.
+	ZeroHash [32]byte // ZeroHash is used to represent a zeroed out 32 byte array.
 	// Time parameters constants.
 	GenesisDelay uint64 `yaml:"GENESIS_DELAY" spec:"true"` // GenesisDelay is the minimum number of seconds to delay starting the Ethereum Beacon Chain genesis. Must be at least 1 second.
@@ -86,7 +88,7 @@ type BeaconChainConfig struct {
 	IntervalsPerSlot       uint64        `yaml:"INTERVALS_PER_SLOT"` // IntervalsPerSlot defines the number of fork choice intervals in a slot defined in the fork choice spec.
 	ProposerReorgCutoffBPS primitives.BP `yaml:"PROPOSER_REORG_CUTOFF_BPS" spec:"true"` // ProposerReorgCutoffBPS defines the proposer reorg deadline in basis points of the slot.
 	AttestationDueBPS      primitives.BP `yaml:"ATTESTATION_DUE_BPS" spec:"true"` // AttestationDueBPS defines the attestation due time in basis points of the slot.
-	AggregrateDueBPS       primitives.BP `yaml:"AGGREGRATE_DUE_BPS" spec:"true"` // AggregrateDueBPS defines the aggregate due time in basis points of the slot.
+	AggregateDueBPS        primitives.BP `yaml:"AGGREGATE_DUE_BPS" spec:"true"` // AggregateDueBPS defines the aggregate due time in basis points of the slot.
 	SyncMessageDueBPS      primitives.BP `yaml:"SYNC_MESSAGE_DUE_BPS" spec:"true"` // SyncMessageDueBPS defines the sync message due time in basis points of the slot.
 	ContributionDueBPS     primitives.BP `yaml:"CONTRIBUTION_DUE_BPS" spec:"true"` // ContributionDueBPS defines the contribution due time in basis points of the slot.
@@ -139,6 +141,7 @@ type BeaconChainConfig struct {
 	DomainApplicationMask      [4]byte `yaml:"DOMAIN_APPLICATION_MASK" spec:"true"` // DomainApplicationMask defines the BLS signature domain for application mask.
 	DomainApplicationBuilder   [4]byte `yaml:"DOMAIN_APPLICATION_BUILDER" spec:"true"` // DomainApplicationBuilder defines the BLS signature domain for application builder.
 	DomainBLSToExecutionChange [4]byte `yaml:"DOMAIN_BLS_TO_EXECUTION_CHANGE" spec:"true"` // DomainBLSToExecutionChange defines the BLS signature domain to change withdrawal addresses to ETH1 prefix
+	DomainBeaconBuilder        [4]byte `yaml:"DOMAIN_BEACON_BUILDER" spec:"true"` // DomainBeaconBuilder defines the BLS signature domain for beacon block builder.
 	// Prysm constants.
 	GenesisValidatorsRoot [32]byte // GenesisValidatorsRoot is the root hash of the genesis validators.
@@ -290,6 +293,10 @@ type BeaconChainConfig struct {
 	ValidatorCustodyRequirement      uint64 `yaml:"VALIDATOR_CUSTODY_REQUIREMENT" spec:"true"` // ValidatorCustodyRequirement is the minimum number of custody groups an honest node with validators attached custodies and serves samples from
 	BalancePerAdditionalCustodyGroup uint64 `yaml:"BALANCE_PER_ADDITIONAL_CUSTODY_GROUP" spec:"true"` // BalancePerAdditionalCustodyGroup is the balance increment corresponding to one additional group to custody.
+	// Values introduced in Gloas upgrade
+	BuilderPaymentThresholdNumerator   uint64 `yaml:"BUILDER_PAYMENT_THRESHOLD_NUMERATOR" spec:"true"` // BuilderPaymentThresholdNumerator is the numerator for builder payment quorum threshold calculation.
+	BuilderPaymentThresholdDenominator uint64 `yaml:"BUILDER_PAYMENT_THRESHOLD_DENOMINATOR" spec:"true"` // BuilderPaymentThresholdDenominator is the denominator for builder payment quorum threshold calculation.
 	// Networking Specific Parameters
 	MaxPayloadSize         uint64 `yaml:"MAX_PAYLOAD_SIZE" spec:"true"` // MAX_PAYLOAD_SIZE is the maximum allowed size of uncompressed payload in gossip messages and rpc chunks.
 	AttestationSubnetCount uint64 `yaml:"ATTESTATION_SUBNET_COUNT" spec:"true"` // AttestationSubnetCount is the number of attestation subnets used in the gossipsub protocol.


@@ -243,7 +243,7 @@ func ConfigToYaml(cfg *BeaconChainConfig) []byte {
 		fmt.Sprintf("MAX_BLOBS_PER_BLOCK: %d", cfg.DeprecatedMaxBlobsPerBlock),
 		fmt.Sprintf("PROPOSER_REORG_CUTOFF_BPS: %d", cfg.ProposerReorgCutoffBPS),
 		fmt.Sprintf("ATTESTATION_DUE_BPS: %d", cfg.AttestationDueBPS),
-		fmt.Sprintf("AGGREGRATE_DUE_BPS: %d", cfg.AggregrateDueBPS),
+		fmt.Sprintf("AGGREGATE_DUE_BPS: %d", cfg.AggregateDueBPS),
 		fmt.Sprintf("SYNC_MESSAGE_DUE_BPS: %d", cfg.SyncMessageDueBPS),
 		fmt.Sprintf("CONTRIBUTION_DUE_BPS: %d", cfg.ContributionDueBPS),
 	}


@@ -24,7 +24,6 @@ import (
 // These are variables that we don't use in Prysm. (i.e. future hardfork, light client... etc)
 // IMPORTANT: Use one field per line and sort these alphabetically to reduce conflicts.
 var placeholderFields = []string{
-	"AGGREGATE_DUE_BPS",
 	"AGGREGATE_DUE_BPS_GLOAS",
 	"ATTESTATION_DEADLINE",
 	"ATTESTATION_DUE_BPS_GLOAS",
@@ -99,7 +98,7 @@ func assertEqualConfigs(t *testing.T, name string, fields []string, expected, ac
 	assert.Equal(t, expected.HysteresisDownwardMultiplier, actual.HysteresisDownwardMultiplier, "%s: HysteresisDownwardMultiplier", name)
 	assert.Equal(t, expected.HysteresisUpwardMultiplier, actual.HysteresisUpwardMultiplier, "%s: HysteresisUpwardMultiplier", name)
 	assert.Equal(t, expected.AttestationDueBPS, actual.AttestationDueBPS, "%s: AttestationDueBPS", name)
-	assert.Equal(t, expected.AggregrateDueBPS, actual.AggregrateDueBPS, "%s: AggregrateDueBPS", name)
+	assert.Equal(t, expected.AggregateDueBPS, actual.AggregateDueBPS, "%s: AggregateDueBPS", name)
 	assert.Equal(t, expected.ContributionDueBPS, actual.ContributionDueBPS, "%s: ContributionDueBPS", name)
 	assert.Equal(t, expected.ProposerReorgCutoffBPS, actual.ProposerReorgCutoffBPS, "%s: ProposerReorgCutoffBPS", name)
 	assert.Equal(t, expected.SyncMessageDueBPS, actual.SyncMessageDueBPS, "%s: SyncMessageDueBPS", name)

Some files were not shown because too many files have changed in this diff.