Compare commits

...

22 Commits

Author SHA1 Message Date
terence
f80f8338fa Move kzg commitments to bid 2026-01-30 19:23:37 -08:00
terence tsao
7197093cf3 gloas: implement modified process withdrawals 2026-01-29 19:03:26 -08:00
terence
bcf060619b Refactor expected withdrawals helpers (#16282)
This refactors `ExpectedWithdrawals` to use shared helpers for pending
partial withdrawals and the validator sweep. The goal is to keep the
logic spec-aligned while making it easier to reuse across future forks.
**Example (Gloas expected withdrawals order):**
```
builder withdrawals
pending partial withdrawals
builder sweep withdrawals
validator sweep withdrawals
```

Note to reviewers: the helpers take a `*[]*enginev1.Withdrawal` so we
can append efficiently without returning/re-appending slices.
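
For illustration, a minimal sketch of the append-through-pointer pattern described above; the helper name and fields here are hypothetical, not the PR's actual signatures:

```go
package example

import (
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
)

// appendWithdrawal appends into a caller-owned slice through a pointer, so the
// helper never returns an intermediate slice that the caller must re-append.
func appendWithdrawal(ws *[]*enginev1.Withdrawal, idx primitives.ValidatorIndex, amount uint64) {
	*ws = append(*ws, &enginev1.Withdrawal{
		ValidatorIndex: idx,
		Amount:         amount,
	})
}
```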
2026-01-26 14:02:05 +00:00
terence
37b27fdd3c Move deposit helpers out of blocks to break blocks <-> gloas cycle (#16277)
- moves deposit-related helpers (deposit signature verification, batch
verification, merkle proof verification, and activation helper) from
`beacon-chain/core/blocks` into `beacon-chain/core/helpers`
- updates call sites (Altair/Electra) to use helpers

Why?
- In gloas, the blocks package needs to call into gloas logic (e.g.
clearing builder pending payments/withdrawals on proposer slashing)
- gloas also introduces deposit-request processing, which needs the deposit
signature verification previously located in blocks; that creates a Bazel/Go
dependency cycle (blocks -> gloas -> blocks)
- the natural layering is for blocks and fork logic to depend on a lower-level
util package for deposit verification, so moving the deposit helpers to
core/helpers breaks the cycle
2026-01-23 21:33:28 +00:00
Bastin
6a9bcbab3a logging: per package verbosity (#16272)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR adds a `--log.vmodule` flag to the beacon-chain and validator
apps, which allows setting a different verbosity for every* package.

*: not every package, but most packages (all packages that define their
logger variable with a `package` field).

Combined with the `--verbosity` flag this allows users to control
exactly what they see.

This affects both the terminal and the log file (`--log-file`), but not
the ephemeral debug log file.

Example usage:
```
./beacon-chain --log.vmodule=beacon-chain/p2p=info,beacon-chain/sync=error,beacon-chain/sync/initial-sync=debug
```

There are improvements to be done later, like accepting just the package
name instead of the full path, etc.
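
As an illustration of the convention the flag keys on (per the PR description; the package value below is just an example), a logger defined with a `package` field might look like:

```go
package p2p

import "github.com/sirupsen/logrus"

// Packages opt in to per-package verbosity by defining their logger with a
// "package" field; --log.vmodule matches against that value.
var log = logrus.WithField("package", "beacon-chain/p2p")
```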
2026-01-23 19:22:20 +00:00
Bastin
1b2524b0fe add NewBeaconStateGloas() (#16275)
**What does this PR do? Why is it needed?**
adds `util.NewBeaconStateGloas()`

Also adds a missing Fulu test.
2026-01-23 18:39:05 +00:00
satushh
67d11607ea Close opened file in defer in data_column.go (#16274)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Closes a file that was previously left open.

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-23 15:45:03 +00:00
satushh
4ff15fa988 Add missing fulu presets to beacon config (#16170)

**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Adds some missing Fulu constants to the beacon config so that the Beacon API
returns the expected values.

**Which issue(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16138

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-23 15:20:22 +00:00
james-prysm
b37b3e26ba constants update for ethspecify phase 0 (#16273)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Maps ethspecify constant items that we have implemented but that were missing
from ethspecify.

**Which issue(s) does this PR fix?**
follow up on https://github.com/OffchainLabs/prysm/pull/16194

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-22 21:01:23 +00:00
Manu NALEPA
4ff9eb067c Allow a flag to be hidden and hide the --disable-get-blobs-v2 flag. (#16265)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Allow a flag to be hidden and hide the `--disable-get-blobs-v2` flag.

The flag is still usable, but it no longer shows up in the help output.
It is used for internal purposes only, so there is no need to expose it
publicly.

**Why do we need to modify the `cli.HelpPrinter` function whereas other
flags like `--aggregate-first-interval` are hidden just by using the
`Hidden: true` property?**

The `Hidden: true` property on the flag definition doesn't work by
itself because `usage.go` uses a custom help template that bypasses the
standard urfave/cli help rendering.
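
A hedged sketch of the kind of filtering a custom help printer has to do itself (the helper name is hypothetical; the interface assertion is just one way to detect hidden flags):

```go
package main

import "github.com/urfave/cli/v2"

// visibleFlags drops hidden flags before rendering; a custom help template
// bypasses urfave/cli's default rendering, so this filtering must be explicit.
func visibleFlags(flags []cli.Flag) []cli.Flag {
	out := make([]cli.Flag, 0, len(flags))
	for _, f := range flags {
		if vf, ok := f.(interface{ IsVisible() bool }); ok && !vf.IsVisible() {
			continue // skip flags marked Hidden: true
		}
		out = append(out, f)
	}
	return out
}
```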

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <bastin.m@proton.me>
2026-01-22 16:35:24 +00:00
terence
d440aafacf gloas: add modified proposer slashing processing (#16212)
This PR implements
[process_proposer_slashing](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#modified-process_proposer_slashing)
alongside spec tests
2026-01-22 15:38:55 +00:00
james-prysm
e336f7fe59 adding in mid epoch timeout for e2e head synced evaluator (#16268)
**What type of PR is this?**

tests

**What does this PR do? Why is it needed?**

Reduces e2e flakes by adding a mid-epoch check for head slot sync.

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-22 14:07:59 +00:00
terence
fde63a217a gloas: add modified slot processing (#15730)
This PR implements
[process_slot](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#modified-process_slot)
and spec tests
2026-01-20 22:44:09 +00:00
Luca | Serenita
055c6eb784 fix: typo in AggregateDueBPS (#16194)

**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

This PR fixes a typo that resulted in a wrong variable name being
returned on the Beacon API `/eth/v1/config/spec` endpoint:

```
curl http://127.0.0.1:49183/eth/v1/config/spec
{"data":{"AGGREGRATE_DUE_BPS":"6667", [...]
```

I discovered the discrepancy while testing the change to these "BPS"
values in the Vero VC, which checks spec values against the ones it ships
with.

**Which issue(s) does this PR fix?**

N/A

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-01-20 15:16:13 +00:00
terence
d33389fb54 gloas: add new pending payment processing (#15655)
This PR implements
[process_builder_pending_payments](https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md#new-process_builder_pending_payments)
and spec tests.
2026-01-16 21:19:06 +00:00
Maxim Evtush
ce72deb3c0 Fix authentication bypass for direct /v2/validator/* endpoints (#16226)
This PR fixes a security vulnerability where authenticated endpoints
could be accessed without authorization by using direct
`/v2/validator/*` paths instead of `/api/v2/validator/*`.

The `AuthTokenHandler` middleware only checked for authentication on
requests containing `/api/v2/validator/` or `/eth/v1` prefixes, but the
same handlers are also registered for direct `/v2/validator/*` routes.
This allowed attackers to bypass authentication by simply removing the
`/api` prefix from the URL.
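
A hedged sketch of the prefix check at issue (function and paths written out for illustration, not Prysm's actual middleware code):

```go
package main

import "strings"

// requiresAuth is a hypothetical sketch: checking only the /api-prefixed and
// /eth/v1 routes misses the same handlers registered under /v2/validator/*,
// so the direct prefix must be covered as well.
func requiresAuth(path string) bool {
	return strings.HasPrefix(path, "/api/v2/validator/") ||
		strings.HasPrefix(path, "/eth/v1") ||
		strings.HasPrefix(path, "/v2/validator/") // the fix: cover direct routes too
}
```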

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-01-16 19:43:27 +00:00
Manu NALEPA
ec48e6340c Stop batching of KZG verification for incoming via gossip data column sidecars (#16240)
**What type of PR is this?**
Optimisation

**What does this PR do? Why is it needed?**
This is an alternate take of:
- https://github.com/OffchainLabs/prysm/pull/16220


**Test configuration:**
- Using the `--disable-get-blobs-v2` and `--supernode` flags
- On [VPS 3000 G11](https://www.netcup.com/en/server/vps)

**4H average**
| Impl. | CPU usage | Sidecar gossip verif. dur. | DA waiting time | Chain service proc. time | Total |
|--------|--------|--------|--------|--------|--------|
| `develop` | 132% | 185 ms | 82.2 ms | **457 ms** | 539 ms |
| https://github.com/OffchainLabs/prysm/pull/16220 | 144% | 76.5 ms | 21.7 ms | 473 ms | **495 ms** |
| This PR | **117%** | **26 ms** | **16.3 ms** | 479 ms | **495 ms** |

**Before this PR:**
![Metrics dashboard before](https://github.com/user-attachments/assets/1fb45282-a9c8-4543-adb3-39b04b79eab2)

**With this PR:**
![Metrics dashboard after](https://github.com/user-attachments/assets/993feb49-ef38-4052-9cb4-aebe93456eba)

Metrics:
- `beacon_data_column_sidecar_gossip_verification_milliseconds`
- `da_waited_time_milliseconds`

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-16 18:53:45 +00:00
Preston Van Loon
a135a336c3 Fix issue : Prevent makeslice panic from invalid Count values (#16227)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

Add defensive checks to prevent panic from large Count values that could
result from unsigned integer underflow:

1. In batch.blockRequest() and batch.blobRequest(): Return Count=0 when
end <= begin, preventing the underflow at the source.

2. In SendBeaconBlocksByRangeRequest(): Cap slice capacity to
MaxRequestBlock before allocation to prevent panic even if upstream code
produces invalid values.
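
A hedged sketch of the two defensive checks described above (names and the cap parameter are illustrative, not the PR's exact code):

```go
package main

// requestCount sketches check (1): derive the count without the unsigned
// underflow that a naive end-begin produces when end <= begin.
func requestCount(begin, end uint64) uint64 {
	if end <= begin {
		return 0
	}
	return end - begin
}

// capAlloc sketches check (2): cap the slice capacity to a protocol maximum
// before allocation so an invalid count cannot trigger a makeslice panic.
func capAlloc(count, maxRequestBlock uint64) uint64 {
	if count > maxRequestBlock {
		return maxRequestBlock
	}
	return count
}
```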


**Which issue(s) does this PR fix?**

Fixes #16223

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-16 14:25:04 +00:00
Manu NALEPA
5f189f002e Remove unused delay parameter from fetchOriginDataColumnSidecars function. (#16262)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Remove unused delay parameter from `fetchOriginDataColumnSidecars`
function.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-16 14:04:42 +00:00
willian.eth
bca6166e82 Add shell completion for beacon-chain and validator CLI (#16245)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Introduces a `completion` subcommand to `beacon-chain` and `validator`
that outputs shell completion scripts. Supports Bash, Zsh, and Fish
shells.

```bash
# Load completions in current session
source <(beacon-chain completion bash)

# Persist for future sessions
beacon-chain completion zsh > "${fpath[1]}/_beacon-chain"
validator completion fish > ~/.config/fish/completions/validator.fish
```

Once loaded, users can press TAB to complete subcommands, nested
commands, and flags. Flag completion supports prefix matching (e.g.,
typing `--exec<TAB>` suggests `--execution-endpoint`,
`--execution-headers`).

**Which issue(s) does this PR fix?**

Fixes #16244

**Other notes for review**

The implementation adds three files to the existing `cmd` package:
- `completion.go` - Defines `CompletionCommand()` returning a
`*cli.Command` with `bash`, `zsh`, `fish` subcommands
- `completion_scripts.go` - Contains the shell script templates
- `completion_test.go` - Unit tests for command structure and script
content

Changes to `beacon-chain` and `validator`:
- Import `cmd.CompletionCommand("binary-name")` in the Commands slice
- Set `EnableBashCompletion: true` on the cli.App to activate
urfave/cli's `--generate-bash-completion` hidden flag

The shell scripts call the binary with `--generate-bash-completion`
appended to get context-aware suggestions. This means completions
automatically reflect the current binary's flags and commands.
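
A hedged sketch of the command shape described above (the script contents are placeholders; the real `CompletionCommand` in `cmd/completion.go` may differ):

```go
package cmd

import (
	"fmt"

	"github.com/urfave/cli/v2"
)

// completionScripts stands in for the templates in completion_scripts.go.
var completionScripts = map[string]string{
	"bash": "# bash completion script",
	"zsh":  "# zsh completion script",
	"fish": "# fish completion script",
}

// CompletionCommand returns a `completion` command with bash, zsh, and fish
// subcommands, each printing the corresponding shell script to stdout.
func CompletionCommand(binary string) *cli.Command {
	sub := func(shell string) *cli.Command {
		return &cli.Command{
			Name:  shell,
			Usage: fmt.Sprintf("print the %s completion script for %s", shell, binary),
			Action: func(_ *cli.Context) error {
				fmt.Println(completionScripts[shell])
				return nil
			},
		}
	}
	return &cli.Command{
		Name:        "completion",
		Usage:       "generate shell completion scripts",
		Subcommands: []*cli.Command{sub("bash"), sub("zsh"), sub("fish")},
	}
}
```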

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

Signed-off-by: Willian Paixao <willian@ufpa.br>
2026-01-15 20:07:11 +00:00
Bastin
b6818853b4 state-diff small changes (#16260)
**What does this PR do?**
Small touch-ups on the state-diff code.
2026-01-15 18:47:29 +00:00
satushh
5a56bfcf98 Print commitments instead of indices (#16258)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Prints commitments instead of indices in the `missingCommitError` function.

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-15 15:37:32 +00:00
175 changed files with 4021 additions and 1449 deletions

View File

@@ -3,7 +3,6 @@ package altair
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -24,7 +23,7 @@ func ProcessPreGenesisDeposits(
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
beaconState, err = blocks.ActivateValidatorWithEffectiveBalance(beaconState, deposits)
beaconState, err = helpers.ActivateValidatorWithEffectiveBalance(beaconState, deposits)
if err != nil {
return nil, err
}
@@ -37,7 +36,7 @@ func ProcessDeposits(
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
allSignaturesVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
allSignaturesVerified, err := helpers.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, err
}
@@ -82,7 +81,7 @@ func ProcessDeposits(
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, allSignaturesVerified bool) (state.BeaconState, error) {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if err := helpers.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
@@ -122,7 +121,7 @@ func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, allSi
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if !allSignaturesVerified {
valid, err := blocks.IsValidDepositSignature(data)
valid, err := helpers.IsValidDepositSignature(data)
if err != nil {
return nil, err
}

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"attestation.go",
"attester_slashing.go",
"deposit.go",
"error.go",
"eth1_data.go",
"exit.go",
@@ -21,6 +20,7 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
@@ -33,8 +33,6 @@ go_library(
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//container/trie:go_default_library",
"//contracts/deposit:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -61,7 +59,6 @@ go_test(
"attester_slashing_test.go",
"block_operations_fuzz_test.go",
"block_regression_test.go",
"deposit_test.go",
"eth1_data_test.go",
"exit_test.go",
"exports_test.go",
@@ -90,7 +87,6 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//crypto/hash:go_default_library",

View File

@@ -3,6 +3,7 @@ package blocks
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -318,7 +319,7 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
err = VerifyDeposit(s, deposit)
err = helpers.VerifyDeposit(s, deposit)
_ = err
fuzz.FreeMemory(i)
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
@@ -11,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
@@ -126,7 +128,16 @@ func processProposerSlashing(
if exitInfo == nil {
return nil, errors.New("exit info is required to process proposer slashing")
}
var err error
// [New in Gloas:EIP7732]: remove the BuilderPendingPayment corresponding to the slashed proposer within 2 epoch window
if beaconState.Version() >= version.Gloas {
err = gloas.RemoveBuilderPendingPayment(beaconState, slashing.Header_1.Header)
if err != nil {
return nil, err
}
}
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)

View File

@@ -20,7 +20,6 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",

View File

@@ -3,7 +3,6 @@ package electra
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
@@ -37,7 +36,7 @@ func ProcessDeposits(
defer span.End()
// Attempt to verify all deposit signatures at once, if this fails then fall back to processing
// individual deposits with signature verification enabled.
allSignaturesVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
allSignaturesVerified, err := helpers.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signatures in batch")
}
@@ -82,7 +81,7 @@ func ProcessDeposits(
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, allSignaturesVerified bool) (state.BeaconState, error) {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if err := helpers.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
@@ -377,7 +376,7 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return nil
}
allSignaturesVerified, err := blocks.BatchVerifyPendingDepositsSignatures(ctx, pendingDeposits)
allSignaturesVerified, err := helpers.BatchVerifyPendingDepositsSignatures(ctx, pendingDeposits)
if err != nil {
return errors.Wrap(err, "batch signature verification failed")
}
@@ -386,7 +385,7 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
validSig := allSignaturesVerified
if !allSignaturesVerified {
validSig, err = blocks.IsValidDepositSignature(&ethpb.Deposit_Data{
validSig, err = helpers.IsValidDepositSignature(&ethpb.Deposit_Data{
PublicKey: bytesutil.SafeCopyBytes(pd.PublicKey),
WithdrawalCredentials: bytesutil.SafeCopyBytes(pd.WithdrawalCredentials),
Amount: pd.Amount,
@@ -441,7 +440,7 @@ func ApplyPendingDeposit(ctx context.Context, st state.BeaconState, deposit *eth
defer span.End()
index, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(deposit.PublicKey))
if !ok {
verified, err := blocks.IsValidDepositSignature(&ethpb.Deposit_Data{
verified, err := helpers.IsValidDepositSignature(&ethpb.Deposit_Data{
PublicKey: bytesutil.SafeCopyBytes(deposit.PublicKey),
WithdrawalCredentials: bytesutil.SafeCopyBytes(deposit.WithdrawalCredentials),
Amount: deposit.Amount,

View File

@@ -2,16 +2,24 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["bid.go"],
srcs = [
"bid.go",
"builder.go",
"pending_payment.go",
"proposer_slashing.go",
"withdrawal.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -22,10 +30,16 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["bid_test.go"],
srcs = [
"bid_test.go",
"pending_payment_test.go",
"proposer_slashing_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -203,7 +203,7 @@ func TestProcessExecutionPayloadBid_SelfBuildSuccess(t *testing.T) {
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0xEE}, 48)},
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
@@ -244,7 +244,7 @@ func TestProcessExecutionPayloadBid_SelfBuildNonZeroAmountFails(t *testing.T) {
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0xCC}, 48)},
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
@@ -289,7 +289,7 @@ func TestProcessExecutionPayloadBid_PendingPaymentAndCacheBid(t *testing.T) {
Slot: slot,
Value: 500_000,
ExecutionPayment: 1,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0xEE}, 48)},
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
@@ -350,7 +350,7 @@ func TestProcessExecutionPayloadBid_BuilderNotActive(t *testing.T) {
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x05}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0x05}, 48)},
FeeRecipient: bytes.Repeat([]byte{0x06}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
@@ -403,7 +403,7 @@ func TestProcessExecutionPayloadBid_CannotCoverBid(t *testing.T) {
Slot: slot,
Value: 25,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0xEE}, 48)},
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
@@ -445,7 +445,7 @@ func TestProcessExecutionPayloadBid_InvalidSignature(t *testing.T) {
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0xEE}, 48)},
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
// Use an invalid signature.
@@ -487,7 +487,7 @@ func TestProcessExecutionPayloadBid_SlotMismatch(t *testing.T) {
Slot: slot + 1, // mismatch
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0xCC}, 48)},
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
@@ -529,7 +529,7 @@ func TestProcessExecutionPayloadBid_ParentHashMismatch(t *testing.T) {
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0x44}, 48)},
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
@@ -572,7 +572,7 @@ func TestProcessExecutionPayloadBid_ParentRootMismatch(t *testing.T) {
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0x44}, 48)},
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
@@ -614,7 +614,7 @@ func TestProcessExecutionPayloadBid_PrevRandaoMismatch(t *testing.T) {
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0x44}, 48)},
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())

View File

@@ -0,0 +1,17 @@
package gloas
import (
"github.com/OffchainLabs/prysm/v7/config/params"
)
// IsBuilderWithdrawalCredential returns true when the builder withdrawal prefix is set.
// Spec v1.6.1 (pseudocode):
// def is_builder_withdrawal_credential(withdrawal_credentials: Bytes32) -> bool:
//
// return withdrawal_credentials[:1] == BUILDER_WITHDRAWAL_PREFIX
func IsBuilderWithdrawalCredential(withdrawalCredentials []byte) bool {
if len(withdrawalCredentials) == 0 {
return false
}
return withdrawalCredentials[0] == params.BeaconConfig().BuilderWithdrawalPrefixByte
}

View File

@@ -0,0 +1,76 @@
package gloas
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
// ProcessBuilderPendingPayments processes the builder pending payments from the previous epoch.
// Spec v1.7.0-alpha.0 (pseudocode):
// def process_builder_pending_payments(state: BeaconState) -> None:
//
// quorum = get_builder_payment_quorum_threshold(state)
// for payment in state.builder_pending_payments[:SLOTS_PER_EPOCH]:
// if payment.weight >= quorum:
// state.builder_pending_withdrawals.append(payment.withdrawal)
//
// old_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:]
// new_payments = [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
// state.builder_pending_payments = old_payments + new_payments
func ProcessBuilderPendingPayments(state state.BeaconState) error {
quorum, err := builderQuorumThreshold(state)
if err != nil {
return errors.Wrap(err, "could not compute builder payment quorum threshold")
}
payments, err := state.BuilderPendingPayments()
if err != nil {
return errors.Wrap(err, "could not get builder pending payments")
}
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
var withdrawals []*ethpb.BuilderPendingWithdrawal
for _, payment := range payments[:slotsPerEpoch] {
if quorum > payment.Weight {
continue
}
withdrawals = append(withdrawals, payment.Withdrawal)
}
if err := state.AppendBuilderPendingWithdrawals(withdrawals); err != nil {
return errors.Wrap(err, "could not append builder pending withdrawals")
}
if err := state.RotateBuilderPendingPayments(); err != nil {
return errors.Wrap(err, "could not rotate builder pending payments")
}
return nil
}
// builderQuorumThreshold calculates the quorum threshold for builder payments.
// Spec v1.7.0-alpha.0 (pseudocode):
// def get_builder_payment_quorum_threshold(state: BeaconState) -> uint64:
//
// per_slot_balance = get_total_active_balance(state) // SLOTS_PER_EPOCH
// quorum = per_slot_balance * BUILDER_PAYMENT_THRESHOLD_NUMERATOR
// return uint64(quorum // BUILDER_PAYMENT_THRESHOLD_DENOMINATOR)
func builderQuorumThreshold(state state.ReadOnlyBeaconState) (primitives.Gwei, error) {
activeBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return 0, errors.Wrap(err, "could not get total active balance")
}
cfg := params.BeaconConfig()
slotsPerEpoch := uint64(cfg.SlotsPerEpoch)
numerator := cfg.BuilderPaymentThresholdNumerator
denominator := cfg.BuilderPaymentThresholdDenominator
activeBalancePerSlot := activeBalance / slotsPerEpoch
quorum := (activeBalancePerSlot * numerator) / denominator
return primitives.Gwei(quorum), nil
}

View File

@@ -0,0 +1,119 @@
package gloas
import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestBuilderQuorumThreshold(t *testing.T) {
helpers.ClearCache()
cfg := params.BeaconConfig()
validators := []*ethpb.Validator{
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
}
st, err := state_native.InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{Validators: validators})
require.NoError(t, err)
got, err := builderQuorumThreshold(st)
require.NoError(t, err)
total := uint64(len(validators)) * cfg.MaxEffectiveBalance
perSlot := total / uint64(cfg.SlotsPerEpoch)
want := (perSlot * cfg.BuilderPaymentThresholdNumerator) / cfg.BuilderPaymentThresholdDenominator
require.Equal(t, primitives.Gwei(want), got)
}
func TestProcessBuilderPendingPayments(t *testing.T) {
helpers.ClearCache()
cfg := params.BeaconConfig()
buildPayments := func(weights ...primitives.Gwei) []*ethpb.BuilderPendingPayment {
p := make([]*ethpb.BuilderPendingPayment, 2*int(cfg.SlotsPerEpoch))
for i := range p {
p[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{FeeRecipient: make([]byte, 20)},
}
}
for i, w := range weights {
p[i].Weight = w
p[i].Withdrawal.Amount = 1
}
return p
}
validators := []*ethpb.Validator{
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
{EffectiveBalance: cfg.MaxEffectiveBalance, ActivationEpoch: 0, ExitEpoch: 1},
}
pbSt, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
total := uint64(len(validators)) * cfg.MaxEffectiveBalance
perSlot := total / uint64(cfg.SlotsPerEpoch)
quorum := (perSlot * cfg.BuilderPaymentThresholdNumerator) / cfg.BuilderPaymentThresholdDenominator
slotsPerEpoch := int(cfg.SlotsPerEpoch)
t.Run("append qualifying withdrawals", func(t *testing.T) {
payments := buildPayments(primitives.Gwei(quorum+1), primitives.Gwei(quorum+2))
st := &testProcessState{BeaconState: pbSt, payments: payments}
require.NoError(t, ProcessBuilderPendingPayments(st))
require.Equal(t, 2, len(st.withdrawals))
require.Equal(t, payments[0].Withdrawal, st.withdrawals[0])
require.Equal(t, payments[1].Withdrawal, st.withdrawals[1])
require.Equal(t, 2*slotsPerEpoch, len(st.payments))
for i := slotsPerEpoch; i < 2*slotsPerEpoch; i++ {
require.Equal(t, primitives.Gwei(0), st.payments[i].Weight)
require.Equal(t, primitives.Gwei(0), st.payments[i].Withdrawal.Amount)
require.Equal(t, 20, len(st.payments[i].Withdrawal.FeeRecipient))
}
})
t.Run("no withdrawals when below quorum", func(t *testing.T) {
payments := buildPayments(primitives.Gwei(quorum - 1))
st := &testProcessState{BeaconState: pbSt, payments: payments}
require.NoError(t, ProcessBuilderPendingPayments(st))
require.Equal(t, 0, len(st.withdrawals))
})
}
type testProcessState struct {
state.BeaconState
payments []*ethpb.BuilderPendingPayment
withdrawals []*ethpb.BuilderPendingWithdrawal
}
func (t *testProcessState) BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error) {
return t.payments, nil
}
func (t *testProcessState) AppendBuilderPendingWithdrawals(withdrawals []*ethpb.BuilderPendingWithdrawal) error {
t.withdrawals = append(t.withdrawals, withdrawals...)
return nil
}
func (t *testProcessState) RotateBuilderPendingPayments() error {
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
rotated := slices.Clone(t.payments[slotsPerEpoch:])
for range slotsPerEpoch {
rotated = append(rotated, &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
})
}
t.payments = rotated
return nil
}

View File

@@ -0,0 +1,43 @@
package gloas
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// RemoveBuilderPendingPayment removes the pending builder payment for the proposal slot.
// Spec v1.7.0 (pseudocode):
//
// slot = header_1.slot
// proposal_epoch = compute_epoch_at_slot(slot)
// if proposal_epoch == get_current_epoch(state):
// payment_index = SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// elif proposal_epoch == get_previous_epoch(state):
// payment_index = slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment()
func RemoveBuilderPendingPayment(st state.BeaconState, header *eth.BeaconBlockHeader) error {
proposalEpoch := slots.ToEpoch(header.Slot)
currentEpoch := time.CurrentEpoch(st)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
var paymentIndex primitives.Slot
if proposalEpoch == currentEpoch {
paymentIndex = slotsPerEpoch + header.Slot%slotsPerEpoch
} else if proposalEpoch+1 == currentEpoch {
paymentIndex = header.Slot % slotsPerEpoch
} else {
return nil
}
if err := st.ClearBuilderPendingPayment(paymentIndex); err != nil {
return errors.Wrap(err, "could not clear builder pending payment")
}
return nil
}

View File

@@ -0,0 +1,112 @@
package gloas
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestRemoveBuilderPendingPayment_CurrentEpoch(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
stateSlot := slotsPerEpoch*2 + 1
headerSlot := slotsPerEpoch * 2
st := newGloasStateWithPayments(t, stateSlot)
paymentIndex := int(slotsPerEpoch + headerSlot%slotsPerEpoch)
setPendingPayment(t, st, paymentIndex, 123)
err := RemoveBuilderPendingPayment(st, &eth.BeaconBlockHeader{Slot: headerSlot})
require.NoError(t, err)
got := getPendingPayment(t, st, paymentIndex)
require.NotNil(t, got.Withdrawal)
require.DeepEqual(t, make([]byte, 20), got.Withdrawal.FeeRecipient)
require.Equal(t, uint64(0), uint64(got.Withdrawal.Amount))
}
func TestRemoveBuilderPendingPayment_PreviousEpoch(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
stateSlot := slotsPerEpoch*2 + 1
headerSlot := slotsPerEpoch + 7
st := newGloasStateWithPayments(t, stateSlot)
paymentIndex := int(headerSlot % slotsPerEpoch)
setPendingPayment(t, st, paymentIndex, 456)
err := RemoveBuilderPendingPayment(st, &eth.BeaconBlockHeader{Slot: headerSlot})
require.NoError(t, err)
got := getPendingPayment(t, st, paymentIndex)
require.NotNil(t, got.Withdrawal)
require.DeepEqual(t, make([]byte, 20), got.Withdrawal.FeeRecipient)
require.Equal(t, uint64(0), uint64(got.Withdrawal.Amount))
}
func TestRemoveBuilderPendingPayment_OlderThanTwoEpoch(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
stateSlot := slotsPerEpoch*4 + 1 // current epoch far ahead
headerSlot := slotsPerEpoch * 2 // two epochs behind
st := newGloasStateWithPayments(t, stateSlot)
paymentIndex := int(headerSlot % slotsPerEpoch)
original := getPendingPayment(t, st, paymentIndex)
err := RemoveBuilderPendingPayment(st, &eth.BeaconBlockHeader{Slot: headerSlot})
require.NoError(t, err)
after := getPendingPayment(t, st, paymentIndex)
require.DeepEqual(t, original.Withdrawal.FeeRecipient, after.Withdrawal.FeeRecipient)
require.Equal(t, original.Withdrawal.Amount, after.Withdrawal.Amount)
}
func newGloasStateWithPayments(t *testing.T, slot primitives.Slot) state.BeaconState {
t.Helper()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
paymentCount := int(slotsPerEpoch * 2)
payments := make([]*eth.BuilderPendingPayment, paymentCount)
for i := range payments {
payments[i] = &eth.BuilderPendingPayment{
Withdrawal: &eth.BuilderPendingWithdrawal{
FeeRecipient: bytes.Repeat([]byte{0x01}, 20),
Amount: 1,
},
}
}
st, err := state_native.InitializeFromProtoUnsafeGloas(&eth.BeaconStateGloas{
Slot: slot,
BuilderPendingPayments: payments,
})
require.NoError(t, err)
return st
}
func setPendingPayment(t *testing.T, st state.BeaconState, index int, amount uint64) {
t.Helper()
payment := &eth.BuilderPendingPayment{
Withdrawal: &eth.BuilderPendingWithdrawal{
FeeRecipient: bytes.Repeat([]byte{0x02}, 20),
Amount: primitives.Gwei(amount),
},
}
require.NoError(t, st.SetBuilderPendingPayment(primitives.Slot(index), payment))
}
func getPendingPayment(t *testing.T, st state.BeaconState, index int) *eth.BuilderPendingPayment {
t.Helper()
stateProto := st.ToProtoUnsafe().(*eth.BeaconStateGloas)
return stateProto.BuilderPendingPayments[index]
}

View File

@@ -0,0 +1,117 @@
package gloas
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/pkg/errors"
)
// ProcessWithdrawals applies withdrawals to the state for Gloas.
//
// Spec v1.7.0-alpha.1 (pseudocode):
//
// def process_withdrawals(
//
// state: BeaconState,
// # [Modified in Gloas:EIP7732]
// # Removed `payload`
//
// ) -> None:
//
// # [New in Gloas:EIP7732]
// # Return early if the parent block is empty
// if not is_parent_block_full(state):
// return
//
// # Get expected withdrawals
// expected = get_expected_withdrawals(state)
//
// # Apply expected withdrawals
// apply_withdrawals(state, expected.withdrawals)
//
// # Update withdrawals fields in the state
// update_next_withdrawal_index(state, expected.withdrawals)
// # [New in Gloas:EIP7732]
// update_payload_expected_withdrawals(state, expected.withdrawals)
// # [New in Gloas:EIP7732]
// update_builder_pending_withdrawals(state, expected.processed_builder_withdrawals_count)
// update_pending_partial_withdrawals(state, expected.processed_partial_withdrawals_count)
// # [New in Gloas:EIP7732]
// update_next_withdrawal_builder_index(state, expected.processed_builders_sweep_count)
// update_next_withdrawal_validator_index(state, expected.withdrawals)
func ProcessWithdrawals(st state.BeaconState) error {
full, err := st.IsParentBlockFull()
if err != nil {
return errors.Wrap(err, "could not get parent block full status")
}
if !full {
return nil
}
expectedWithdrawals, processedBuilderWithdrawalsCount, processedPartialWithdrawalsCount, nextWithdrawalBuilderIndex, err := st.ExpectedWithdrawalsGloas()
if err != nil {
return errors.Wrap(err, "could not get expected withdrawals")
}
for _, withdrawal := range expectedWithdrawals {
if withdrawal.ValidatorIndex.IsBuilderIndex() {
builderIndex := withdrawal.ValidatorIndex.ToBuilderIndex()
err := st.DecreaseBuilderBalance(builderIndex, withdrawal.Amount)
if err != nil {
return errors.Wrap(err, "could not decrease builder balance")
}
} else {
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if err != nil {
return errors.Wrap(err, "could not decrease balance")
}
}
}
err = st.SetPayloadExpectedWithdrawals(expectedWithdrawals)
if err != nil {
return errors.Wrap(err, "could not set payload expected withdrawals")
}
err = st.DequeueBuilderPendingWithdrawals(processedBuilderWithdrawalsCount)
if err != nil {
return fmt.Errorf("unable to dequeue builder pending withdrawals from state: %w", err)
}
err = st.SetNextWithdrawalBuilderIndex(nextWithdrawalBuilderIndex)
if err != nil {
return errors.Wrap(err, "could not set next withdrawal builder index")
}
if err := st.DequeuePendingPartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
return fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex primitives.ValidatorIndex
if uint64(len(expectedWithdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == primitives.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return errors.Wrap(err, "could not set next withdrawal validator index")
}
return nil
}

View File

@@ -6,6 +6,7 @@ go_library(
"attestation.go",
"beacon_committee.go",
"block.go",
"deposit.go",
"genesis.go",
"legacy.go",
"log.go",
@@ -23,6 +24,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -31,6 +33,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//container/trie:go_default_library",
"//contracts/deposit:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -54,6 +57,7 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"deposit_test.go",
"legacy_test.go",
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
@@ -72,6 +76,7 @@ go_test(
tags = ["CI_race_detection"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -80,6 +85,8 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -1,4 +1,4 @@
package blocks
package helpers
import (
"context"

View File

@@ -1,9 +1,9 @@
package blocks_test
package helpers_test
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -45,7 +45,7 @@ func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
verified, err := helpers.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, true, verified)
}
@@ -68,7 +68,7 @@ func TestBatchVerifyDepositsSignatures_InvalidSignature(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
verified, err := helpers.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}
@@ -99,7 +99,7 @@ func TestVerifyDeposit_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
err = blocks.VerifyDeposit(beaconState, deposit)
err = helpers.VerifyDeposit(beaconState, deposit)
require.ErrorContains(t, want, err)
}
@@ -123,7 +123,7 @@ func TestIsValidDepositSignature_Ok(t *testing.T) {
require.NoError(t, err)
sig := sk.Sign(sr[:])
depositData.Signature = sig.Marshal()
valid, err := blocks.IsValidDepositSignature(depositData)
valid, err := helpers.IsValidDepositSignature(depositData)
require.NoError(t, err)
require.Equal(t, true, valid)
}
@@ -163,7 +163,7 @@ func TestBatchVerifyPendingDepositsSignatures_Ok(t *testing.T) {
sig2 := sk2.Sign(sr2[:])
pendingDeposit2.Signature = sig2.Marshal()
verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
verified, err := helpers.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
require.NoError(t, err)
require.Equal(t, true, verified)
}
@@ -174,7 +174,7 @@ func TestBatchVerifyPendingDepositsSignatures_InvalidSignature(t *testing.T) {
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
}
verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit})
verified, err := helpers.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}

View File

@@ -71,6 +71,7 @@ go_test(
"state_test.go",
"trailing_slot_state_cache_test.go",
"transition_fuzz_test.go",
"transition_gloas_test.go",
"transition_no_verify_sig_test.go",
"transition_test.go",
],
@@ -106,6 +107,7 @@ go_test(
"//time/slots:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -142,6 +142,18 @@ func ProcessSlot(ctx context.Context, state state.BeaconState) (state.BeaconStat
); err != nil {
return nil, err
}
// Spec v1.6.1 (pseudocode):
// # [New in Gloas:EIP7732]
// # Unset the next payload availability
// state.execution_payload_availability[(state.slot + 1) % SLOTS_PER_HISTORICAL_ROOT] = 0b0
if state.Version() >= version.Gloas {
index := uint64((state.Slot() + 1) % params.BeaconConfig().SlotsPerHistoricalRoot)
if err := state.UpdateExecutionPayloadAvailabilityAtIndex(index, 0x0); err != nil {
return nil, err
}
}
return state, nil
}

View File

@@ -0,0 +1,141 @@
package transition
import (
"bytes"
"context"
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/stretchr/testify/require"
)
func TestProcessSlot_GloasClearsNextPayloadAvailability(t *testing.T) {
slot := primitives.Slot(10)
cfg := params.BeaconConfig()
nextIdx := uint64((slot + 1) % cfg.SlotsPerHistoricalRoot)
byteIdx := nextIdx / 8
bitMask := byte(1 << (nextIdx % 8))
availability := bytes.Repeat([]byte{0xFF}, int(cfg.SlotsPerHistoricalRoot/8))
st := newGloasState(t, slot, availability)
_, err := ProcessSlot(context.Background(), st)
require.NoError(t, err)
post := st.ToProto().(*ethpb.BeaconStateGloas)
require.Equal(t, byte(0xFF)&^bitMask, post.ExecutionPayloadAvailability[byteIdx])
}
func TestProcessSlot_GloasClearsNextPayloadAvailability_Wrap(t *testing.T) {
cfg := params.BeaconConfig()
slot := primitives.Slot(cfg.SlotsPerHistoricalRoot - 1)
availability := bytes.Repeat([]byte{0xFF}, int(cfg.SlotsPerHistoricalRoot/8))
st := newGloasState(t, slot, availability)
_, err := ProcessSlot(context.Background(), st)
require.NoError(t, err)
post := st.ToProto().(*ethpb.BeaconStateGloas)
require.Equal(t, byte(0xFE), post.ExecutionPayloadAvailability[0])
}
func TestProcessSlot_GloasAvailabilityUpdateError(t *testing.T) {
slot := primitives.Slot(7)
availability := make([]byte, 1)
st := newGloasState(t, slot, availability)
_, err := ProcessSlot(context.Background(), st)
cfg := params.BeaconConfig()
idx := uint64((slot + 1) % cfg.SlotsPerHistoricalRoot)
byteIdx := idx / 8
require.EqualError(t, err, fmt.Sprintf(
"bit index %d (byte index %d) out of range for execution payload availability length %d",
idx, byteIdx, len(availability),
))
}
func newGloasState(t *testing.T, slot primitives.Slot, availability []byte) state.BeaconState {
t.Helper()
cfg := params.BeaconConfig()
protoState := &ethpb.BeaconStateGloas{
Slot: slot,
LatestBlockHeader: testBeaconBlockHeader(),
BlockRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
StateRoots: make([][]byte, cfg.SlotsPerHistoricalRoot),
RandaoMixes: make([][]byte, fieldparams.RandaoMixesLength),
ExecutionPayloadAvailability: availability,
BuilderPendingPayments: make([]*ethpb.BuilderPendingPayment, int(cfg.SlotsPerEpoch*2)),
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
PreviousEpochParticipation: []byte{},
CurrentEpochParticipation: []byte{},
JustificationBits: []byte{0},
PreviousJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
CurrentSyncCommittee: &ethpb.SyncCommittee{},
NextSyncCommittee: &ethpb.SyncCommittee{},
}
for i := range protoState.BlockRoots {
protoState.BlockRoots[i] = make([]byte, 32)
}
for i := range protoState.StateRoots {
protoState.StateRoots[i] = make([]byte, 32)
}
for i := range protoState.RandaoMixes {
protoState.RandaoMixes[i] = make([]byte, 32)
}
for i := range protoState.BuilderPendingPayments {
protoState.BuilderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
}
pubkeys := make([][]byte, cfg.SyncCommitteeSize)
for i := range pubkeys {
pubkeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
aggPubkey := make([]byte, fieldparams.BLSPubkeyLength)
protoState.CurrentSyncCommittee = &ethpb.SyncCommittee{
Pubkeys: pubkeys,
AggregatePubkey: aggPubkey,
}
protoState.NextSyncCommittee = &ethpb.SyncCommittee{
Pubkeys: pubkeys,
AggregatePubkey: aggPubkey,
}
st, err := state_native.InitializeFromProtoGloas(protoState)
require.NoError(t, err)
require.Equal(t, version.Gloas, st.Version())
return st
}
func testBeaconBlockHeader() *ethpb.BeaconBlockHeader {
return &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
}
}

View File

@@ -512,6 +512,11 @@ func (dcs *DataColumnStorage) Get(root [fieldparams.RootLength]byte, indices []u
if err != nil {
return nil, errors.Wrap(err, "data column sidecars file path open")
}
defer func() {
if closeErr := file.Close(); closeErr != nil {
log.WithError(closeErr).WithField("file", filePath).Error("Error closing file during Get")
}
}()
// Read file metadata.
metadata, err := dcs.metadata(file)
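For reference, the same close-in-defer idiom on a plain os.File, as a generic standard-library sketch rather than the DataColumnStorage code:
```
package main

import (
	"io"
	"log"
	"os"
)

// readAll guarantees the file is closed on every return path and logs a Close
// failure instead of silently dropping it.
func readAll(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer func() {
		if closeErr := f.Close(); closeErr != nil {
			log.Printf("error closing %s: %v", path, closeErr)
		}
	}()
	return io.ReadAll(f)
}

func main() {
	if _, err := readAll("example.txt"); err != nil {
		log.Println(err)
	}
}
```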

View File

@@ -1053,6 +1053,10 @@ func (s *Store) getStateUsingStateDiff(ctx context.Context, blockRoot [32]byte)
return nil, err
}
if uint64(slot) < s.getOffset() {
return nil, ErrSlotBeforeOffset
}
st, err := s.stateByDiff(ctx, slot)
if err != nil {
return nil, err
@@ -1070,6 +1074,10 @@ func (s *Store) hasStateUsingStateDiff(ctx context.Context, blockRoot [32]byte)
return false, err
}
if uint64(slot) < s.getOffset() {
return false, ErrSlotBeforeOffset
}
stateLvl := computeLevel(s.getOffset(), slot)
return stateLvl != -1, nil
}
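Both lookups now fail fast with the same sentinel, so callers can branch on it instead of matching error strings. A hedged sketch of such a caller; the fallback function and its name are hypothetical, and only the errors.Is branching reflects the change above:
```
package main

import (
	"context"
	"errors"
	"fmt"
)

// errSlotBeforeOffset mirrors the sentinel returned when the requested slot
// predates the state-diff root offset.
var errSlotBeforeOffset = errors.New("slot is before state-diff root offset")

// getStateDiff and getFromFullSnapshot are hypothetical stand-ins for the real
// lookups; only the error-branching pattern matters here.
func getStateDiff(ctx context.Context, root [32]byte) (string, error) {
	return "", errSlotBeforeOffset
}

func getFromFullSnapshot(ctx context.Context, root [32]byte) (string, error) {
	return "pre-offset state", nil
}

// loadState tries the diff tree first and falls back when the slot is too old.
func loadState(ctx context.Context, root [32]byte) (string, error) {
	st, err := getStateDiff(ctx, root)
	if errors.Is(err, errSlotBeforeOffset) {
		return getFromFullSnapshot(ctx, root)
	}
	if err != nil {
		return "", fmt.Errorf("state diff lookup: %w", err)
	}
	return st, nil
}

func main() {
	fmt.Println(loadState(context.Background(), [32]byte{}))
}
```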

View File

@@ -24,7 +24,7 @@ const (
*/
// SlotInDiffTree returns whether the given slot is a saving point in the diff tree.
// It it is, it also returns the offset and level in the tree.
// If it is, it also returns the offset and level in the tree.
func (s *Store) SlotInDiffTree(slot primitives.Slot) (uint64, int, error) {
offset := s.getOffset()
if uint64(slot) < offset {

View File

@@ -25,7 +25,7 @@ func newStateDiffCache(s *Store) (*stateDiffCache, error) {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get([]byte("offset"))
offsetBytes := bucket.Get(offsetKey)
if offsetBytes == nil {
return errors.New("state diff cache: offset not found")
}

View File

@@ -19,7 +19,7 @@ import (
var (
offsetKey = []byte("offset")
ErrSlotBeforeOffset = errors.New("slot is before root offset")
ErrSlotBeforeOffset = errors.New("slot is before state-diff root offset")
)
func makeKeyForStateDiffTree(level int, slot uint64) []byte {
@@ -73,6 +73,9 @@ func (s *Store) getAnchorState(offset uint64, lvl int, slot primitives.Slot) (an
// computeLevel computes the level in the diff tree. Returns -1 in case slot should not be in tree.
func computeLevel(offset uint64, slot primitives.Slot) int {
if uint64(slot) < offset {
return -1
}
rel := uint64(slot) - offset
for i, exp := range flags.Get().StateDiffExponents {
if exp < 2 || exp >= 64 {

View File

@@ -43,8 +43,12 @@ func TestStateDiff_ComputeLevel(t *testing.T) {
offset := db.getOffset()
// should be -1. slot < offset
lvl := computeLevel(10, primitives.Slot(9))
require.Equal(t, -1, lvl)
// 2 ** 21
lvl := computeLevel(offset, primitives.Slot(math.PowerOf2(21)))
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(21)))
require.Equal(t, 0, lvl)
// 2 ** 21 * 3

View File

@@ -1395,6 +1395,23 @@ func TestStore_CanSaveRetrieveStateUsingStateDiff(t *testing.T) {
require.IsNil(t, readSt)
})
t.Run("slot before offset", func(t *testing.T) {
db := setupDB(t)
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 10)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 9, Root: r[:]}
err = db.SaveStateSummary(t.Context(), ss)
require.NoError(t, err)
st, err := db.getStateUsingStateDiff(t.Context(), r)
require.ErrorIs(t, err, ErrSlotBeforeOffset)
require.IsNil(t, st)
})
t.Run("Full state snapshot", func(t *testing.T) {
t.Run("using state summary", func(t *testing.T) {
for v := range version.All() {
@@ -1627,4 +1644,21 @@ func TestStore_HasStateUsingStateDiff(t *testing.T) {
}
})
t.Run("slot before offset", func(t *testing.T) {
db := setupDB(t)
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 10)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'B'})
ss := &ethpb.StateSummary{Slot: 0, Root: r[:]}
err = db.SaveStateSummary(t.Context(), ss)
require.NoError(t, err)
hasState, err := db.hasStateUsingStateDiff(t.Context(), r)
require.ErrorIs(t, err, ErrSlotBeforeOffset)
require.Equal(t, false, hasState)
})
}

View File

@@ -7,6 +7,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",

View File

@@ -9,6 +9,7 @@ import (
"strings"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v7/network/httputil"
@@ -181,6 +182,16 @@ func prepareConfigSpec() (map[string]any, error) {
data[tag] = convertValueForJSON(val, tag)
}
// Add Fulu preset values. These are compile-time constants from fieldparams,
// not runtime configs, but are required by the /eth/v1/config/spec API.
data["NUMBER_OF_COLUMNS"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.NumberOfColumns)), "NUMBER_OF_COLUMNS")
data["CELLS_PER_EXT_BLOB"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.NumberOfColumns)), "CELLS_PER_EXT_BLOB")
data["FIELD_ELEMENTS_PER_CELL"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.CellsPerBlob)), "FIELD_ELEMENTS_PER_CELL")
data["FIELD_ELEMENTS_PER_EXT_BLOB"] = convertValueForJSON(reflect.ValueOf(config.FieldElementsPerBlob*2), "FIELD_ELEMENTS_PER_EXT_BLOB")
data["KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH"] = convertValueForJSON(reflect.ValueOf(uint64(4)), "KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH")
// UPDATE_TIMEOUT is derived from SLOTS_PER_EPOCH * EPOCHS_PER_SYNC_COMMITTEE_PERIOD
data["UPDATE_TIMEOUT"] = convertValueForJSON(reflect.ValueOf(uint64(config.SlotsPerEpoch)*uint64(config.EpochsPerSyncCommitteePeriod)), "UPDATE_TIMEOUT")
return data, nil
}
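UPDATE_TIMEOUT is derived at request time rather than stored as its own config field. With the mainnet preset (SLOTS_PER_EPOCH = 32, EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256) the same product is 8192; the TestGetSpec case further down exercises it with scrambled config (27 * 66 = 1782). A small sketch of the derivation, with the mainnet numbers assumed rather than read from params.BeaconConfig():
```
package main

import "fmt"

func main() {
	// Mainnet preset values; the handler reads these from the runtime config
	// instead of hard-coding them.
	slotsPerEpoch := uint64(32)
	epochsPerSyncCommitteePeriod := uint64(256)

	// UPDATE_TIMEOUT = SLOTS_PER_EPOCH * EPOCHS_PER_SYNC_COMMITTEE_PERIOD
	fmt.Println(slotsPerEpoch * epochsPerSyncCommitteePeriod) // 8192
}
```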

View File

@@ -132,7 +132,7 @@ func TestGetSpec(t *testing.T) {
config.MinSyncCommitteeParticipants = 71
config.ProposerReorgCutoffBPS = primitives.BP(121)
config.AttestationDueBPS = primitives.BP(122)
config.AggregrateDueBPS = primitives.BP(123)
config.AggregateDueBPS = primitives.BP(123)
config.ContributionDueBPS = primitives.BP(124)
config.TerminalBlockHash = common.HexToHash("TerminalBlockHash")
config.TerminalBlockHashActivationEpoch = 72
@@ -170,6 +170,8 @@ func TestGetSpec(t *testing.T) {
config.SyncMessageDueBPS = 103
config.BuilderWithdrawalPrefixByte = byte('b')
config.BuilderIndexSelfBuild = primitives.BuilderIndex(125)
config.BuilderPaymentThresholdNumerator = 104
config.BuilderPaymentThresholdDenominator = 105
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -210,7 +212,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 178, len(data))
assert.Equal(t, 186, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -468,7 +470,7 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "121", v)
case "ATTESTATION_DUE_BPS":
assert.Equal(t, "122", v)
case "AGGREGRATE_DUE_BPS":
case "AGGREGATE_DUE_BPS":
assert.Equal(t, "123", v)
case "CONTRIBUTION_DUE_BPS":
assert.Equal(t, "124", v)
@@ -588,10 +590,26 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "102", v)
case "SYNC_MESSAGE_DUE_BPS":
assert.Equal(t, "103", v)
case "BUILDER_PAYMENT_THRESHOLD_NUMERATOR":
assert.Equal(t, "104", v)
case "BUILDER_PAYMENT_THRESHOLD_DENOMINATOR":
assert.Equal(t, "105", v)
case "BLOB_SCHEDULE":
blobSchedule, ok := v.([]any)
assert.Equal(t, true, ok)
assert.Equal(t, 2, len(blobSchedule))
case "FIELD_ELEMENTS_PER_CELL":
assert.Equal(t, "64", v) // From fieldparams.CellsPerBlob
case "FIELD_ELEMENTS_PER_EXT_BLOB":
assert.Equal(t, "198", v) // FieldElementsPerBlob (99) * 2
case "KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH":
assert.Equal(t, "4", v) // Preset value
case "CELLS_PER_EXT_BLOB":
assert.Equal(t, "128", v) // From fieldparams.NumberOfColumns
case "NUMBER_OF_COLUMNS":
assert.Equal(t, "128", v) // From fieldparams.NumberOfColumns
case "UPDATE_TIMEOUT":
assert.Equal(t, "1782", v) // SlotsPerEpoch (27) * EpochsPerSyncCommitteePeriod (66)
default:
t.Errorf("Incorrect key: %s", k)
}

View File

@@ -3,12 +3,22 @@ package state
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
type writeOnlyGloasFields interface {
SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error
SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error
ClearBuilderPendingPayment(index primitives.Slot) error
RotateBuilderPendingPayments() error
AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal) error
UpdateExecutionPayloadAvailabilityAtIndex(idx uint64, val byte) error
SetPayloadExpectedWithdrawals(withdrawals []*enginev1.Withdrawal) error
DequeueBuilderPendingWithdrawals(num uint64) error
SetNextWithdrawalBuilderIndex(idx primitives.BuilderIndex) error
DecreaseBuilderBalance(builderIndex primitives.BuilderIndex, amount uint64) error
}
type readOnlyGloasFields interface {
@@ -16,4 +26,8 @@ type readOnlyGloasFields interface {
IsActiveBuilder(primitives.BuilderIndex) (bool, error)
CanBuilderCoverBid(primitives.BuilderIndex, primitives.Gwei) (bool, error)
LatestBlockHash() ([32]byte, error)
BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error)
IsParentBlockFull() (bool, error)
ExpectedWithdrawalsGloas() ([]*enginev1.Withdrawal, uint64, uint64, primitives.BuilderIndex, error)
}

View File

@@ -15,6 +15,7 @@ go_library(
"getters_eth1.go",
"getters_exit.go",
"getters_gloas.go",
"getters_gloas_withdrawals.go",
"getters_misc.go",
"getters_participation.go",
"getters_payload_header.go",

View File

@@ -1,6 +1,7 @@
package state_native
import (
"bytes"
"fmt"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -10,6 +11,22 @@ import (
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
// IsParentBlockFull returns true when the latest execution payload bid was fulfilled, i.e. the bid's block hash matches the latest block hash.
func (b *BeaconState) IsParentBlockFull() (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("IsParentBlockFull", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
if b.latestExecutionPayloadBid == nil {
return false, nil
}
return bytes.Equal(b.latestExecutionPayloadBid.BlockHash, b.latestBlockHash), nil
}
// LatestBlockHash returns the hash of the latest execution block.
func (b *BeaconState) LatestBlockHash() ([32]byte, error) {
if b.version < version.Gloas {
@@ -135,3 +152,68 @@ func (b *BeaconState) builderPendingBalanceToWithdraw(builderIndex primitives.Bu
}
return total
}
// BuilderPendingPayments returns a copy of the builder pending payments.
func (b *BeaconState) BuilderPendingPayments() ([]*ethpb.BuilderPendingPayment, error) {
if b.version < version.Gloas {
return nil, errNotSupported("BuilderPendingPayments", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
return b.builderPendingPaymentsVal(), nil
}
// BuilderPendingWithdrawals returns the builder pending withdrawals queue.
func (b *BeaconState) BuilderPendingWithdrawals() ([]*ethpb.BuilderPendingWithdrawal, error) {
if b.version < version.Gloas {
return nil, errNotSupported("BuilderPendingWithdrawals", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
return b.builderPendingWithdrawalsVal(), nil
}
// BuildersCount returns the number of builders in the registry.
func (b *BeaconState) BuildersCount() int {
if b.version < version.Gloas {
return 0
}
b.lock.RLock()
defer b.lock.RUnlock()
return len(b.builders)
}
// Builders returns the builders registry.
func (b *BeaconState) Builders() ([]*ethpb.Builder, error) {
if b.version < version.Gloas {
return nil, errNotSupported("Builders", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
builders := make([]*ethpb.Builder, len(b.builders))
for i := range b.builders {
builders[i] = ethpb.CopyBuilder(b.builders[i])
}
return builders, nil
}
// NextWithdrawalBuilderIndex returns the next builder index for the withdrawals sweep.
func (b *BeaconState) NextWithdrawalBuilderIndex() (primitives.BuilderIndex, error) {
if b.version < version.Gloas {
return 0, errNotSupported("NextWithdrawalBuilderIndex", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
return b.nextWithdrawalBuilderIndex, nil
}

View File

@@ -44,6 +44,56 @@ func TestLatestBlockHash(t *testing.T) {
})
}
func TestIsParentBlockFull(t *testing.T) {
t.Run("returns error before gloas", func(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
_, err := st.IsParentBlockFull()
require.ErrorContains(t, "is not supported", err)
})
t.Run("returns false when bid is unset", func(t *testing.T) {
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: bytes.Repeat([]byte{0xAB}, 32),
})
require.NoError(t, err)
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.Equal(t, false, full)
})
t.Run("returns true when bid block hash matches latest block hash", func(t *testing.T) {
hashBytes := bytes.Repeat([]byte{0xAB}, 32)
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: hashBytes,
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: hashBytes,
},
})
require.NoError(t, err)
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.Equal(t, true, full)
})
t.Run("returns false when bid block hash does not match latest block hash", func(t *testing.T) {
latest := bytes.Repeat([]byte{0xAB}, 32)
bid := bytes.Repeat([]byte{0xCD}, 32)
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: latest,
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: bid,
},
})
require.NoError(t, err)
full, err := st.IsParentBlockFull()
require.NoError(t, err)
require.Equal(t, false, full)
})
}
func TestBuilderPubkey(t *testing.T) {
t.Run("returns error before gloas", func(t *testing.T) {
stIface, _ := util.DeterministicGenesisState(t, 1)
@@ -157,3 +207,12 @@ func TestBuilderHelpers(t *testing.T) {
require.Equal(t, false, ok)
})
}
func TestBuilderPendingPayments_UnsupportedVersion(t *testing.T) {
stIface, err := state_native.InitializeFromProtoElectra(&ethpb.BeaconStateElectra{})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
_, err = st.BuilderPendingPayments()
require.ErrorContains(t, "BuilderPendingPayments", err)
}

View File

@@ -0,0 +1,167 @@
package state_native
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// ExpectedWithdrawalsGloas returns the withdrawals that a proposer must pack into the next block
// built on the current state. Validators also use it to check that the execution payload carried
// the correct withdrawals.
//
// Spec v1.7.0-alpha.1:
//
// def get_expected_withdrawals(state: BeaconState) -> ExpectedWithdrawals:
// withdrawal_index = state.next_withdrawal_index
// withdrawals: List[Withdrawal] = []
//
// # [New in Gloas:EIP7732]
// # Get builder withdrawals
// builder_withdrawals, withdrawal_index, processed_builder_withdrawals_count = (
// get_builder_withdrawals(state, withdrawal_index, withdrawals)
// )
// withdrawals.extend(builder_withdrawals)
//
// # Get partial withdrawals
// partial_withdrawals, withdrawal_index, processed_partial_withdrawals_count = (
// get_pending_partial_withdrawals(state, withdrawal_index, withdrawals)
// )
// withdrawals.extend(partial_withdrawals)
//
// # [New in Gloas:EIP7732]
// # Get builders sweep withdrawals
// builders_sweep_withdrawals, withdrawal_index, processed_builders_sweep_count = (
// get_builders_sweep_withdrawals(state, withdrawal_index, withdrawals)
// )
// withdrawals.extend(builders_sweep_withdrawals)
//
// # Get validators sweep withdrawals
// validators_sweep_withdrawals, withdrawal_index, processed_validators_sweep_count = (
// get_validators_sweep_withdrawals(state, withdrawal_index, withdrawals)
// )
// withdrawals.extend(validators_sweep_withdrawals)
//
// return ExpectedWithdrawals(
// withdrawals,
// # [New in Gloas:EIP7732]
// processed_builder_withdrawals_count,
// processed_partial_withdrawals_count,
// # [New in Gloas:EIP7732]
// processed_builders_sweep_count,
// processed_validators_sweep_count,
// )
func (b *BeaconState) ExpectedWithdrawalsGloas() ([]*enginev1.Withdrawal, uint64, uint64, primitives.BuilderIndex, error) {
if b.version < version.Gloas {
return nil, 0, 0, 0, errNotSupported("ExpectedWithdrawalsGloas", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
cfg := params.BeaconConfig()
withdrawals := make([]*enginev1.Withdrawal, 0, cfg.MaxWithdrawalsPerPayload)
withdrawalIndex := b.nextWithdrawalIndex
withdrawalIndex, processedBuilderWithdrawalsCount, err := b.appendBuilderWithdrawalsGloas(withdrawalIndex, &withdrawals)
if err != nil {
return nil, 0, 0, 0, err
}
withdrawalIndex, processedPartialWithdrawalsCount, err := b.appendPendingPartialWithdrawals(withdrawalIndex, &withdrawals)
if err != nil {
return nil, 0, 0, 0, err
}
withdrawalIndex, processedBuildersSweepCount, err := b.appendBuildersSweepWithdrawalsGloas(withdrawalIndex, &withdrawals)
if err != nil {
return nil, 0, 0, 0, err
}
nextBuilderIndex := b.nextWithdrawalBuilderIndex
if buildersLen := len(b.builders); buildersLen > 0 {
nextBuilderIndex = primitives.BuilderIndex((uint64(nextBuilderIndex) + processedBuildersSweepCount) % uint64(buildersLen))
}
err = b.appendValidatorsSweepWithdrawals(withdrawalIndex, &withdrawals)
if err != nil {
return nil, 0, 0, 0, err
}
return withdrawals, processedBuilderWithdrawalsCount, processedPartialWithdrawalsCount, nextBuilderIndex, nil
}
func (b *BeaconState) appendBuilderWithdrawalsGloas(withdrawalIndex uint64, withdrawals *[]*enginev1.Withdrawal) (uint64, uint64, error) {
cfg := params.BeaconConfig()
withdrawalsLimit := cfg.MaxWithdrawalsPerPayload - 1
ws := *withdrawals
var processedCount uint64
for _, w := range b.builderPendingWithdrawals {
if uint64(len(ws)) >= withdrawalsLimit {
break
}
ws = append(ws, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(
uint64(primitives.BuilderIndex(w.BuilderIndex)) | cfg.BuilderIndexFlag,
),
Address: bytesutil.SafeCopyBytes(w.FeeRecipient),
Amount: uint64(w.Amount),
})
withdrawalIndex++
processedCount++
}
*withdrawals = ws
return withdrawalIndex, processedCount, nil
}
func (b *BeaconState) appendBuildersSweepWithdrawalsGloas(withdrawalIndex uint64, withdrawals *[]*enginev1.Withdrawal) (uint64, uint64, error) {
cfg := params.BeaconConfig()
withdrawalsLimit := cfg.MaxWithdrawalsPerPayload - 1
priorWithdrawalCount := uint64(len(*withdrawals))
if priorWithdrawalCount >= withdrawalsLimit || len(b.builders) == 0 {
return withdrawalIndex, 0, nil
}
ws := *withdrawals
epoch := slots.ToEpoch(b.slot)
buildersLimit := len(b.builders)
if maxBuilders := int(cfg.MaxBuildersPerWithdrawalsSweep); buildersLimit > maxBuilders {
buildersLimit = maxBuilders
}
builderIndex := b.nextWithdrawalBuilderIndex
if uint64(builderIndex) >= uint64(len(b.builders)) {
return withdrawalIndex, 0, fmt.Errorf("next withdrawal builder index %d out of range", builderIndex)
}
var processedCount uint64
for i := 0; i < buildersLimit; i++ {
if uint64(len(ws)) >= withdrawalsLimit {
break
}
builder := b.builders[builderIndex]
if builder != nil && builder.WithdrawableEpoch <= epoch && builder.Balance > 0 {
ws = append(ws, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: primitives.ValidatorIndex(uint64(builderIndex) | cfg.BuilderIndexFlag),
Address: bytesutil.SafeCopyBytes(builder.ExecutionAddress),
Amount: uint64(builder.Balance),
})
withdrawalIndex++
}
builderIndex = primitives.BuilderIndex((uint64(builderIndex) + 1) % uint64(len(b.builders)))
processedCount++
}
*withdrawals = ws
return withdrawalIndex, processedCount, nil
}
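The extra return values let a caller advance the builder queues after computing the expected withdrawals. A hedged sketch of how such a caller might consume them; the interface below is a trimmed stand-in with method names taken from this diff and simplified types, not the real process_withdrawals code:
```
package main

import "fmt"

// withdrawal is a trimmed stand-in for enginev1.Withdrawal.
type withdrawal struct {
	Index  uint64
	Amount uint64
}

// gloasWithdrawalState mirrors the subset of the state interface used here,
// with BuilderIndex simplified to uint64.
type gloasWithdrawalState interface {
	ExpectedWithdrawalsGloas() ([]*withdrawal, uint64, uint64, uint64, error)
	DequeueBuilderPendingWithdrawals(n uint64) error
	SetNextWithdrawalBuilderIndex(idx uint64) error
}

// applyExpectedWithdrawals asks the state for the expected withdrawals, then
// records how far the builder queue and the builder sweep advanced.
func applyExpectedWithdrawals(st gloasWithdrawalState) ([]*withdrawal, error) {
	ws, builderCount, partialCount, nextBuilder, err := st.ExpectedWithdrawalsGloas()
	if err != nil {
		return nil, err
	}
	if err := st.DequeueBuilderPendingWithdrawals(builderCount); err != nil {
		return nil, err
	}
	if err := st.SetNextWithdrawalBuilderIndex(nextBuilder); err != nil {
		return nil, err
	}
	fmt.Printf("packed %d withdrawals (%d builder, %d partial)\n", len(ws), builderCount, partialCount)
	return ws, nil
}

func main() {
	// Compile-only sketch; a real caller would pass the Gloas beacon state here.
	_ = applyExpectedWithdrawals
}
```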

View File

@@ -725,3 +725,13 @@ func ProtobufBeaconStateFulu(s any) (*ethpb.BeaconStateFulu, error) {
}
return pbState, nil
}
// ProtobufBeaconStateGloas transforms an input into beacon state Gloas in the form of protobuf.
// Error is returned if the input is not type protobuf beacon state.
func ProtobufBeaconStateGloas(s any) (*ethpb.BeaconStateGloas, error) {
pbState, ok := s.(*ethpb.BeaconStateGloas)
if !ok {
return nil, errors.New("input is not type pb.BeaconStateGloas")
}
return pbState, nil
}

View File

@@ -113,77 +113,111 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
defer b.lock.RUnlock()
withdrawals := make([]*enginev1.Withdrawal, 0, params.BeaconConfig().MaxWithdrawalsPerPayload)
validatorIndex := b.nextWithdrawalValidatorIndex
withdrawalIndex := b.nextWithdrawalIndex
epoch := slots.ToEpoch(b.slot)
// Electra partial withdrawals functionality.
var processedPartialWithdrawalsCount uint64
if b.version >= version.Electra {
for _, w := range b.pendingPartialWithdrawals {
if w.WithdrawableEpoch > epoch || len(withdrawals) >= int(params.BeaconConfig().MaxPendingPartialsPerWithdrawalsSweep) {
break
}
v, err := b.validatorAtIndexReadOnly(w.Index)
if err != nil {
return nil, 0, fmt.Errorf("failed to determine withdrawals at index %d: %w", w.Index, err)
}
vBal, err := b.balanceAtIndex(w.Index)
if err != nil {
return nil, 0, fmt.Errorf("could not retrieve balance at index %d: %w", w.Index, err)
}
hasSufficientEffectiveBalance := v.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
var totalWithdrawn uint64
for _, wi := range withdrawals {
if wi.ValidatorIndex == w.Index {
totalWithdrawn += wi.Amount
}
}
balance, err := mathutil.Sub64(vBal, totalWithdrawn)
if err != nil {
return nil, 0, errors.Wrapf(err, "failed to subtract balance %d with total withdrawn %d", vBal, totalWithdrawn)
}
hasExcessBalance := balance > params.BeaconConfig().MinActivationBalance
if v.ExitEpoch() == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
amount := min(balance-params.BeaconConfig().MinActivationBalance, w.Amount)
withdrawals = append(withdrawals, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: w.Index,
Address: v.GetWithdrawalCredentials()[12:],
Amount: amount,
})
withdrawalIndex++
}
processedPartialWithdrawalsCount++
}
withdrawalIndex, processedPartialWithdrawalsCount, err := b.appendPendingPartialWithdrawals(withdrawalIndex, &withdrawals)
if err != nil {
return nil, 0, err
}
err = b.appendValidatorsSweepWithdrawals(withdrawalIndex, &withdrawals)
if err != nil {
return nil, 0, err
}
return withdrawals, processedPartialWithdrawalsCount, nil
}
func (b *BeaconState) appendPendingPartialWithdrawals(withdrawalIndex uint64, withdrawals *[]*enginev1.Withdrawal) (uint64, uint64, error) {
if b.version < version.Electra {
return withdrawalIndex, 0, nil
}
cfg := params.BeaconConfig()
withdrawalsLimit := min(
uint64(len(*withdrawals))+cfg.MaxPendingPartialsPerWithdrawalsSweep,
cfg.MaxWithdrawalsPerPayload-1,
)
if uint64(len(*withdrawals)) > withdrawalsLimit {
return withdrawalIndex, 0, fmt.Errorf("prior withdrawals length %d exceeds limit %d", len(*withdrawals), withdrawalsLimit)
}
ws := *withdrawals
epoch := slots.ToEpoch(b.slot)
var processedPartialWithdrawalsCount uint64
for _, w := range b.pendingPartialWithdrawals {
isWithdrawable := w.WithdrawableEpoch <= epoch
hasReachedLimit := uint64(len(ws)) >= withdrawalsLimit
if !isWithdrawable || hasReachedLimit {
break
}
v, err := b.validatorAtIndexReadOnly(w.Index)
if err != nil {
return withdrawalIndex, 0, fmt.Errorf("failed to determine withdrawals at index %d: %w", w.Index, err)
}
vBal, err := b.balanceAtIndex(w.Index)
if err != nil {
return withdrawalIndex, 0, fmt.Errorf("could not retrieve balance at index %d: %w", w.Index, err)
}
hasSufficientEffectiveBalance := v.EffectiveBalance() >= cfg.MinActivationBalance
var totalWithdrawn uint64
for _, wi := range ws {
if wi.ValidatorIndex == w.Index {
totalWithdrawn += wi.Amount
}
}
balance, err := mathutil.Sub64(vBal, totalWithdrawn)
if err != nil {
return withdrawalIndex, 0, errors.Wrapf(err, "failed to subtract balance %d with total withdrawn %d", vBal, totalWithdrawn)
}
hasExcessBalance := balance > cfg.MinActivationBalance
if v.ExitEpoch() == cfg.FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
amount := min(balance-cfg.MinActivationBalance, w.Amount)
ws = append(ws, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: w.Index,
Address: v.GetWithdrawalCredentials()[12:],
Amount: amount,
})
withdrawalIndex++
}
processedPartialWithdrawalsCount++
}
*withdrawals = ws
return withdrawalIndex, processedPartialWithdrawalsCount, nil
}
func (b *BeaconState) appendValidatorsSweepWithdrawals(withdrawalIndex uint64, withdrawals *[]*enginev1.Withdrawal) error {
ws := *withdrawals
validatorIndex := b.nextWithdrawalValidatorIndex
validatorsLen := b.validatorsLen()
epoch := slots.ToEpoch(b.slot)
bound := min(uint64(validatorsLen), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
for range bound {
val, err := b.validatorAtIndexReadOnly(validatorIndex)
if err != nil {
return nil, 0, errors.Wrapf(err, "could not retrieve validator at index %d", validatorIndex)
return errors.Wrapf(err, "could not retrieve validator at index %d", validatorIndex)
}
balance, err := b.balanceAtIndex(validatorIndex)
if err != nil {
return nil, 0, errors.Wrapf(err, "could not retrieve balance at index %d", validatorIndex)
return errors.Wrapf(err, "could not retrieve balance at index %d", validatorIndex)
}
if b.version >= version.Electra {
var partiallyWithdrawnBalance uint64
for _, w := range withdrawals {
for _, w := range ws {
if w.ValidatorIndex == validatorIndex {
partiallyWithdrawnBalance += w.Amount
}
}
balance, err = mathutil.Sub64(balance, partiallyWithdrawnBalance)
if err != nil {
return nil, 0, errors.Wrapf(err, "could not subtract balance %d with partial withdrawn balance %d", balance, partiallyWithdrawnBalance)
return errors.Wrapf(err, "could not subtract balance %d with partial withdrawn balance %d", balance, partiallyWithdrawnBalance)
}
}
if helpers.IsFullyWithdrawableValidator(val, balance, epoch, b.version) {
withdrawals = append(withdrawals, &enginev1.Withdrawal{
ws = append(ws, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: validatorIndex,
Address: bytesutil.SafeCopyBytes(val.GetWithdrawalCredentials()[ETH1AddressOffset:]),
@@ -191,7 +225,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
})
withdrawalIndex++
} else if helpers.IsPartiallyWithdrawableValidator(val, balance, epoch, b.version) {
withdrawals = append(withdrawals, &enginev1.Withdrawal{
ws = append(ws, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: validatorIndex,
Address: bytesutil.SafeCopyBytes(val.GetWithdrawalCredentials()[ETH1AddressOffset:]),
@@ -199,7 +233,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
})
withdrawalIndex++
}
if uint64(len(withdrawals)) == params.BeaconConfig().MaxWithdrawalsPerPayload {
if uint64(len(ws)) == params.BeaconConfig().MaxWithdrawalsPerPayload {
break
}
validatorIndex += 1
@@ -208,7 +242,8 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
}
}
return withdrawals, processedPartialWithdrawalsCount, nil
*withdrawals = ws
return nil
}
func (b *BeaconState) PendingPartialWithdrawals() ([]*ethpb.PendingPartialWithdrawal, error) {
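The appendPendingPartialWithdrawals helper above caps each partial withdrawal twice: the validator keeps at least MIN_ACTIVATION_BALANCE, and the requested amount is an upper bound. A small worked sketch of that arithmetic, assuming the mainnet MIN_ACTIVATION_BALANCE of 32 ETH in Gwei and made-up balances:
```
package main

import "fmt"

const minActivationBalance = 32_000_000_000 // 32 ETH in Gwei (mainnet value)

// partialWithdrawalAmount mirrors the helper's cap:
// min(balance - MIN_ACTIVATION_BALANCE, requested), and nothing at all
// unless the balance exceeds the minimum.
func partialWithdrawalAmount(balance, requested uint64) uint64 {
	if balance <= minActivationBalance {
		return 0 // not partially withdrawable
	}
	return min(balance-minActivationBalance, requested)
}

func main() {
	// Validator holds 33 ETH and requested a 2 ETH partial withdrawal:
	// only the 1 ETH above the minimum can leave.
	fmt.Println(partialWithdrawalAmount(33_000_000_000, 2_000_000_000)) // 1000000000
}
```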

View File

@@ -1,15 +1,124 @@
package state_native
import (
"errors"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
// SetPayloadExpectedWithdrawals stores the expected withdrawals for the next payload.
func (b *BeaconState) SetPayloadExpectedWithdrawals(withdrawals []*enginev1.Withdrawal) error {
if b.version < version.Gloas {
return errNotSupported("SetPayloadExpectedWithdrawals", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
b.payloadExpectedWithdrawals = withdrawals
b.markFieldAsDirty(types.PayloadExpectedWithdrawals)
return nil
}
// RotateBuilderPendingPayments rotates the queue by dropping SLOTS_PER_EPOCH payments from the
// front and appending SLOTS_PER_EPOCH empty payments to the end.
// This implements: state.builder_pending_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:] + [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
func (b *BeaconState) RotateBuilderPendingPayments() error {
if b.version < version.Gloas {
return errNotSupported("RotateBuilderPendingPayments", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
copy(b.builderPendingPayments[:slotsPerEpoch], b.builderPendingPayments[slotsPerEpoch:2*slotsPerEpoch])
for i := slotsPerEpoch; i < primitives.Slot(len(b.builderPendingPayments)); i++ {
b.builderPendingPayments[i] = emptyBuilderPendingPayment
}
b.markFieldAsDirty(types.BuilderPendingPayments)
b.rebuildTrie[types.BuilderPendingPayments] = true
return nil
}
// emptyBuilderPendingPayment is a shared zero-value payment used to clear entries.
var emptyBuilderPendingPayment = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
// AppendBuilderPendingWithdrawals appends builder pending withdrawals to the beacon state.
// If the underlying slice is shared with another state, it is copied first (copy-on-write) so the shared copy is not mutated.
func (b *BeaconState) AppendBuilderPendingWithdrawals(withdrawals []*ethpb.BuilderPendingWithdrawal) error {
if b.version < version.Gloas {
return errNotSupported("AppendBuilderPendingWithdrawals", b.version)
}
if len(withdrawals) == 0 {
return nil
}
b.lock.Lock()
defer b.lock.Unlock()
pendingWithdrawals := b.builderPendingWithdrawals
if b.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs() > 1 {
pendingWithdrawals = make([]*ethpb.BuilderPendingWithdrawal, 0, len(b.builderPendingWithdrawals)+len(withdrawals))
pendingWithdrawals = append(pendingWithdrawals, b.builderPendingWithdrawals...)
b.sharedFieldReferences[types.BuilderPendingWithdrawals].MinusRef()
b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1)
}
b.builderPendingWithdrawals = append(pendingWithdrawals, withdrawals...)
b.markFieldAsDirty(types.BuilderPendingWithdrawals)
return nil
}
// DequeueBuilderPendingWithdrawals removes processed builder withdrawals from the front of the queue.
func (b *BeaconState) DequeueBuilderPendingWithdrawals(n uint64) error {
if b.version < version.Gloas {
return errNotSupported("DequeueBuilderPendingWithdrawals", b.version)
}
if n > uint64(len(b.builderPendingWithdrawals)) {
return errors.New("cannot dequeue more builder withdrawals than are in the queue")
}
if n == 0 {
return nil
}
b.lock.Lock()
defer b.lock.Unlock()
if b.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs() > 1 {
withdrawals := make([]*ethpb.BuilderPendingWithdrawal, len(b.builderPendingWithdrawals))
copy(withdrawals, b.builderPendingWithdrawals)
b.builderPendingWithdrawals = withdrawals
b.sharedFieldReferences[types.BuilderPendingWithdrawals].MinusRef()
b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1)
}
b.builderPendingWithdrawals = b.builderPendingWithdrawals[n:]
b.markFieldAsDirty(types.BuilderPendingWithdrawals)
b.rebuildTrie[types.BuilderPendingWithdrawals] = true
return nil
}
// SetExecutionPayloadBid sets the latest execution payload bid in the state.
func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error {
if b.version < version.Gloas {
@@ -23,7 +132,7 @@ func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid)
parentBlockRoot := h.ParentBlockRoot()
blockHash := h.BlockHash()
randao := h.PrevRandao()
blobKzgCommitmentsRoot := h.BlobKzgCommitmentsRoot()
blobKzgCommitments := bytesutil.SafeCopy2dBytes(h.BlobKzgCommitments())
feeRecipient := h.FeeRecipient()
b.latestExecutionPayloadBid = &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentBlockHash[:],
@@ -35,7 +144,7 @@ func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid)
Slot: h.Slot(),
Value: h.Value(),
ExecutionPayment: h.ExecutionPayment(),
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot[:],
BlobKzgCommitments: blobKzgCommitments,
FeeRecipient: feeRecipient[:],
}
b.markFieldAsDirty(types.LatestExecutionPayloadBid)
@@ -43,6 +152,25 @@ func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid)
return nil
}
// ClearBuilderPendingPayment clears a builder pending payment at the specified index.
func (b *BeaconState) ClearBuilderPendingPayment(index primitives.Slot) error {
if b.version < version.Gloas {
return errNotSupported("ClearBuilderPendingPayment", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
if uint64(index) >= uint64(len(b.builderPendingPayments)) {
return fmt.Errorf("builder pending payments index %d out of range (len=%d)", index, len(b.builderPendingPayments))
}
b.builderPendingPayments[index] = emptyBuilderPendingPayment
b.markFieldAsDirty(types.BuilderPendingPayments)
return nil
}
// SetBuilderPendingPayment sets a builder pending payment at the specified index.
func (b *BeaconState) SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error {
if b.version < version.Gloas {
@@ -61,3 +189,87 @@ func (b *BeaconState) SetBuilderPendingPayment(index primitives.Slot, payment *e
b.markFieldAsDirty(types.BuilderPendingPayments)
return nil
}
// UpdateExecutionPayloadAvailabilityAtIndex updates the execution payload availability bit at a specific index.
func (b *BeaconState) UpdateExecutionPayloadAvailabilityAtIndex(idx uint64, val byte) error {
b.lock.Lock()
defer b.lock.Unlock()
byteIndex := idx / 8
bitIndex := idx % 8
if byteIndex >= uint64(len(b.executionPayloadAvailability)) {
return fmt.Errorf("bit index %d (byte index %d) out of range for execution payload availability length %d", idx, byteIndex, len(b.executionPayloadAvailability))
}
if val != 0 {
b.executionPayloadAvailability[byteIndex] |= (1 << bitIndex)
} else {
b.executionPayloadAvailability[byteIndex] &^= (1 << bitIndex)
}
b.markFieldAsDirty(types.ExecutionPayloadAvailability)
return nil
}
// SetNextWithdrawalBuilderIndex sets the next builder index for the withdrawals sweep.
func (b *BeaconState) SetNextWithdrawalBuilderIndex(index primitives.BuilderIndex) error {
if b.version < version.Gloas {
return errNotSupported("SetNextWithdrawalBuilderIndex", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
b.nextWithdrawalBuilderIndex = index
b.markFieldAsDirty(types.NextWithdrawalBuilderIndex)
return nil
}
// DecreaseBuilderBalance decreases the builder's balance by amount (saturating at 0).
func (b *BeaconState) DecreaseBuilderBalance(builderIndex primitives.BuilderIndex, amount uint64) error {
if b.version < version.Gloas {
return errNotSupported("DecreaseBuilderBalance", b.version)
}
if amount == 0 {
return nil
}
b.lock.Lock()
defer b.lock.Unlock()
idx := uint64(builderIndex)
if idx >= uint64(len(b.builders)) {
return fmt.Errorf("builder index %d out of range (len=%d)", builderIndex, len(b.builders))
}
// Copy-on-write for shared builders registry.
if b.sharedFieldReferences[types.Builders].Refs() > 1 {
builders := make([]*ethpb.Builder, len(b.builders))
copy(builders, b.builders)
b.builders = builders
b.sharedFieldReferences[types.Builders].MinusRef()
b.sharedFieldReferences[types.Builders] = stateutil.NewRef(1)
// Ensure we don't mutate a shared builder pointer.
if b.builders[idx] != nil {
b.builders[idx] = ethpb.CopyBuilder(b.builders[idx])
}
}
builder := b.builders[idx]
if builder == nil {
return fmt.Errorf("builder at index %d is nil", builderIndex)
}
bal := uint64(builder.Balance)
if amount >= bal {
builder.Balance = 0
} else {
builder.Balance = primitives.Gwei(bal - amount)
}
b.markFieldAsDirty(types.Builders)
b.addDirtyIndices(types.Builders, []uint64{idx})
return nil
}
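Several of these setters repeat the same copy-on-write discipline: when the field's reference count shows another state still points at the slice, clone it before mutating and take a fresh single-owner reference. A stripped-down sketch of that pattern with a toy reference counter, not the stateutil API:
```
package main

import "fmt"

// ref is a toy stand-in for a shared field reference counter.
type ref struct{ n uint }

func (r *ref) Refs() uint { return r.n }
func (r *ref) MinusRef()  { r.n-- }

type queue struct {
	items  []uint64
	shared *ref
}

// dequeue drops n items, cloning the slice first when it is shared so sibling
// states keep seeing the original contents.
func (q *queue) dequeue(n uint64) {
	if q.shared.Refs() > 1 {
		cloned := make([]uint64, len(q.items))
		copy(cloned, q.items)
		q.items = cloned
		q.shared.MinusRef()
		q.shared = &ref{n: 1}
	}
	q.items = q.items[n:]
}

func main() {
	shared := &ref{n: 2}
	items := []uint64{1, 2, 3}
	a := &queue{items: items, shared: shared}
	b := &queue{items: items, shared: shared}
	a.dequeue(2)
	fmt.Println(a.items, b.items) // [3] [1 2 3]
}
```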

View File

@@ -5,7 +5,10 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -16,7 +19,7 @@ type testExecutionPayloadBid struct {
parentBlockRoot [32]byte
blockHash [32]byte
prevRandao [32]byte
blobKzgCommitmentsRoot [32]byte
blobKzgCommitments [][]byte
feeRecipient [20]byte
gasLimit uint64
builderIndex primitives.BuilderIndex
@@ -38,9 +41,9 @@ func (t testExecutionPayloadBid) Value() primitives.Gwei { return t.value }
func (t testExecutionPayloadBid) ExecutionPayment() primitives.Gwei {
return t.executionPayment
}
func (t testExecutionPayloadBid) BlobKzgCommitmentsRoot() [32]byte { return t.blobKzgCommitmentsRoot }
func (t testExecutionPayloadBid) FeeRecipient() [20]byte { return t.feeRecipient }
func (t testExecutionPayloadBid) IsNil() bool { return false }
func (t testExecutionPayloadBid) BlobKzgCommitments() [][]byte { return t.blobKzgCommitments }
func (t testExecutionPayloadBid) FeeRecipient() [20]byte { return t.feeRecipient }
func (t testExecutionPayloadBid) IsNil() bool { return false }
func TestSetExecutionPayloadBid(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
@@ -55,7 +58,7 @@ func TestSetExecutionPayloadBid(t *testing.T) {
parentBlockRoot = [32]byte(bytes.Repeat([]byte{0xCD}, 32))
blockHash = [32]byte(bytes.Repeat([]byte{0xEF}, 32))
prevRandao = [32]byte(bytes.Repeat([]byte{0x11}, 32))
blobRoot = [32]byte(bytes.Repeat([]byte{0x22}, 32))
blobCommitments = [][]byte{bytes.Repeat([]byte{0x22}, 48)}
feeRecipient [20]byte
)
copy(feeRecipient[:], bytes.Repeat([]byte{0x33}, len(feeRecipient)))
@@ -68,7 +71,7 @@ func TestSetExecutionPayloadBid(t *testing.T) {
parentBlockRoot: parentBlockRoot,
blockHash: blockHash,
prevRandao: prevRandao,
blobKzgCommitmentsRoot: blobRoot,
blobKzgCommitments: blobCommitments,
feeRecipient: feeRecipient,
gasLimit: 123,
builderIndex: 7,
@@ -84,7 +87,7 @@ func TestSetExecutionPayloadBid(t *testing.T) {
require.DeepEqual(t, parentBlockRoot[:], st.latestExecutionPayloadBid.ParentBlockRoot)
require.DeepEqual(t, blockHash[:], st.latestExecutionPayloadBid.BlockHash)
require.DeepEqual(t, prevRandao[:], st.latestExecutionPayloadBid.PrevRandao)
require.DeepEqual(t, blobRoot[:], st.latestExecutionPayloadBid.BlobKzgCommitmentsRoot)
require.DeepEqual(t, blobCommitments, st.latestExecutionPayloadBid.BlobKzgCommitments)
require.DeepEqual(t, feeRecipient[:], st.latestExecutionPayloadBid.FeeRecipient)
require.Equal(t, uint64(123), st.latestExecutionPayloadBid.GasLimit)
require.Equal(t, primitives.BuilderIndex(7), st.latestExecutionPayloadBid.BuilderIndex)
@@ -138,3 +141,403 @@ func TestSetBuilderPendingPayment(t *testing.T) {
require.Equal(t, false, st.dirtyFields[types.BuilderPendingPayments])
})
}
func TestClearBuilderPendingPayment(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.ClearBuilderPendingPayment(0)
require.ErrorContains(t, "is not supported", err)
})
t.Run("clears and marks dirty", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, 2),
}
st.builderPendingPayments[1] = &ethpb.BuilderPendingPayment{
Weight: 2,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
Amount: 99,
BuilderIndex: 1,
},
}
require.NoError(t, st.ClearBuilderPendingPayment(1))
require.Equal(t, emptyBuilderPendingPayment, st.builderPendingPayments[1])
require.Equal(t, true, st.dirtyFields[types.BuilderPendingPayments])
})
t.Run("returns error on out of range index", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, 1),
}
err := st.ClearBuilderPendingPayment(2)
require.ErrorContains(t, "out of range", err)
require.Equal(t, false, st.dirtyFields[types.BuilderPendingPayments])
})
}
func TestRotateBuilderPendingPayments(t *testing.T) {
totalPayments := 2 * params.BeaconConfig().SlotsPerEpoch
payments := make([]*ethpb.BuilderPendingPayment, totalPayments)
for i := range payments {
idx := uint64(i)
payments[i] = &ethpb.BuilderPendingPayment{
Weight: primitives.Gwei(idx * 100e9),
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: primitives.Gwei(idx * 1e9),
BuilderIndex: primitives.BuilderIndex(idx + 100),
},
}
}
statePb, err := InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
BuilderPendingPayments: payments,
})
require.NoError(t, err)
st, ok := statePb.(*BeaconState)
require.Equal(t, true, ok)
oldPayments, err := st.BuilderPendingPayments()
require.NoError(t, err)
require.NoError(t, st.RotateBuilderPendingPayments())
newPayments, err := st.BuilderPendingPayments()
require.NoError(t, err)
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
for i := range slotsPerEpoch {
require.DeepEqual(t, oldPayments[slotsPerEpoch+i], newPayments[i])
}
for i := slotsPerEpoch; i < 2*slotsPerEpoch; i++ {
payment := newPayments[i]
require.Equal(t, primitives.Gwei(0), payment.Weight)
require.Equal(t, 20, len(payment.Withdrawal.FeeRecipient))
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
require.Equal(t, primitives.BuilderIndex(0), payment.Withdrawal.BuilderIndex)
}
}
func TestRotateBuilderPendingPayments_UnsupportedVersion(t *testing.T) {
st := &BeaconState{version: version.Electra}
err := st.RotateBuilderPendingPayments()
require.ErrorContains(t, "RotateBuilderPendingPayments", err)
}
func TestSetPayloadExpectedWithdrawals(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.SetPayloadExpectedWithdrawals([]*enginev1.Withdrawal{})
require.ErrorContains(t, "SetPayloadExpectedWithdrawals", err)
})
t.Run("rejects nil input", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.PayloadExpectedWithdrawals: stateutil.NewRef(1),
},
}
err := st.SetPayloadExpectedWithdrawals(nil)
require.ErrorContains(t, "cannot set nil payload expected withdrawals", err)
require.Equal(t, false, st.dirtyFields[types.PayloadExpectedWithdrawals])
})
t.Run("sets and marks dirty", func(t *testing.T) {
oldRef := stateutil.NewRef(2)
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.PayloadExpectedWithdrawals: oldRef,
},
payloadExpectedWithdrawals: []*enginev1.Withdrawal{{Index: 1}, {Index: 2}},
}
withdrawals := []*enginev1.Withdrawal{{Index: 3}}
require.NoError(t, st.SetPayloadExpectedWithdrawals(withdrawals))
require.DeepEqual(t, withdrawals, st.payloadExpectedWithdrawals)
require.Equal(t, true, st.dirtyFields[types.PayloadExpectedWithdrawals])
require.Equal(t, uint(1), oldRef.Refs())
require.Equal(t, uint(1), st.sharedFieldReferences[types.PayloadExpectedWithdrawals].Refs())
require.Equal(t, false, st.sharedFieldReferences[types.PayloadExpectedWithdrawals] == oldRef)
})
}
func TestDequeueBuilderPendingWithdrawals(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.DequeueBuilderPendingWithdrawals(1)
require.ErrorContains(t, "DequeueBuilderPendingWithdrawals", err)
})
t.Run("returns error when dequeueing more than length", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: stateutil.NewRef(1),
},
builderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{{Amount: 1}},
}
err := st.DequeueBuilderPendingWithdrawals(2)
require.ErrorContains(t, "cannot dequeue more builder withdrawals", err)
require.Equal(t, 1, len(st.builderPendingWithdrawals))
require.Equal(t, false, st.dirtyFields[types.BuilderPendingWithdrawals])
})
t.Run("no-op on zero", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: stateutil.NewRef(1),
},
builderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{{Amount: 1}},
}
require.NoError(t, st.DequeueBuilderPendingWithdrawals(0))
require.Equal(t, 1, len(st.builderPendingWithdrawals))
require.Equal(t, false, st.dirtyFields[types.BuilderPendingWithdrawals])
require.Equal(t, false, st.rebuildTrie[types.BuilderPendingWithdrawals])
})
t.Run("dequeues and marks dirty", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: stateutil.NewRef(1),
},
builderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{
{Amount: 1},
{Amount: 2},
{Amount: 3},
},
rebuildTrie: make(map[types.FieldIndex]bool),
}
require.NoError(t, st.DequeueBuilderPendingWithdrawals(2))
require.Equal(t, 1, len(st.builderPendingWithdrawals))
require.Equal(t, primitives.Gwei(3), st.builderPendingWithdrawals[0].Amount)
require.Equal(t, true, st.dirtyFields[types.BuilderPendingWithdrawals])
require.Equal(t, true, st.rebuildTrie[types.BuilderPendingWithdrawals])
})
t.Run("copy-on-write preserves shared state", func(t *testing.T) {
sharedRef := stateutil.NewRef(2)
sharedWithdrawals := []*ethpb.BuilderPendingWithdrawal{
{Amount: 1},
{Amount: 2},
{Amount: 3},
}
st1 := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: sharedRef,
},
builderPendingWithdrawals: sharedWithdrawals,
rebuildTrie: make(map[types.FieldIndex]bool),
}
st2 := &BeaconState{
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: sharedRef,
},
builderPendingWithdrawals: sharedWithdrawals,
}
require.NoError(t, st1.DequeueBuilderPendingWithdrawals(2))
require.Equal(t, primitives.Gwei(3), st1.builderPendingWithdrawals[0].Amount)
require.Equal(t, 3, len(st2.builderPendingWithdrawals))
require.Equal(t, primitives.Gwei(1), st2.builderPendingWithdrawals[0].Amount)
require.Equal(t, uint(1), st1.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
require.Equal(t, uint(1), st2.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
})
}
func TestSetNextWithdrawalBuilderIndex(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.SetNextWithdrawalBuilderIndex(1)
require.ErrorContains(t, "SetNextWithdrawalBuilderIndex", err)
})
t.Run("sets and marks dirty", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
}
require.NoError(t, st.SetNextWithdrawalBuilderIndex(7))
require.Equal(t, primitives.BuilderIndex(7), st.nextWithdrawalBuilderIndex)
require.Equal(t, true, st.dirtyFields[types.NextWithdrawalBuilderIndex])
})
}
func TestAppendBuilderPendingWithdrawal_CopyOnWrite(t *testing.T) {
wd := &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: 1,
BuilderIndex: 2,
}
statePb, err := InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{wd},
})
require.NoError(t, err)
st, ok := statePb.(*BeaconState)
require.Equal(t, true, ok)
copied := st.Copy().(*BeaconState)
require.Equal(t, uint(2), st.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
appended := &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
Amount: 4,
BuilderIndex: 5,
}
require.NoError(t, copied.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{appended}))
require.Equal(t, 1, len(st.builderPendingWithdrawals))
require.Equal(t, 2, len(copied.builderPendingWithdrawals))
require.DeepEqual(t, wd, copied.builderPendingWithdrawals[0])
require.DeepEqual(t, appended, copied.builderPendingWithdrawals[1])
require.DeepEqual(t, wd, st.builderPendingWithdrawals[0])
require.Equal(t, uint(1), st.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
require.Equal(t, uint(1), copied.sharedFieldReferences[types.BuilderPendingWithdrawals].Refs())
}
func TestAppendBuilderPendingWithdrawals(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.BuilderPendingWithdrawals: stateutil.NewRef(1),
},
builderPendingWithdrawals: make([]*ethpb.BuilderPendingWithdrawal, 0),
}
first := &ethpb.BuilderPendingWithdrawal{Amount: 1}
second := &ethpb.BuilderPendingWithdrawal{Amount: 2}
require.NoError(t, st.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{first, second}))
require.Equal(t, 2, len(st.builderPendingWithdrawals))
require.DeepEqual(t, first, st.builderPendingWithdrawals[0])
require.DeepEqual(t, second, st.builderPendingWithdrawals[1])
require.Equal(t, true, st.dirtyFields[types.BuilderPendingWithdrawals])
}
func TestAppendBuilderPendingWithdrawals_UnsupportedVersion(t *testing.T) {
st := &BeaconState{version: version.Electra}
err := st.AppendBuilderPendingWithdrawals([]*ethpb.BuilderPendingWithdrawal{{}})
require.ErrorContains(t, "AppendBuilderPendingWithdrawals", err)
}
func TestUpdateExecutionPayloadAvailabilityAtIndex_SetAndClear(t *testing.T) {
st := newGloasStateWithAvailability(t, make([]byte, 1024))
otherIdx := uint64(8) // byte 1, bit 0
idx := uint64(9) // byte 1, bit 1
require.NoError(t, st.UpdateExecutionPayloadAvailabilityAtIndex(otherIdx, 1))
require.Equal(t, byte(0x01), st.executionPayloadAvailability[1])
require.NoError(t, st.UpdateExecutionPayloadAvailabilityAtIndex(idx, 1))
require.Equal(t, byte(0x03), st.executionPayloadAvailability[1])
require.NoError(t, st.UpdateExecutionPayloadAvailabilityAtIndex(idx, 0))
require.Equal(t, byte(0x01), st.executionPayloadAvailability[1])
}
func TestUpdateExecutionPayloadAvailabilityAtIndex_OutOfRange(t *testing.T) {
st := newGloasStateWithAvailability(t, make([]byte, 1024))
idx := uint64(len(st.executionPayloadAvailability)) * 8
err := st.UpdateExecutionPayloadAvailabilityAtIndex(idx, 1)
require.ErrorContains(t, "out of range", err)
for _, b := range st.executionPayloadAvailability {
if b != 0 {
t.Fatalf("execution payload availability mutated on error")
}
}
}
func TestDecreaseBuilderBalance(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.DecreaseBuilderBalance(0, 1)
require.ErrorContains(t, "DecreaseBuilderBalance", err)
})
t.Run("decreases and saturates", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
dirtyIndices: make(map[types.FieldIndex][]uint64),
rebuildTrie: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.Builders: stateutil.NewRef(1),
},
builders: []*ethpb.Builder{
{Balance: 10},
},
}
require.NoError(t, st.DecreaseBuilderBalance(0, 3))
require.Equal(t, uint64(7), uint64(st.builders[0].Balance))
require.Equal(t, true, st.dirtyFields[types.Builders])
require.NoError(t, st.DecreaseBuilderBalance(0, 100))
require.Equal(t, uint64(0), uint64(st.builders[0].Balance))
})
t.Run("copy-on-write preserves shared state", func(t *testing.T) {
sharedRef := stateutil.NewRef(2)
sharedBuilders := []*ethpb.Builder{
{Balance: 10},
}
st1 := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
dirtyIndices: make(map[types.FieldIndex][]uint64),
rebuildTrie: make(map[types.FieldIndex]bool),
sharedFieldReferences: map[types.FieldIndex]*stateutil.Reference{
types.Builders: sharedRef,
},
builders: sharedBuilders,
}
st2 := &BeaconState{
builders: sharedBuilders,
}
require.NoError(t, st1.DecreaseBuilderBalance(0, 3))
require.Equal(t, uint64(7), uint64(st1.builders[0].Balance))
require.Equal(t, uint64(10), uint64(st2.builders[0].Balance))
})
}
func newGloasStateWithAvailability(t *testing.T, availability []byte) *BeaconState {
t.Helper()
st, err := InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
ExecutionPayloadAvailability: availability,
})
require.NoError(t, err)
return st.(*BeaconState)
}

View File

@@ -16,7 +16,7 @@ go_library(
"//beacon-chain/state:__subpackages__",
],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -3,7 +3,7 @@ package testing
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -31,7 +31,7 @@ func GeneratePendingDeposit(t *testing.T, key common.SecretKey, amount uint64, w
Amount: dm.Amount,
Signature: sig.Marshal(),
}
valid, err := blocks.IsValidDepositSignature(depositData)
valid, err := helpers.IsValidDepositSignature(depositData)
require.NoError(t, err)
require.Equal(t, true, valid)
return &ethpb.PendingDeposit{

View File

@@ -148,7 +148,7 @@ func (b batch) ensureParent(expected [32]byte) error {
func (b batch) blockRequest() *eth.BeaconBlocksByRangeRequest {
return &eth.BeaconBlocksByRangeRequest{
StartSlot: b.begin,
Count: uint64(b.end - b.begin),
Count: uint64(b.end.FlooredSubSlot(b.begin)),
Step: 1,
}
}
@@ -156,7 +156,7 @@ func (b batch) blockRequest() *eth.BeaconBlocksByRangeRequest {
func (b batch) blobRequest() *eth.BlobSidecarsByRangeRequest {
return &eth.BlobSidecarsByRangeRequest{
StartSlot: b.begin,
Count: uint64(b.end - b.begin),
Count: uint64(b.end.FlooredSubSlot(b.begin)),
}
}
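FlooredSubSlot keeps the request count from wrapping around when end is smaller than begin. A generic sketch of the same floored (saturating) subtraction on plain uint64s, with the zero-on-underflow behaviour inferred from the tests below:
```
package main

import "fmt"

// flooredSub returns a - b, flooring at zero instead of wrapping around,
// matching the count the request tests expect when end < begin.
func flooredSub(a, b uint64) uint64 {
	if b > a {
		return 0
	}
	return a - b
}

func main() {
	fmt.Println(flooredSub(200, 100)) // 100
	fmt.Println(flooredSub(100, 200)) // 0 (plain subtraction would wrap to a huge count)
}
```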

View File

@@ -10,6 +10,93 @@ import (
"github.com/pkg/errors"
)
func TestBlockRequest(t *testing.T) {
cases := []struct {
name string
begin primitives.Slot
end primitives.Slot
expectedCount uint64
}{
{
name: "normal case",
begin: 100,
end: 200,
expectedCount: 100,
},
{
name: "end equals begin",
begin: 100,
end: 100,
expectedCount: 0,
},
{
name: "end less than begin (would underflow without check)",
begin: 200,
end: 100,
expectedCount: 0,
},
{
name: "zero values",
begin: 0,
end: 0,
expectedCount: 0,
},
{
name: "single slot",
begin: 0,
end: 1,
expectedCount: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
b := batch{begin: tc.begin, end: tc.end}
req := b.blockRequest()
require.Equal(t, tc.expectedCount, req.Count)
require.Equal(t, tc.begin, req.StartSlot)
require.Equal(t, uint64(1), req.Step)
})
}
}
func TestBlobRequest(t *testing.T) {
cases := []struct {
name string
begin primitives.Slot
end primitives.Slot
expectedCount uint64
}{
{
name: "normal case",
begin: 100,
end: 200,
expectedCount: 100,
},
{
name: "end equals begin",
begin: 100,
end: 100,
expectedCount: 0,
},
{
name: "end less than begin (would underflow without check)",
begin: 200,
end: 100,
expectedCount: 0,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
b := batch{begin: tc.begin, end: tc.end}
req := b.blobRequest()
require.Equal(t, tc.expectedCount, req.Count)
require.Equal(t, tc.begin, req.StartSlot)
})
}
}
func TestSortBatchDesc(t *testing.T) {
orderIn := []primitives.Slot{100, 10000, 1}
orderOut := []primitives.Slot{10000, 100, 1}

View File

@@ -4,9 +4,6 @@ import (
"context"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing"
@@ -56,32 +53,6 @@ func (s *Service) verifierRoutine() {
}
}
// A routine that runs in the background to perform batch
// KZG verifications by draining the channel and processing all pending requests.
func (s *Service) kzgVerifierRoutine() {
for {
kzgBatch := make([]*kzgVerifier, 0, 1)
select {
case <-s.ctx.Done():
return
case kzg := <-s.kzgChan:
kzgBatch = append(kzgBatch, kzg)
}
for {
select {
case <-s.ctx.Done():
return
case kzg := <-s.kzgChan:
kzgBatch = append(kzgBatch, kzg)
continue
default:
verifyKzgBatch(kzgBatch)
}
break
}
}
}
func (s *Service) validateWithBatchVerifier(ctx context.Context, message string, set *bls.SignatureBatch) (pubsub.ValidationResult, error) {
_, span := trace.StartSpan(ctx, "sync.validateWithBatchVerifier")
defer span.End()
@@ -154,71 +125,3 @@ func performBatchAggregation(aggSet *bls.SignatureBatch) (*bls.SignatureBatch, e
}
return aggSet, nil
}
func (s *Service) validateWithKzgBatchVerifier(ctx context.Context, dataColumns []blocks.RODataColumn) (pubsub.ValidationResult, error) {
_, span := trace.StartSpan(ctx, "sync.validateWithKzgBatchVerifier")
defer span.End()
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
resChan := make(chan error, 1)
verificationSet := &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
select {
case s.kzgChan <- verificationSet:
case <-ctx.Done():
return pubsub.ValidationIgnore, ctx.Err()
}
select {
case <-ctx.Done():
return pubsub.ValidationIgnore, ctx.Err() // parent context canceled, give up
case err := <-resChan:
if err != nil {
log.WithError(err).Trace("Could not perform batch verification")
tracing.AnnotateError(span, err)
return s.validateUnbatchedColumnsKzg(ctx, dataColumns)
}
}
return pubsub.ValidationAccept, nil
}
func (s *Service) validateUnbatchedColumnsKzg(ctx context.Context, columns []blocks.RODataColumn) (pubsub.ValidationResult, error) {
_, span := trace.StartSpan(ctx, "sync.validateUnbatchedColumnsKzg")
defer span.End()
start := time.Now()
if err := peerdas.VerifyDataColumnsSidecarKZGProofs(columns); err != nil {
err = errors.Wrap(err, "could not verify")
tracing.AnnotateError(span, err)
return pubsub.ValidationReject, err
}
verification.DataColumnBatchKZGVerificationHistogram.WithLabelValues("fallback").Observe(float64(time.Since(start).Milliseconds()))
return pubsub.ValidationAccept, nil
}
func verifyKzgBatch(kzgBatch []*kzgVerifier) {
if len(kzgBatch) == 0 {
return
}
allDataColumns := make([]blocks.RODataColumn, 0, len(kzgBatch))
for _, kzgVerifier := range kzgBatch {
allDataColumns = append(allDataColumns, kzgVerifier.dataColumns...)
}
var verificationErr error
start := time.Now()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(allDataColumns)
if err != nil {
verificationErr = errors.Wrap(err, "batch KZG verification failed")
} else {
verification.DataColumnBatchKZGVerificationHistogram.WithLabelValues("batch").Observe(float64(time.Since(start).Milliseconds()))
}
// Send the same result to all verifiers in the batch
for _, verifier := range kzgBatch {
verifier.resChan <- verificationErr
}
}

View File

@@ -668,7 +668,7 @@ func populateBlock(bw *blocks.BlockWithROSidecars, blobs []blocks.ROBlob, req *p
func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) error {
missStr := make([]string, 0, len(missing))
for k := range missing {
for _, k := range missing {
missStr = append(missStr, fmt.Sprintf("%#x", k))
}
return errors.Wrapf(errMissingBlobsForBlockCommitments,

View File

@@ -226,8 +226,6 @@ func (s *Service) Start() {
// fetchOriginSidecars fetches origin sidecars
func (s *Service) fetchOriginSidecars(peers []peer.ID) error {
const delay = 10 * time.Second // The delay between each attempt to fetch origin data column sidecars
blockRoot, err := s.cfg.DB.OriginCheckpointBlockRoot(s.ctx)
if errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
return nil
@@ -260,7 +258,7 @@ func (s *Service) fetchOriginSidecars(peers []peer.ID) error {
blockVersion := roBlock.Version()
if blockVersion >= version.Fulu {
if err := s.fetchOriginDataColumnSidecars(roBlock, delay); err != nil {
if err := s.fetchOriginDataColumnSidecars(roBlock); err != nil {
return errors.Wrap(err, "fetch origin columns")
}
return nil
@@ -414,7 +412,7 @@ func (s *Service) fetchOriginBlobSidecars(pids []peer.ID, rob blocks.ROBlock) er
return fmt.Errorf("no connected peer able to provide blobs for checkpoint sync block %#x", r)
}
func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock, delay time.Duration) error {
func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock) error {
const (
errorMessage = "Failed to fetch origin data column sidecars"
warningIteration = 10
@@ -501,7 +499,6 @@ func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock, delay ti
log := log.WithFields(logrus.Fields{
"attempt": attempt,
"missingIndices": helpers.SortedPrettySliceFromMap(missingIndicesByRoot[root]),
"delay": delay,
})
logFunc := log.Debug

View File

@@ -687,10 +687,7 @@ func TestFetchOriginColumns(t *testing.T) {
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
const (
delay = 0
blobCount = 1
)
const blobCount = 1
t.Run("block has no commitments", func(t *testing.T) {
service := new(Service)
@@ -702,7 +699,7 @@ func TestFetchOriginColumns(t *testing.T) {
roBlock, err := blocks.NewROBlock(signedBlock)
require.NoError(t, err)
err = service.fetchOriginDataColumnSidecars(roBlock, delay)
err = service.fetchOriginDataColumnSidecars(roBlock)
require.NoError(t, err)
})
@@ -724,7 +721,7 @@ func TestFetchOriginColumns(t *testing.T) {
err := storage.Save(verifiedSidecars)
require.NoError(t, err)
err = service.fetchOriginDataColumnSidecars(roBlock, delay)
err = service.fetchOriginDataColumnSidecars(roBlock)
require.NoError(t, err)
})
@@ -829,7 +826,7 @@ func TestFetchOriginColumns(t *testing.T) {
attempt++
})
err = service.fetchOriginDataColumnSidecars(roBlock, delay)
err = service.fetchOriginDataColumnSidecars(roBlock)
require.NoError(t, err)
// Check all corresponding sidecars are saved in the store.

View File

@@ -1,339 +1,14 @@
package sync
import (
"context"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
pubsub "github.com/libp2p/go-libp2p-pubsub"
)
func TestValidateWithKzgBatchVerifier(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
tests := []struct {
name string
dataColumns []blocks.RODataColumn
expectedResult pubsub.ValidationResult
expectError bool
}{
{
name: "single valid data column",
dataColumns: createValidTestDataColumns(t, 1),
expectedResult: pubsub.ValidationAccept,
expectError: false,
},
{
name: "multiple valid data columns",
dataColumns: createValidTestDataColumns(t, 3),
expectedResult: pubsub.ValidationAccept,
expectError: false,
},
{
name: "single invalid data column",
dataColumns: createInvalidTestDataColumns(t, 1),
expectedResult: pubsub.ValidationReject,
expectError: true,
},
{
name: "empty data column slice",
dataColumns: []blocks.RODataColumn{},
expectedResult: pubsub.ValidationAccept,
expectError: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
result, err := service.validateWithKzgBatchVerifier(ctx, tt.dataColumns)
require.Equal(t, tt.expectedResult, result)
if tt.expectError {
assert.NotNil(t, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestVerifierRoutine(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("processes single request", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
dataColumns := createValidTestDataColumns(t, 1)
resChan := make(chan error, 1)
service.kzgChan <- &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
select {
case err := <-resChan:
require.NoError(t, err)
case <-time.After(time.Second):
t.Fatal("timeout waiting for verification result")
}
})
t.Run("batches multiple requests", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
const numRequests = 5
resChans := make([]chan error, numRequests)
for i := range numRequests {
dataColumns := createValidTestDataColumns(t, 1)
resChan := make(chan error, 1)
resChans[i] = resChan
service.kzgChan <- &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
}
for i := range numRequests {
select {
case err := <-resChans[i]:
require.NoError(t, err)
case <-time.After(time.Second):
t.Fatalf("timeout waiting for verification result %d", i)
}
}
})
t.Run("context cancellation stops routine", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
routineDone := make(chan struct{})
go func() {
service.kzgVerifierRoutine()
close(routineDone)
}()
cancel()
select {
case <-routineDone:
case <-time.After(time.Second):
t.Fatal("timeout waiting for routine to exit")
}
})
}
func TestVerifyKzgBatch(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("all valid data columns succeed", func(t *testing.T) {
dataColumns := createValidTestDataColumns(t, 3)
resChan := make(chan error, 1)
kzgVerifiers := []*kzgVerifier{{dataColumns: dataColumns, resChan: resChan}}
verifyKzgBatch(kzgVerifiers)
select {
case err := <-resChan:
require.NoError(t, err)
case <-time.After(time.Second):
t.Fatal("timeout waiting for batch verification")
}
})
t.Run("invalid proofs fail entire batch", func(t *testing.T) {
validColumns := createValidTestDataColumns(t, 1)
invalidColumns := createInvalidTestDataColumns(t, 1)
allColumns := append(validColumns, invalidColumns...)
resChan := make(chan error, 1)
kzgVerifiers := []*kzgVerifier{{dataColumns: allColumns, resChan: resChan}}
verifyKzgBatch(kzgVerifiers)
select {
case err := <-resChan:
assert.NotNil(t, err)
case <-time.After(time.Second):
t.Fatal("timeout waiting for batch verification")
}
})
t.Run("empty batch handling", func(t *testing.T) {
verifyKzgBatch([]*kzgVerifier{})
})
}
func TestKzgBatchVerifierConcurrency(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
const numGoroutines = 10
const numRequestsPerGoroutine = 5
var wg sync.WaitGroup
wg.Add(numGoroutines)
// Multiple goroutines sending verification requests simultaneously
for i := range numGoroutines {
go func(goroutineID int) {
defer wg.Done()
for range numRequestsPerGoroutine {
dataColumns := createValidTestDataColumns(t, 1)
result, err := service.validateWithKzgBatchVerifier(ctx, dataColumns)
require.Equal(t, pubsub.ValidationAccept, result)
require.NoError(t, err)
}
}(i)
}
wg.Wait()
}
func TestKzgBatchVerifierFallback(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("fallback handles mixed valid/invalid batch correctly", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
validColumns := createValidTestDataColumns(t, 1)
invalidColumns := createInvalidTestDataColumns(t, 1)
result, err := service.validateWithKzgBatchVerifier(ctx, validColumns)
require.Equal(t, pubsub.ValidationAccept, result)
require.NoError(t, err)
result, err = service.validateWithKzgBatchVerifier(ctx, invalidColumns)
require.Equal(t, pubsub.ValidationReject, result)
assert.NotNil(t, err)
})
t.Run("empty data columns fallback", func(t *testing.T) {
ctx := t.Context()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier, 100),
}
go service.kzgVerifierRoutine()
result, err := service.validateWithKzgBatchVerifier(ctx, []blocks.RODataColumn{})
require.Equal(t, pubsub.ValidationAccept, result)
require.NoError(t, err)
})
}
func TestValidateWithKzgBatchVerifier_DeadlockOnTimeout(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.SecondsPerSlot = 0
params.OverrideBeaconConfig(cfg)
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
service := &Service{
ctx: ctx,
kzgChan: make(chan *kzgVerifier),
}
go service.kzgVerifierRoutine()
result, err := service.validateWithKzgBatchVerifier(context.Background(), nil)
require.Equal(t, pubsub.ValidationIgnore, result)
require.ErrorIs(t, err, context.DeadlineExceeded)
done := make(chan struct{})
go func() {
_, _ = service.validateWithKzgBatchVerifier(context.Background(), nil)
close(done)
}()
select {
case <-done:
case <-time.After(500 * time.Millisecond):
t.Fatal("validateWithKzgBatchVerifier blocked")
}
}
func TestValidateWithKzgBatchVerifier_ContextCanceledBeforeSend(t *testing.T) {
cancelledCtx, cancel := context.WithCancel(t.Context())
cancel()
service := &Service{
ctx: context.Background(),
kzgChan: make(chan *kzgVerifier),
}
done := make(chan struct{})
go func() {
result, err := service.validateWithKzgBatchVerifier(cancelledCtx, nil)
require.Equal(t, pubsub.ValidationIgnore, result)
require.ErrorIs(t, err, context.Canceled)
close(done)
}()
select {
case <-done:
case <-time.After(500 * time.Millisecond):
t.Fatal("validateWithKzgBatchVerifier did not return after context cancellation")
}
select {
case <-service.kzgChan:
t.Fatal("verificationSet was sent to kzgChan despite canceled context")
default:
}
}
func createValidTestDataColumns(t *testing.T, count int) []blocks.RODataColumn {
_, roSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, count)
if len(roSidecars) >= count {

View File

@@ -77,8 +77,13 @@ func SendBeaconBlocksByRangeRequest(
}
defer closeStream(stream, log)
// Cap the slice capacity to MaxRequestBlock to prevent panic from invalid Count values.
// This guards against upstream bugs that may produce astronomically large Count values
// (e.g., due to unsigned integer underflow).
sliceCap := min(req.Count, params.MaxRequestBlock(slots.ToEpoch(tor.CurrentSlot())))
// Augment block processing function, if non-nil block processor is provided.
blocks := make([]interfaces.ReadOnlySignedBeaconBlock, 0, req.Count)
blocks := make([]interfaces.ReadOnlySignedBeaconBlock, 0, sliceCap)
process := func(blk interfaces.ReadOnlySignedBeaconBlock) error {
blocks = append(blocks, blk)
if blockProcessor != nil {

View File

@@ -168,7 +168,6 @@ type Service struct {
syncContributionBitsOverlapLock sync.RWMutex
syncContributionBitsOverlapCache *lru.Cache
signatureChan chan *signatureVerifier
kzgChan chan *kzgVerifier
clockWaiter startup.ClockWaiter
initialSyncComplete chan struct{}
verifierWaiter *verification.InitializerWaiter
@@ -209,10 +208,7 @@ func NewService(ctx context.Context, opts ...Option) *Service {
}
// Initialize signature channel with configured limit
r.signatureChan = make(chan *signatureVerifier, r.cfg.batchVerifierLimit)
// Initialize KZG channel with fixed buffer size of 100.
// This buffer size is designed to handle burst traffic of data column gossip messages:
// - Data columns arrive less frequently than attestations (default batchVerifierLimit=1000)
r.kzgChan = make(chan *kzgVerifier, 100)
// Correctly remove it from our seen pending block map.
// The eviction method always assumes that the mutex is held.
r.slotToPendingBlocks.OnEvicted(func(s string, i any) {
@@ -265,7 +261,6 @@ func (s *Service) Start() {
s.newColumnsVerifier = newDataColumnsVerifierFromInitializer(v)
go s.verifierRoutine()
go s.kzgVerifierRoutine()
go s.startDiscoveryAndSubscriptions()
go s.processDataColumnLogs()

View File

@@ -144,12 +144,9 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
}
// [REJECT] The sidecar's column data is valid as verified by `verify_data_column_sidecar_kzg_proofs(sidecar)`.
validationResult, err := s.validateWithKzgBatchVerifier(ctx, roDataColumns)
if validationResult != pubsub.ValidationAccept {
return validationResult, err
if err := verifier.SidecarKzgProofVerified(); err != nil {
return pubsub.ValidationReject, err
}
// Mark KZG verification as satisfied since we did it via batch verifier
verifier.SatisfyRequirement(verification.RequireSidecarKzgProofVerified)
// [IGNORE] The sidecar is the first sidecar for the tuple `(block_header.slot, block_header.proposer_index, sidecar.index)`
// with valid header signature, sidecar inclusion proof, and kzg proof.

View File

@@ -71,10 +71,7 @@ func TestValidateDataColumn(t *testing.T) {
ctx: ctx,
newColumnsVerifier: newDataColumnsVerifier,
seenDataColumnCache: newSlotAwareCache(seenDataColumnSize),
kzgChan: make(chan *kzgVerifier, 100),
}
// Start the KZG verifier routine for batch verification
go service.kzgVerifierRoutine()
// Encode a `beaconBlock` message instead of expected.
buf := new(bytes.Buffer)

View File

@@ -0,0 +1,3 @@
### Ignored
- Add `NewBeaconStateGloas()`.

View File

@@ -0,0 +1,3 @@
### Added
- Flag `--log.vmodule` to set per-package verbosity levels for logging.

View File

@@ -0,0 +1,3 @@
### Ignored
- Small touch-ups on the state diff code.

View File

@@ -0,0 +1,2 @@
### Fixed
- Fixed a typo: AggregrateDueBPS -> AggregateDueBPS.

View File

@@ -0,0 +1,3 @@
### Fixed
- Prevent authentication bypass on direct `/v2/validator/*` endpoints by enforcing auth checks for non-public routes.

View File

@@ -0,0 +1,3 @@
### Ignored
- Delayed the head evaluator check to mid-epoch for e2e.

View File

@@ -0,0 +1,3 @@
### Ignored
- Updated phase 0 constants for ethspecify.

View File

@@ -0,0 +1,3 @@
### Removed
- Batching of KZG verification for data column sidecars received via gossip.

View File

@@ -0,0 +1,2 @@
### Removed
- The `--disable-get-blobs-v2` flag from the help output.

View File

@@ -0,0 +1,3 @@
### Removed
- Remove unused `delay` parameter from `fetchOriginDataColumnSidecars` function.

View File

@@ -0,0 +1,3 @@
### Changed
- Added defensive checks to prevent slot underflows and oversized allocations in block batch requests.

View File

@@ -0,0 +1,2 @@
### Added
- Close opened file in data_column.go

View File

@@ -0,0 +1,3 @@
### Added
- Added missing Fulu beacon config values so that presets are not missing from the `/eth/v1/config/spec` beacon API.

changelog/satushh-log.md
View File

@@ -0,0 +1,3 @@
### Changed
- Log commitments instead of indices in `missingCommitError`.

View File

@@ -0,0 +1,2 @@
### Added
- Implement modified proposer slashing for Gloas.

View File

@@ -0,0 +1,3 @@
### Added
- Add slot processing with execution payload availability updates.

View File

@@ -0,0 +1,2 @@
### Added
- Add pending payments processing and the quorum threshold calculation, plus spectests and state hooks (rotate/append).

View File

@@ -0,0 +1,2 @@
### Ignored
- Move deposit helpers from `beacon-chain/core/blocks` to `beacon-chain/core/helpers` (refactor only).

View File

@@ -0,0 +1,2 @@
### Ignored
- Refactor expected withdrawals into reusable helpers for future forks.

View File

@@ -0,0 +1,3 @@
### Added
- Added shell completion support for `beacon-chain` and `validator` CLI tools.

View File

@@ -3,6 +3,8 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"completion.go",
"completion_scripts.go",
"config.go",
"defaults.go",
"flags.go",
@@ -28,6 +30,7 @@ go_test(
name = "go_default_test",
size = "small",
srcs = [
"completion_test.go",
"config_test.go",
"flags_test.go",
"helpers_test.go",
@@ -39,6 +42,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],

View File

@@ -364,7 +364,8 @@ var (
}
// DisableGetBlobsV2 disables the engine_getBlobsV2 usage.
DisableGetBlobsV2 = &cli.BoolFlag{
Name: "disable-get-blobs-v2",
Usage: "Disables the engine_getBlobsV2 usage.",
Name: "disable-get-blobs-v2",
Usage: "Disables the engine_getBlobsV2 usage.",
Hidden: true,
}
)

View File

@@ -8,6 +8,7 @@ import (
"os"
"path/filepath"
runtimeDebug "runtime/debug"
"strings"
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/node"
@@ -113,6 +114,7 @@ var appFlags = []cli.Flag{
cmd.PubsubQueueSize,
cmd.DataDirFlag,
cmd.VerbosityFlag,
cmd.LogVModuleFlag,
cmd.EnableTracingFlag,
cmd.TracingProcessNameFlag,
cmd.TracingEndpointFlag,
@@ -171,13 +173,22 @@ func before(ctx *cli.Context) error {
return errors.Wrap(err, "failed to load flags from config file")
}
// determine log verbosity
// determine default log verbosity
verbosity := ctx.String(cmd.VerbosityFlag.Name)
verbosityLevel, err := logrus.ParseLevel(verbosity)
if err != nil {
return errors.Wrap(err, "failed to parse log verbosity")
}
logs.SetLoggingLevel(verbosityLevel)
// determine per-package verbosity. if not set, maxLevel will be 0.
vmoduleInput := strings.Join(ctx.StringSlice(cmd.LogVModuleFlag.Name), ",")
vmodule, maxLevel, err := cmd.ParseVModule(vmoduleInput)
if err != nil {
return errors.Wrap(err, "failed to parse log vmodule")
}
// set the global logging level to allow for the highest verbosity requested
logs.SetLoggingLevel(max(verbosityLevel, maxLevel))
format := ctx.String(cmd.LogFormat.Name)
switch format {
@@ -191,11 +202,13 @@ func before(ctx *cli.Context) error {
formatter.FullTimestamp = true
formatter.ForceFormatting = true
formatter.ForceColors = true
formatter.VModule = vmodule
formatter.BaseVerbosity = verbosityLevel
logrus.AddHook(&logs.WriterHook{
Formatter: formatter,
Writer: os.Stderr,
AllowedLevels: logrus.AllLevels[:verbosityLevel+1],
AllowedLevels: logrus.AllLevels[:max(verbosityLevel, maxLevel)+1],
})
case "fluentd":
f := joonix.NewFormatter()
@@ -219,7 +232,7 @@ func before(ctx *cli.Context) error {
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel, vmodule); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}
@@ -271,9 +284,11 @@ func main() {
Commands: []*cli.Command{
dbcommands.Commands,
jwtcommands.Commands,
cmd.CompletionCommand("beacon-chain"),
},
Flags: appFlags,
Before: before,
Flags: appFlags,
Before: before,
EnableBashCompletion: true,
}
defer func() {

View File

@@ -169,7 +169,6 @@ var appHelpFlagGroups = []flagGroup{
flags.ExecutionJWTSecretFlag,
flags.JwtId,
flags.InteropMockEth1DataVotesFlag,
flags.DisableGetBlobsV2,
},
},
{ // Flags relevant to configuring beacon chain monitoring.
@@ -201,6 +200,7 @@ var appHelpFlagGroups = []flagGroup{
cmd.LogFileName,
cmd.VerbosityFlag,
flags.DisableEphemeralLogFile,
cmd.LogVModuleFlag,
},
},
{ // Feature flags.

View File

@@ -86,7 +86,7 @@ func main() {
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format, level); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format, level, map[string]logrus.Level{}); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}

cmd/completion.go
View File

@@ -0,0 +1,63 @@
package cmd
import (
"fmt"
"github.com/urfave/cli/v2"
)
// CompletionCommand returns the completion command for the given binary name.
// The binaryName parameter should be "beacon-chain" or "validator".
func CompletionCommand(binaryName string) *cli.Command {
return &cli.Command{
Name: "completion",
Category: "completion",
Usage: "Generate shell completion scripts",
Description: fmt.Sprintf(`Generate shell completion scripts for bash, zsh, or fish.
To load completions:
Bash:
$ source <(%[1]s completion bash)
# To load completions for each session, execute once:
$ %[1]s completion bash > /etc/bash_completion.d/%[1]s
Zsh:
# To load completions for each session, execute once:
$ %[1]s completion zsh > "${fpath[1]}/_%[1]s"
# You may need to start a new shell for completions to take effect.
Fish:
$ %[1]s completion fish | source
# To load completions for each session, execute once:
$ %[1]s completion fish > ~/.config/fish/completions/%[1]s.fish
`, binaryName),
Subcommands: []*cli.Command{
{
Name: "bash",
Usage: "Generate bash completion script",
Action: func(_ *cli.Context) error {
fmt.Println(bashCompletionScript(binaryName))
return nil
},
},
{
Name: "zsh",
Usage: "Generate zsh completion script",
Action: func(_ *cli.Context) error {
fmt.Println(zshCompletionScript(binaryName))
return nil
},
},
{
Name: "fish",
Usage: "Generate fish completion script",
Action: func(_ *cli.Context) error {
fmt.Println(fishCompletionScript(binaryName))
return nil
},
},
},
}
}

cmd/completion_scripts.go
View File

@@ -0,0 +1,99 @@
package cmd
import (
"fmt"
"strings"
)
// bashCompletionScript returns the bash completion script for the given binary.
func bashCompletionScript(binaryName string) string {
// Convert hyphens to underscores for bash function names
funcName := strings.ReplaceAll(binaryName, "-", "_")
return fmt.Sprintf(`#!/bin/bash
_%[1]s_completions() {
local cur prev words cword opts
COMPREPLY=()
# Use bash-completion if available, otherwise set variables directly
if declare -F _init_completion >/dev/null 2>&1; then
_init_completion -n "=:" || return
else
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
words=("${COMP_WORDS[@]}")
cword=$COMP_CWORD
fi
# Build command array for completion - flag must be at the END
local -a requestComp
if [[ "$cur" == "-"* ]]; then
requestComp=("${COMP_WORDS[@]:0:COMP_CWORD}" "$cur" --generate-bash-completion)
else
requestComp=("${COMP_WORDS[@]:0:COMP_CWORD}" --generate-bash-completion)
fi
opts=$("${requestComp[@]}" 2>/dev/null)
COMPREPLY=($(compgen -W "${opts}" -- "${cur}"))
return 0
}
complete -o bashdefault -o default -o nospace -F _%[1]s_completions %[2]s
`, funcName, binaryName)
}
// zshCompletionScript returns the zsh completion script for the given binary.
func zshCompletionScript(binaryName string) string {
// Convert hyphens to underscores for zsh function names
funcName := strings.ReplaceAll(binaryName, "-", "_")
return fmt.Sprintf(`#compdef %[2]s
_%[1]s() {
local curcontext="$curcontext" ret=1
local -a completions
# Build command array with --generate-bash-completion at the END
local -a requestComp
if [[ "${words[CURRENT]}" == -* ]]; then
requestComp=(${words[1,CURRENT]} --generate-bash-completion)
else
requestComp=(${words[1,CURRENT-1]} --generate-bash-completion)
fi
completions=($("${requestComp[@]}" 2>/dev/null))
if [[ ${#completions[@]} -gt 0 ]]; then
_describe -t commands '%[2]s' completions && ret=0
fi
# Fallback to file completion
_files && ret=0
return ret
}
compdef _%[1]s %[2]s
`, funcName, binaryName)
}
// fishCompletionScript returns the fish completion script for the given binary.
func fishCompletionScript(binaryName string) string {
// Convert hyphens to underscores for fish function names
funcName := strings.ReplaceAll(binaryName, "-", "_")
return fmt.Sprintf(`# Fish completion for %[2]s
function __fish_%[1]s_complete
set -l args (commandline -opc)
set -l cur (commandline -ct)
# Build command with --generate-bash-completion at the END
if string match -q -- '-*' "$cur"
%[2]s $args $cur --generate-bash-completion 2>/dev/null
else
%[2]s $args --generate-bash-completion 2>/dev/null
end
end
complete -c %[2]s -f -a "(__fish_%[1]s_complete)"
`, funcName, binaryName)
}

cmd/completion_test.go
View File

@@ -0,0 +1,105 @@
package cmd
import (
"strings"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/urfave/cli/v2"
)
func TestCompletionCommand(t *testing.T) {
t.Run("creates command with correct name", func(t *testing.T) {
cmd := CompletionCommand("beacon-chain")
require.Equal(t, "completion", cmd.Name)
})
t.Run("has three subcommands", func(t *testing.T) {
cmd := CompletionCommand("beacon-chain")
require.Equal(t, 3, len(cmd.Subcommands))
names := make([]string, len(cmd.Subcommands))
for i, sub := range cmd.Subcommands {
names[i] = sub.Name
}
assert.DeepEqual(t, []string{"bash", "zsh", "fish"}, names)
})
t.Run("description contains binary name", func(t *testing.T) {
cmd := CompletionCommand("validator")
assert.Equal(t, true, strings.Contains(cmd.Description, "validator"))
})
}
func TestBashCompletionScript(t *testing.T) {
script := bashCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(script, "beacon-chain"), "script should contain binary name")
assert.Equal(t, true, strings.Contains(script, "_beacon_chain_completions"), "script should contain function name with underscores")
assert.Equal(t, true, strings.Contains(script, "complete -o bashdefault"), "script should contain complete command")
assert.Equal(t, true, strings.Contains(script, "--generate-bash-completion"), "script should use generate-bash-completion flag")
}
func TestZshCompletionScript(t *testing.T) {
script := zshCompletionScript("validator")
assert.Equal(t, true, strings.Contains(script, "#compdef validator"), "script should contain compdef directive")
assert.Equal(t, true, strings.Contains(script, "_validator"), "script should contain function name")
assert.Equal(t, true, strings.Contains(script, "--generate-bash-completion"), "script should use generate-bash-completion flag")
}
func TestFishCompletionScript(t *testing.T) {
script := fishCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(script, "complete -c beacon-chain"), "script should contain complete command")
assert.Equal(t, true, strings.Contains(script, "__fish_beacon_chain_complete"), "script should contain function name with underscores")
assert.Equal(t, true, strings.Contains(script, "--generate-bash-completion"), "script should use generate-bash-completion flag")
}
func TestScriptFunctionNames(t *testing.T) {
// Test that hyphens are converted to underscores in function names
bashScript := bashCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(bashScript, "_beacon_chain_completions"))
assert.Equal(t, false, strings.Contains(bashScript, "_beacon-chain_completions"))
zshScript := zshCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(zshScript, "_beacon_chain"))
fishScript := fishCompletionScript("beacon-chain")
assert.Equal(t, true, strings.Contains(fishScript, "__fish_beacon_chain_complete"))
}
func TestCompletionSubcommandActions(t *testing.T) {
// Test that Action functions execute without errors
cmd := CompletionCommand("beacon-chain")
tests := []struct {
name string
subcommand string
}{
{"bash action executes", "bash"},
{"zsh action executes", "zsh"},
{"fish action executes", "fish"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var subCmd *cli.Command
for _, sub := range cmd.Subcommands {
if sub.Name == tt.subcommand {
subCmd = sub
break
}
}
require.NotNil(t, subCmd, "subcommand should exist")
require.NotNil(t, subCmd.Action, "subcommand should have an action")
// Action should not return an error; use a real cli.Context
app := &cli.App{}
ctx := cli.NewContext(app, nil, nil)
err := subCmd.Action(ctx)
require.NoError(t, err)
})
}
}
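
For orientation, registering the completion command in a urfave/cli app follows the same shape as the beacon-chain and validator wiring elsewhere in this diff; a minimal sketch with an illustrative binary name:
```go
package main

import (
	"log"
	"os"

	"github.com/OffchainLabs/prysm/v7/cmd"
	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Name:                 "example-binary", // illustrative name
		Commands:             []*cli.Command{cmd.CompletionCommand("example-binary")},
		EnableBashCompletion: true, // required so --generate-bash-completion works
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```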

View File

@@ -36,6 +36,11 @@ var (
Usage: "Logging verbosity. (trace, debug, info, warn, error, fatal, panic)",
Value: "info",
}
// LogVModuleFlag defines per-package log levels.
LogVModuleFlag = &cli.StringSliceFlag{
Name: "log.vmodule",
Usage: "Per-package log verbosity. packagePath=level entries separated by commas.",
}
// DataDirFlag defines a path on disk where Prysm databases are stored.
DataDirFlag = &cli.StringFlag{
Name: "datadir",

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
)
@@ -237,3 +238,41 @@ func TestValidateNoArgs_SubcommandFlags(t *testing.T) {
err = app.Run([]string{"command", "bar", "subComm2", "--barfoo100", "garbage", "subComm4"})
require.ErrorContains(t, "unrecognized argument: garbage", err)
}
func TestParseVModule(t *testing.T) {
t.Run("valid", func(t *testing.T) {
input := "beacon-chain/p2p=error, beacon-chain/light-client=trace"
parsed, maxL, err := ParseVModule(input)
require.NoError(t, err)
require.Equal(t, logrus.ErrorLevel, parsed["beacon-chain/p2p"])
require.Equal(t, logrus.TraceLevel, parsed["beacon-chain/light-client"])
require.Equal(t, logrus.TraceLevel, maxL)
})
t.Run("empty", func(t *testing.T) {
parsed, maxL, err := ParseVModule(" ")
require.NoError(t, err)
require.IsNil(t, parsed)
require.Equal(t, logrus.PanicLevel, maxL)
})
t.Run("invalid", func(t *testing.T) {
tests := []string{
"beacon-chain/p2p",
"beacon-chain/p2p=",
"beacon-chain/p2p/=error",
"=info",
"beacon-chain/*=info",
"beacon-chain/p2p=meow",
"/beacon-chain/p2p=info",
"beacon-chain/../p2p=info",
"beacon-chain/p2p=info,",
"beacon-chain/p2p=info,beacon-chain/p2p=debug",
}
for _, input := range tests {
_, maxL, err := ParseVModule(input)
require.ErrorContains(t, "invalid", err)
require.Equal(t, logrus.PanicLevel, maxL)
}
})
}

View File

@@ -4,6 +4,7 @@ import (
"bufio"
"fmt"
"os"
"path"
"runtime"
"strings"
@@ -102,3 +103,63 @@ func ExpandSingleEndpointIfFile(ctx *cli.Context, flag *cli.StringFlag) error {
}
return nil
}
// ParseVModule parses a comma-separated list of package=level entries.
func ParseVModule(input string) (map[string]logrus.Level, logrus.Level, error) {
var l logrus.Level
trimmed := strings.TrimSpace(input)
if trimmed == "" {
return nil, l, nil
}
parts := strings.Split(trimmed, ",")
result := make(map[string]logrus.Level, len(parts))
for _, raw := range parts {
entry := strings.TrimSpace(raw)
if entry == "" {
return nil, l, fmt.Errorf("invalid vmodule entry: empty segment")
}
kv := strings.Split(entry, "=")
if len(kv) != 2 {
return nil, l, fmt.Errorf("invalid vmodule entry %q: expected path=level", entry)
}
pkg := strings.TrimSpace(kv[0])
levelText := strings.TrimSpace(kv[1])
if pkg == "" {
return nil, l, fmt.Errorf("invalid vmodule entry %q: empty package path", entry)
}
if levelText == "" {
return nil, l, fmt.Errorf("invalid vmodule entry %q: empty level", entry)
}
if strings.Contains(pkg, "*") {
return nil, l, fmt.Errorf("invalid vmodule package path %q: wildcards are not allowed", pkg)
}
if strings.ContainsAny(pkg, " \t\n") {
return nil, l, fmt.Errorf("invalid vmodule package path %q: whitespace is not allowed", pkg)
}
if strings.HasPrefix(pkg, "/") {
return nil, l, fmt.Errorf("invalid vmodule package path %q: leading slash is not allowed", pkg)
}
cleaned := path.Clean(pkg)
if cleaned != pkg || pkg == "." || pkg == ".." {
return nil, l, fmt.Errorf("invalid vmodule package path %q: must be a clean package path without a trailing slash or '.'/'..' segments", pkg)
}
if _, exists := result[pkg]; exists {
return nil, l, fmt.Errorf("invalid vmodule package path %q: duplicate entry", pkg)
}
level, err := logrus.ParseLevel(levelText)
if err != nil {
return nil, l, fmt.Errorf("invalid vmodule level %q: must be one of panic, fatal, error, warn, info, debug, trace", levelText)
}
result[pkg] = level
}
maxLevel := logrus.PanicLevel
for _, lvl := range result {
if lvl > maxLevel {
maxLevel = lvl
}
}
return result, maxLevel, nil
}
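
To make the intended effect concrete, here is a standalone sketch of how a per-package override from `ParseVModule` is assumed to gate a log entry: the package's own level wins when present, otherwise the base verbosity applies (this is not the actual formatter code):
```go
package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
)

// allowed reports whether a message at msgLevel from pkg should be emitted,
// given a base verbosity and the per-package overrides. Higher logrus levels
// are more verbose (Panic=0 ... Trace=6).
func allowed(pkg string, msgLevel, base logrus.Level, vmodule map[string]logrus.Level) bool {
	if lvl, ok := vmodule[pkg]; ok {
		return msgLevel <= lvl
	}
	return msgLevel <= base
}

func main() {
	vmodule := map[string]logrus.Level{
		"beacon-chain/p2p":  logrus.ErrorLevel,
		"beacon-chain/sync": logrus.DebugLevel,
	}
	fmt.Println(allowed("beacon-chain/p2p", logrus.InfoLevel, logrus.InfoLevel, vmodule))   // false: p2p is capped at error
	fmt.Println(allowed("beacon-chain/sync", logrus.DebugLevel, logrus.InfoLevel, vmodule)) // true: sync is raised to debug
	fmt.Println(allowed("beacon-chain/db", logrus.InfoLevel, logrus.InfoLevel, vmodule))    // true: falls back to the base level
}
```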

View File

@@ -9,6 +9,7 @@ import (
"os"
"path/filepath"
runtimeDebug "runtime/debug"
"strings"
"github.com/OffchainLabs/prysm/v7/cmd"
accountcommands "github.com/OffchainLabs/prysm/v7/cmd/validator/accounts"
@@ -95,6 +96,7 @@ var appFlags = []cli.Flag{
cmd.MinimalConfigFlag,
cmd.E2EConfigFlag,
cmd.VerbosityFlag,
cmd.LogVModuleFlag,
cmd.DataDirFlag,
cmd.ClearDB,
cmd.ForceClearDB,
@@ -140,21 +142,32 @@ func main() {
slashingprotectioncommands.Commands,
dbcommands.Commands,
web.Commands,
cmd.CompletionCommand("validator"),
},
Flags: appFlags,
Flags: appFlags,
EnableBashCompletion: true,
Before: func(ctx *cli.Context) error {
// Load flags from config file, if specified.
if err := cmd.LoadFlagsFromConfig(ctx, appFlags); err != nil {
return err
}
// determine log verbosity
// determine default log verbosity
verbosity := ctx.String(cmd.VerbosityFlag.Name)
verbosityLevel, err := logrus.ParseLevel(verbosity)
if err != nil {
return errors.Wrap(err, "failed to parse log verbosity")
}
logs.SetLoggingLevel(verbosityLevel)
// determine per-package verbosity. if not set, maxLevel will be 0.
vmoduleInput := strings.Join(ctx.StringSlice(cmd.LogVModuleFlag.Name), ",")
vmodule, maxLevel, err := cmd.ParseVModule(vmoduleInput)
if err != nil {
return errors.Wrap(err, "failed to parse log vmodule")
}
// set the global logging level to allow for the highest verbosity requested
logs.SetLoggingLevel(max(maxLevel, verbosityLevel))
logFileName := ctx.String(cmd.LogFileName.Name)
@@ -170,11 +183,13 @@ func main() {
formatter.FullTimestamp = true
formatter.ForceFormatting = true
formatter.ForceColors = true
formatter.VModule = vmodule
formatter.BaseVerbosity = verbosityLevel
logrus.AddHook(&logs.WriterHook{
Formatter: formatter,
Writer: os.Stderr,
AllowedLevels: logrus.AllLevels[:verbosityLevel+1],
AllowedLevels: logrus.AllLevels[:max(verbosityLevel, maxLevel)+1],
})
case "fluentd":
f := joonix.NewFormatter()
@@ -195,7 +210,7 @@ func main() {
}
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel, vmodule); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}

View File

@@ -76,6 +76,7 @@ var appHelpFlagGroups = []flagGroup{
cmd.AcceptTosFlag,
cmd.ApiTimeoutFlag,
flags.DisableEphemeralLogFile,
cmd.LogVModuleFlag,
},
},
{

View File

@@ -47,6 +47,7 @@ type BeaconChainConfig struct {
HysteresisQuotient uint64 `yaml:"HYSTERESIS_QUOTIENT" spec:"true"` // HysteresisQuotient defines the hysteresis quotient for effective balance calculations.
HysteresisDownwardMultiplier uint64 `yaml:"HYSTERESIS_DOWNWARD_MULTIPLIER" spec:"true"` // HysteresisDownwardMultiplier defines the hysteresis downward multiplier for effective balance calculations.
HysteresisUpwardMultiplier uint64 `yaml:"HYSTERESIS_UPWARD_MULTIPLIER" spec:"true"` // HysteresisUpwardMultiplier defines the hysteresis upward multiplier for effective balance calculations.
BuilderIndexFlag uint64 `yaml:"BUILDER_INDEX_FLAG" spec:"true"` // BuilderIndexFlag marks a ValidatorIndex as a BuilderIndex when the bit is set.
// Gwei value constants.
MinDepositAmount uint64 `yaml:"MIN_DEPOSIT_AMOUNT" spec:"true"` // MinDepositAmount is the minimum amount of Gwei a validator can send to the deposit contract at once (lower amounts will be reverted).
@@ -88,7 +89,7 @@ type BeaconChainConfig struct {
IntervalsPerSlot uint64 `yaml:"INTERVALS_PER_SLOT"` // IntervalsPerSlot defines the number of fork choice intervals in a slot defined in the fork choice spec.
ProposerReorgCutoffBPS primitives.BP `yaml:"PROPOSER_REORG_CUTOFF_BPS" spec:"true"` // ProposerReorgCutoffBPS defines the proposer reorg deadline in basis points of the slot.
AttestationDueBPS primitives.BP `yaml:"ATTESTATION_DUE_BPS" spec:"true"` // AttestationDueBPS defines the attestation due time in basis points of the slot.
AggregrateDueBPS primitives.BP `yaml:"AGGREGRATE_DUE_BPS" spec:"true"` // AggregrateDueBPS defines the aggregate due time in basis points of the slot.
AggregateDueBPS primitives.BP `yaml:"AGGREGATE_DUE_BPS" spec:"true"` // AggregateDueBPS defines the aggregate due time in basis points of the slot.
SyncMessageDueBPS primitives.BP `yaml:"SYNC_MESSAGE_DUE_BPS" spec:"true"` // SyncMessageDueBPS defines the sync message due time in basis points of the slot.
ContributionDueBPS primitives.BP `yaml:"CONTRIBUTION_DUE_BPS" spec:"true"` // ContributionDueBPS defines the contribution due time in basis points of the slot.
@@ -126,6 +127,7 @@ type BeaconChainConfig struct {
MaxWithdrawalsPerPayload uint64 `yaml:"MAX_WITHDRAWALS_PER_PAYLOAD" spec:"true"` // MaxWithdrawalsPerPayload defines the maximum number of withdrawals in a block.
MaxBlsToExecutionChanges uint64 `yaml:"MAX_BLS_TO_EXECUTION_CHANGES" spec:"true"` // MaxBlsToExecutionChanges defines the maximum number of BLS-to-execution-change objects in a block.
MaxValidatorsPerWithdrawalsSweep uint64 `yaml:"MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP" spec:"true"` // MaxValidatorsPerWithdrawalsSweep bounds the size of the sweep searching for withdrawals per slot.
MaxBuildersPerWithdrawalsSweep uint64 `yaml:"MAX_BUILDERS_PER_WITHDRAWALS_SWEEP" spec:"true"` // MaxBuildersPerWithdrawalsSweep bounds the size of the builder withdrawals sweep per slot.
// BLS domain values.
DomainBeaconProposer [4]byte `yaml:"DOMAIN_BEACON_PROPOSER" spec:"true"` // DomainBeaconProposer defines the BLS signature domain for beacon proposal verification.
@@ -293,6 +295,10 @@ type BeaconChainConfig struct {
ValidatorCustodyRequirement uint64 `yaml:"VALIDATOR_CUSTODY_REQUIREMENT" spec:"true"` // ValidatorCustodyRequirement is the minimum number of custody groups an honest node with validators attached custodies and serves samples from
BalancePerAdditionalCustodyGroup uint64 `yaml:"BALANCE_PER_ADDITIONAL_CUSTODY_GROUP" spec:"true"` // BalancePerAdditionalCustodyGroup is the balance increment corresponding to one additional group to custody.
// Values introduced in Gloas upgrade
BuilderPaymentThresholdNumerator uint64 `yaml:"BUILDER_PAYMENT_THRESHOLD_NUMERATOR" spec:"true"` // BuilderPaymentThresholdNumerator is the numerator for builder payment quorum threshold calculation.
BuilderPaymentThresholdDenominator uint64 `yaml:"BUILDER_PAYMENT_THRESHOLD_DENOMINATOR" spec:"true"` // BuilderPaymentThresholdDenominator is the denominator for builder payment quorum threshold calculation.
// Networking Specific Parameters
MaxPayloadSize uint64 `yaml:"MAX_PAYLOAD_SIZE" spec:"true"` // MAX_PAYLOAD_SIZE is the maximum allowed size of uncompressed payload in gossip messages and rpc chunks.
AttestationSubnetCount uint64 `yaml:"ATTESTATION_SUBNET_COUNT" spec:"true"` // AttestationSubnetCount is the number of attestation subnets used in the gossipsub protocol.

View File

@@ -194,6 +194,8 @@ func ConfigToYaml(cfg *BeaconChainConfig) []byte {
fmt.Sprintf("SHARD_COMMITTEE_PERIOD: %d", cfg.ShardCommitteePeriod),
fmt.Sprintf("MIN_VALIDATOR_WITHDRAWABILITY_DELAY: %d", cfg.MinValidatorWithdrawabilityDelay),
fmt.Sprintf("MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP: %d", cfg.MaxValidatorsPerWithdrawalsSweep),
fmt.Sprintf("MAX_BUILDERS_PER_WITHDRAWALS_SWEEP: %d", cfg.MaxBuildersPerWithdrawalsSweep),
fmt.Sprintf("BUILDER_INDEX_FLAG: %d", cfg.BuilderIndexFlag),
fmt.Sprintf("MAX_SEED_LOOKAHEAD: %d", cfg.MaxSeedLookahead),
fmt.Sprintf("EJECTION_BALANCE: %d", cfg.EjectionBalance),
fmt.Sprintf("MIN_PER_EPOCH_CHURN_LIMIT: %d", cfg.MinPerEpochChurnLimit),
@@ -243,7 +245,7 @@ func ConfigToYaml(cfg *BeaconChainConfig) []byte {
fmt.Sprintf("MAX_BLOBS_PER_BLOCK: %d", cfg.DeprecatedMaxBlobsPerBlock),
fmt.Sprintf("PROPOSER_REORG_CUTOFF_BPS: %d", cfg.ProposerReorgCutoffBPS),
fmt.Sprintf("ATTESTATION_DUE_BPS: %d", cfg.AttestationDueBPS),
fmt.Sprintf("AGGREGRATE_DUE_BPS: %d", cfg.AggregrateDueBPS),
fmt.Sprintf("AGGREGATE_DUE_BPS: %d", cfg.AggregateDueBPS),
fmt.Sprintf("SYNC_MESSAGE_DUE_BPS: %d", cfg.SyncMessageDueBPS),
fmt.Sprintf("CONTRIBUTION_DUE_BPS: %d", cfg.ContributionDueBPS),
}

View File

@@ -24,7 +24,6 @@ import (
// These are variables that we don't use in Prysm. (i.e. future hardfork, light client... etc)
// IMPORTANT: Use one field per line and sort these alphabetically to reduce conflicts.
var placeholderFields = []string{
"AGGREGATE_DUE_BPS",
"AGGREGATE_DUE_BPS_GLOAS",
"ATTESTATION_DEADLINE",
"ATTESTATION_DUE_BPS_GLOAS",
@@ -99,7 +98,7 @@ func assertEqualConfigs(t *testing.T, name string, fields []string, expected, ac
assert.Equal(t, expected.HysteresisDownwardMultiplier, actual.HysteresisDownwardMultiplier, "%s: HysteresisDownwardMultiplier", name)
assert.Equal(t, expected.HysteresisUpwardMultiplier, actual.HysteresisUpwardMultiplier, "%s: HysteresisUpwardMultiplier", name)
assert.Equal(t, expected.AttestationDueBPS, actual.AttestationDueBPS, "%s: AttestationDueBPS", name)
assert.Equal(t, expected.AggregrateDueBPS, actual.AggregrateDueBPS, "%s: AggregrateDueBPS", name)
assert.Equal(t, expected.AggregateDueBPS, actual.AggregateDueBPS, "%s: AggregateDueBPS", name)
assert.Equal(t, expected.ContributionDueBPS, actual.ContributionDueBPS, "%s: ContributionDueBPS", name)
assert.Equal(t, expected.ProposerReorgCutoffBPS, actual.ProposerReorgCutoffBPS, "%s: ProposerReorgCutoffBPS", name)
assert.Equal(t, expected.SyncMessageDueBPS, actual.SyncMessageDueBPS, "%s: SyncMessageDueBPS", name)

View File

@@ -83,6 +83,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
HysteresisQuotient: 4,
HysteresisDownwardMultiplier: 1,
HysteresisUpwardMultiplier: 5,
BuilderIndexFlag: 1099511627776,
// Gwei value constants.
MinDepositAmount: 1 * 1e9,
@@ -123,7 +124,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
// Time-based protocol parameters.
ProposerReorgCutoffBPS: primitives.BP(1667),
AttestationDueBPS: primitives.BP(3333),
AggregrateDueBPS: primitives.BP(6667),
AggregateDueBPS: primitives.BP(6667),
SyncMessageDueBPS: primitives.BP(3333),
ContributionDueBPS: primitives.BP(6667),
@@ -169,6 +170,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
MaxWithdrawalsPerPayload: 16,
MaxBlsToExecutionChanges: 16,
MaxValidatorsPerWithdrawalsSweep: 16384,
MaxBuildersPerWithdrawalsSweep: 16384,
// BLS domain values.
DomainBeaconProposer: bytesutil.Uint32ToBytes4(0x00000000),
@@ -331,6 +333,11 @@ var mainnetBeaconConfig = &BeaconChainConfig{
MinEpochsForDataColumnSidecarsRequest: 4096,
ValidatorCustodyRequirement: 8,
BalancePerAdditionalCustodyGroup: 32_000_000_000,
// Values related to gloas
BuilderPaymentThresholdNumerator: 6,
BuilderPaymentThresholdDenominator: 10,
// Values related to networking parameters.
MaxPayloadSize: 10 * 1 << 20, // 10 MiB
AttestationSubnetCount: 64,

View File

@@ -49,6 +49,7 @@ func compareConfigs(t *testing.T, expected, actual *params.BeaconChainConfig) {
require.DeepEqual(t, expected.HysteresisQuotient, actual.HysteresisQuotient)
require.DeepEqual(t, expected.HysteresisDownwardMultiplier, actual.HysteresisDownwardMultiplier)
require.DeepEqual(t, expected.HysteresisUpwardMultiplier, actual.HysteresisUpwardMultiplier)
require.DeepEqual(t, expected.BuilderIndexFlag, actual.BuilderIndexFlag)
require.DeepEqual(t, expected.MinDepositAmount, actual.MinDepositAmount)
require.DeepEqual(t, expected.MaxEffectiveBalance, actual.MaxEffectiveBalance)
require.DeepEqual(t, expected.EjectionBalance, actual.EjectionBalance)
@@ -94,6 +95,7 @@ func compareConfigs(t *testing.T, expected, actual *params.BeaconChainConfig) {
require.DeepEqual(t, expected.MaxDeposits, actual.MaxDeposits)
require.DeepEqual(t, expected.MaxVoluntaryExits, actual.MaxVoluntaryExits)
require.DeepEqual(t, expected.MaxWithdrawalsPerPayload, actual.MaxWithdrawalsPerPayload)
require.DeepEqual(t, expected.MaxBuildersPerWithdrawalsSweep, actual.MaxBuildersPerWithdrawalsSweep)
require.DeepEqual(t, expected.DomainBeaconProposer, actual.DomainBeaconProposer)
require.DeepEqual(t, expected.DomainRandao, actual.DomainRandao)
require.DeepEqual(t, expected.DomainBeaconAttester, actual.DomainBeaconAttester)

View File

@@ -671,7 +671,7 @@ func hydrateBeaconBlockBodyGloas() *eth.BeaconBlockBodyGloas {
BlockHash: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, 20),
BlobKzgCommitmentsRoot: make([]byte, fieldparams.RootLength),
BlobKzgCommitments: [][]byte{make([]byte, fieldparams.BLSPubkeyLength)},
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},

View File

@@ -1,6 +1,7 @@
package blocks
import (
field_params "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -43,11 +44,17 @@ func (h executionPayloadBidGloas) IsNil() bool {
len(h.payload.ParentBlockRoot) != 32 ||
len(h.payload.BlockHash) != 32 ||
len(h.payload.PrevRandao) != 32 ||
len(h.payload.BlobKzgCommitmentsRoot) != 32 ||
len(h.payload.BlobKzgCommitments) > field_params.MaxBlobCommitmentsPerBlock ||
len(h.payload.FeeRecipient) != 20 {
return true
}
for _, commitment := range h.payload.BlobKzgCommitments {
if len(commitment) != 48 {
return true
}
}
return false
}
@@ -131,9 +138,9 @@ func (h executionPayloadBidGloas) ExecutionPayment() primitives.Gwei {
return primitives.Gwei(h.payload.ExecutionPayment)
}
// BlobKzgCommitmentsRoot returns the root of the KZG commitments for blobs.
func (h executionPayloadBidGloas) BlobKzgCommitmentsRoot() [32]byte {
return [32]byte(h.payload.BlobKzgCommitmentsRoot)
// BlobKzgCommitments returns the KZG commitments for blobs.
func (h executionPayloadBidGloas) BlobKzgCommitments() [][]byte {
return h.payload.BlobKzgCommitments
}
// FeeRecipient returns the execution address that will receive the builder payment.

View File

@@ -24,7 +24,7 @@ func validExecutionPayloadBid() *ethpb.ExecutionPayloadBid {
Slot: 6,
Value: 7,
ExecutionPayment: 8,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x05}, 32),
BlobKzgCommitments: [][]byte{bytes.Repeat([]byte{0x05}, 48)},
FeeRecipient: bytes.Repeat([]byte{0x06}, 20),
}
}
@@ -52,8 +52,8 @@ func TestWrappedROExecutionPayloadBid(t *testing.T) {
mutate: func(b *ethpb.ExecutionPayloadBid) { b.PrevRandao = []byte{0x04} },
},
{
name: "blob kzg commitments root",
mutate: func(b *ethpb.ExecutionPayloadBid) { b.BlobKzgCommitmentsRoot = []byte{0x05} },
name: "blob kzg commitments length",
mutate: func(b *ethpb.ExecutionPayloadBid) { b.BlobKzgCommitments = [][]byte{{0x05}} },
},
{
name: "fee recipient",
@@ -85,7 +85,7 @@ func TestWrappedROExecutionPayloadBid(t *testing.T) {
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0x02}, 32)), wrapped.ParentBlockRoot())
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0x03}, 32)), wrapped.BlockHash())
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0x04}, 32)), wrapped.PrevRandao())
assert.DeepEqual(t, [32]byte(bytes.Repeat([]byte{0x05}, 32)), wrapped.BlobKzgCommitmentsRoot())
assert.DeepEqual(t, [][]byte{bytes.Repeat([]byte{0x05}, 48)}, wrapped.BlobKzgCommitments())
assert.DeepEqual(t, [20]byte(bytes.Repeat([]byte{0x06}, 20)), wrapped.FeeRecipient())
})
}

View File

@@ -22,7 +22,7 @@ type ROExecutionPayloadBid interface {
Slot() primitives.Slot
Value() primitives.Gwei
ExecutionPayment() primitives.Gwei
BlobKzgCommitmentsRoot() [32]byte
BlobKzgCommitments() [][]byte
FeeRecipient() [20]byte
IsNil() bool
}
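
With the commitments root replaced by the full list, the bid wrapper's `IsNil` check above validates the commitments themselves. The check reduces to a bounded list of 48-byte entries; a standalone sketch (the bound constant is illustrative, the real value comes from `fieldparams`):
```go
package main

import (
	"bytes"
	"fmt"
)

const (
	kzgCommitmentLength        = 48
	maxBlobCommitmentsPerBlock = 4096 // illustrative bound
)

// commitmentsValid mirrors the shape of the bid wrapper's checks: the list must
// not exceed the per-block bound and every commitment must be exactly 48 bytes.
func commitmentsValid(commitments [][]byte) bool {
	if len(commitments) > maxBlobCommitmentsPerBlock {
		return false
	}
	for _, c := range commitments {
		if len(c) != kzgCommitmentLength {
			return false
		}
	}
	return true
}

func main() {
	good := [][]byte{bytes.Repeat([]byte{0x05}, kzgCommitmentLength)}
	bad := [][]byte{{0x05}}
	fmt.Println(commitmentsValid(good), commitmentsValid(bad)) // true false
}
```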

View File

@@ -13,6 +13,15 @@ var _ fssz.Unmarshaler = (*BuilderIndex)(nil)
// BuilderIndex is an index into the builder registry.
type BuilderIndex uint64
// ToValidatorIndex sets the builder flag on a builder index.
//
// Spec v1.6.1 (pseudocode):
// def convert_builder_index_to_validator_index(builder_index: BuilderIndex) -> ValidatorIndex:
// return ValidatorIndex(builder_index | BUILDER_INDEX_FLAG)
func (b BuilderIndex) ToValidatorIndex() ValidatorIndex {
return ValidatorIndex(uint64(b) | BuilderIndexFlag)
}
// HashTreeRoot returns the SSZ hash tree root of the index.
func (b BuilderIndex) HashTreeRoot() ([32]byte, error) {
return fssz.HashWithDefaultHasher(b)

View File

@@ -10,9 +10,32 @@ var _ fssz.HashRoot = (ValidatorIndex)(0)
var _ fssz.Marshaler = (*ValidatorIndex)(nil)
var _ fssz.Unmarshaler = (*ValidatorIndex)(nil)
// BuilderIndexFlag marks a ValidatorIndex as a BuilderIndex when the bit is set.
//
// Spec v1.6.1: BUILDER_INDEX_FLAG.
const BuilderIndexFlag uint64 = 1 << 40
// ValidatorIndex in eth2.
type ValidatorIndex uint64
// IsBuilderIndex returns true when the BuilderIndex flag is set on the validator index.
//
// Spec v1.6.1 (pseudocode):
// def is_builder_index(validator_index: ValidatorIndex) -> bool:
// return (validator_index & BUILDER_INDEX_FLAG) != 0
func (v ValidatorIndex) IsBuilderIndex() bool {
return uint64(v)&BuilderIndexFlag != 0
}
// ToBuilderIndex strips the builder flag from a validator index.
//
// Spec v1.6.1 (pseudocode):
// def convert_validator_index_to_builder_index(validator_index: ValidatorIndex) -> BuilderIndex:
// return BuilderIndex(validator_index & ~BUILDER_INDEX_FLAG)
func (v ValidatorIndex) ToBuilderIndex() BuilderIndex {
return BuilderIndex(uint64(v) & ^BuilderIndexFlag)
}
// Div divides validator index by x.
// This method panics if dividing by zero!
func (v ValidatorIndex) Div(x uint64) ValidatorIndex {

View File

@@ -33,3 +33,32 @@ func TestValidatorIndex_Casting(t *testing.T) {
}
})
}
func TestValidatorIndex_BuilderIndexFlagConversions(t *testing.T) {
base := uint64(42)
unflagged := ValidatorIndex(base)
if unflagged.IsBuilderIndex() {
t.Fatalf("expected unflagged validator index to not be a builder index")
}
if got, want := unflagged.ToBuilderIndex(), BuilderIndex(base); got != want {
t.Fatalf("unexpected builder index: got %d want %d", got, want)
}
flagged := ValidatorIndex(base | BuilderIndexFlag)
if !flagged.IsBuilderIndex() {
t.Fatalf("expected flagged validator index to be a builder index")
}
if got, want := flagged.ToBuilderIndex(), BuilderIndex(base); got != want {
t.Fatalf("unexpected builder index: got %d want %d", got, want)
}
builder := BuilderIndex(base)
roundTrip := builder.ToValidatorIndex()
if !roundTrip.IsBuilderIndex() {
t.Fatalf("expected round-tripped validator index to be a builder index")
}
if got, want := roundTrip.ToBuilderIndex(), builder; got != want {
t.Fatalf("unexpected round-trip builder index: got %d want %d", got, want)
}
}
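
The round trip exercised above is plain bit arithmetic on the 2^40 flag (1099511627776); a worked example:
```go
package main

import "fmt"

const builderIndexFlag uint64 = 1 << 40 // 1099511627776

func main() {
	builder := uint64(42)
	asValidator := builder | builderIndexFlag      // 1099511627818
	fmt.Println(asValidator&builderIndexFlag != 0) // true: the flag bit is set
	fmt.Println(asValidator &^ builderIndexFlag)   // 42: stripping the flag recovers the builder index
}
```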

View File

@@ -30,7 +30,7 @@ func addLogWriter(w io.Writer) {
}
// ConfigurePersistentLogging adds a log-to-file writer. File content is identical to stdout.
func ConfigurePersistentLogging(logFileName string, format string, lvl logrus.Level) error {
func ConfigurePersistentLogging(logFileName string, format string, lvl logrus.Level, vmodule map[string]logrus.Level) error {
logrus.WithField("logFileName", logFileName).Info("Logs will be made persistent")
if err := file.MkdirAll(filepath.Dir(logFileName)); err != nil {
return err
@@ -47,6 +47,13 @@ func ConfigurePersistentLogging(logFileName string, format string, lvl logrus.Le
return nil
}
maxVmoduleLevel := logrus.PanicLevel
for _, level := range vmodule {
if level > maxVmoduleLevel {
maxVmoduleLevel = level
}
}
// Create formatter and writer hook
formatter := new(prefixed.TextFormatter)
formatter.TimestampFormat = "2006-01-02 15:04:05.00"
@@ -54,11 +61,13 @@ func ConfigurePersistentLogging(logFileName string, format string, lvl logrus.Le
// If persistent log files are written - we disable the log messages coloring because
// the colors are ANSI codes and seen as gibberish in the log files.
formatter.DisableColors = true
formatter.BaseVerbosity = lvl
formatter.VModule = vmodule
logrus.AddHook(&WriterHook{
Formatter: formatter,
Writer: f,
AllowedLevels: logrus.AllLevels[:lvl+1],
AllowedLevels: logrus.AllLevels[:max(lvl, maxVmoduleLevel)+1],
})
logrus.Info("File logging initialized")

View File

@@ -28,20 +28,20 @@ func TestMaskCredentialsLogging(t *testing.T) {
}
}
func TestConfigurePersistantLogging(t *testing.T) {
func TestConfigurePersistentLogging(t *testing.T) {
testParentDir := t.TempDir()
// 1. Test creation of file in an existing parent directory
logFileName := "test.log"
existingDirectory := "test-1-existing-testing-dir"
err := ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, logFileName), "text", logrus.InfoLevel)
err := ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, logFileName), "text", logrus.InfoLevel, map[string]logrus.Level{})
require.NoError(t, err)
// 2. Test creation of file along with parent directory
nonExistingDirectory := "test-2-non-existing-testing-dir"
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, nonExistingDirectory, logFileName), "text", logrus.InfoLevel)
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, nonExistingDirectory, logFileName), "text", logrus.InfoLevel, map[string]logrus.Level{})
require.NoError(t, err)
// 3. Test creation of file in an existing parent directory with a non-existing sub-directory
@@ -52,7 +52,7 @@ func TestConfigurePersistantLogging(t *testing.T) {
return
}
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory, logFileName), "text", logrus.InfoLevel)
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory, logFileName), "text", logrus.InfoLevel, map[string]logrus.Level{})
require.NoError(t, err)
//4. Create log file in a directory without 700 permissions

View File

@@ -7,11 +7,10 @@
package dbval
import (
reflect "reflect"
sync "sync"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (

Some files were not shown because too many files have changed in this diff.