Compare commits

...

18 Commits

Author SHA1 Message Date
terence tsao
bfa4f7ac3a gloas: add process execution payload 2026-01-11 17:19:01 +08:00
Jun Song
21366e11ca feat: generalize proof generation for beacon state fields (#15443)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

This PR is part of the EPF6 project: [Merkle Proofs of
Everything](https://github.com/eth-protocol-fellows/cohort-six/blob/master/projects/project-ideas.md#prysm-merkle-proofs-of-everything).

**Other notes for review**

You can see some rationale and future TODOs in [my HackMD
post](https://hackmd.io/@junsong/SJGze5cNxg). The main next task is the
following (excerpted from the post):

> More generally speaking, we should find a way to prove an item with a
generalized index greater than 128.
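
For background on that limitation, a minimal, illustrative sketch of how SSZ generalized indices for container fields are typically computed (hypothetical helper names, not Prysm's proof API):

```go
package main

import "fmt"

// nextPowerOfTwo returns the smallest power of two greater than or equal to n.
func nextPowerOfTwo(n uint64) uint64 {
	p := uint64(1)
	for p < n {
		p <<= 1
	}
	return p
}

// generalizedIndex follows the usual SSZ convention for the i-th field of a
// container whose Merkle tree is padded to a power-of-two number of leaves:
// gindex = next_pow_of_two(leaves) + i.
func generalizedIndex(leaves, fieldIndex uint64) uint64 {
	return nextPowerOfTwo(leaves) + fieldIndex
}

func main() {
	// A container with 37 fields is padded to 64 leaves, so field 20 sits at
	// generalized index 64 + 20 = 84; descending into that field's subtree
	// quickly produces indices above 128.
	fmt.Println(generalizedIndex(37, 20)) // 84
}
```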

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-10 13:15:25 +00:00
Potuz
7ed4d496dd Add feature flag to verify signatures before proposing (#15920)
Adds a feature flag, `--enable-proposer-preprocessing`, to verify individual
signatures in the block right before proposing, falling back to empty fields
in case of failure.
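
A minimal sketch of that fallback behavior (hypothetical helper, not Prysm's actual proposer code):

```go
package proposer

// filterVerified illustrates the preprocessing idea: verify each item right
// before proposing and drop the ones that fail, so a bad signature degrades
// to an empty (or smaller) field instead of an invalid block.
func filterVerified[T any](items []T, verify func(T) error) []T {
	kept := make([]T, 0, len(items))
	for _, item := range items {
		if err := verify(item); err != nil {
			continue // omit the failing item rather than aborting the proposal
		}
		kept = append(kept, item)
	}
	return kept
}
```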

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 18:35:40 +00:00
Potuz
158c09ca8c Add --low-valcount-sweep feature flag for withdrawal sweep bound (#16231)
Gate the withdrawal sweep optimization (using the minimum of the validator
count and MaxValidatorsPerWithdrawalsSweep) behind a hidden feature flag that
defaults to false. Enable the flag for spectests to match the consensus spec.

The backported changes are from consensus-specs
[PR 4788](https://github.com/ethereum/consensus-specs/pull/4788).
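
An illustrative sketch of the gated bound (hypothetical names, not the actual Prysm code):

```go
package withdrawals

// sweepBound returns how many validators the withdrawal sweep should visit.
// With the hidden --low-valcount-sweep flag enabled, the bound is the minimum
// of the validator count and MaxValidatorsPerWithdrawalsSweep; otherwise the
// previous behavior (the spec constant alone) is kept.
func sweepBound(validatorCount, maxValidatorsPerSweep uint64, lowValcountSweep bool) uint64 {
	if lowValcountSweep && validatorCount < maxValidatorsPerSweep {
		return validatorCount
	}
	return maxValidatorsPerSweep
}
```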

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 16:14:03 +00:00
Bastin
17245f4fac Ephemeral debug logfile (#16108)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR introduces an ephemeral log file that captures debug logs for 24
hours.

- It captures debug logs regardless of the user-provided (or omitted)
`--verbosity` flag.
- It allows a maximum of 250MB per log file.
- It keeps one backup log file for size-based rotations (as opposed to
time-based ones); see the sketch after this list.
- It is enabled by default for beacon and validator nodes.
- The log files live in the `datadir/logs/` directory under the names
`beacon-chain.log` and `validator.log`; backups also carry a timestamp in
their names.
- The feature can be disabled with the `--disable-ephemeral-log-file` flag.
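
A minimal sketch of the described rotation policy (250MB per file, one backup, roughly 24-hour retention), using the common `lumberjack` package purely as an assumed stand-in; Prysm's actual implementation may use a different mechanism:

```go
package main

import (
	"path/filepath"

	"github.com/sirupsen/logrus"
	"gopkg.in/natefinch/lumberjack.v2"
)

// newEphemeralDebugWriter mirrors the policy described above: size-based
// rotation at 250MB, one backup file, and files aged out after about a day.
func newEphemeralDebugWriter(datadir string) *lumberjack.Logger {
	return &lumberjack.Logger{
		Filename:   filepath.Join(datadir, "logs", "beacon-chain.log"),
		MaxSize:    250, // megabytes before a size-based rotation
		MaxBackups: 1,   // keep a single rotated backup
		MaxAge:     1,   // days, i.e. roughly the 24-hour window
	}
}

func main() {
	logrus.SetLevel(logrus.DebugLevel)
	// In the actual feature this writer would back a debug-level hook so the
	// terminal verbosity is unaffected; using it as the global output here
	// keeps the sketch short.
	logrus.SetOutput(newEphemeralDebugWriter("/var/lib/prysm"))
	logrus.Debug("captured in the ephemeral log file")
}
```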
2026-01-08 14:16:22 +00:00
Bastin
53b0a574ab Set logging verbosity per writer hook instead of globally (#16106)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR sets the logging verbosity level per writer hook (per output:
terminal, log file, etc) rather than setting a global logrus level which
limits customizing each output.

it set the terminal and log file output to be the same as the user
provided `--verbosity` flag. so nothing changes in reality.

it also introduces a `SetLoggingLevel()` to be used instead of
`logrus.SetLeveL()` in order for us to be able to set a different
baseline level later on if needed. (my next PR will use this).

I'm only making this change in the `beacon-chain` and `validator` apps,
skipping tools like `bootnode` and `client-stats`.
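
A self-contained sketch of the idea; the `writerHook` type and the exact `SetLoggingLevel` signature are assumptions for illustration and may differ from Prysm's actual code:

```go
package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// writerHook writes entries to one output and filters by its own level list,
// so each output (terminal, log file, ...) can have a different verbosity.
type writerHook struct {
	writer    io.Writer
	formatter logrus.Formatter
	levels    []logrus.Level
}

func (h *writerHook) Levels() []logrus.Level { return h.levels }

func (h *writerHook) Fire(entry *logrus.Entry) error {
	b, err := h.formatter.Format(entry)
	if err != nil {
		return err
	}
	_, err = h.writer.Write(b)
	return err
}

// SetLoggingLevel keeps the global logrus baseline permissive and applies the
// user-provided verbosity per hook, leaving room to diverge the baseline later.
func SetLoggingLevel(userLevel logrus.Level) {
	logrus.SetOutput(io.Discard)       // the default writer emits nothing
	logrus.SetLevel(logrus.TraceLevel) // baseline; hooks decide what is shown
	logrus.AddHook(&writerHook{
		writer:    os.Stderr,
		formatter: &logrus.TextFormatter{ForceColors: true},
		levels:    logrus.AllLevels[:userLevel+1], // PanicLevel..userLevel
	})
}

func main() {
	SetLoggingLevel(logrus.InfoLevel)
	logrus.Info("shown on the terminal")
	logrus.Debug("filtered out by the terminal hook")
}
```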
2026-01-08 12:19:16 +00:00
terence
c96d188468 gloas: add builders registry and update state fields (#16164)
This PR implements the Gloas builder registry and related beacon state
fields per the spec, including proto/SSZ updates and state-native wiring
for builders, payload availability, pending payments/withdrawals, and
expected withdrawals. This aligns BeaconState with the Gloas container
changes and adds supporting hashing/copy helpers.

Spec ref:
https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md
2026-01-07 21:51:56 +00:00
Preston Van Loon
0fcb922702 Changelog for v7.1.2 (#16225)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-07 20:06:51 +00:00
satushh
3646a77bfb Use copy() instead of byte-by-byte loop (#16222)

**What type of PR is this?**

Optimisation

**What does this PR do? Why is it needed?**

Use `copy()` instead of a byte-by-byte loop, which isn't required.
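
An illustrative before/after of the change (not the exact Prysm code):

```go
package bytesexample

// cloneRoot copies a root into a fresh slice. The built-in copy replaces an
// explicit byte-by-byte loop with the same result.
func cloneRoot(root []byte) []byte {
	out := make([]byte, len(root))
	// Before:
	//   for i := 0; i < len(root); i++ {
	//       out[i] = root[i]
	//   }
	copy(out, root) // After: one call to the built-in copy
	return out
}
```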

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-07 17:15:52 +00:00
Potuz
1541558261 update spectests (#16219) 2026-01-07 16:48:50 +00:00
james-prysm
1a6252ade4 changing isHealthy to isReady (#16167)

**What type of PR is this?**

 Bug fix

**What does this PR do? Why is it needed?**

Validator fallbacks shouldn't use nodes that are still syncing, since many of
the tasks validators perform require the node to be fully synced.

- 206 (or any other non-200 code) is interpreted as "not ready"
- 200 is interpreted as "ready" (see the sketch below)
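
A minimal sketch of that rule (hypothetical helper, not the exact Prysm code):

```go
package health

import "net/http"

// isReady treats only HTTP 200 from the node's health endpoint as "ready";
// 206 (still syncing) or any other status is "not ready".
func isReady(statusCode int) bool {
	return statusCode == http.StatusOK
}
```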

**Which issues(s) does this PR fix?**
 
continuation of https://github.com/OffchainLabs/prysm/pull/15401

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-06 18:58:12 +00:00
Preston Van Loon
27c009e7ff Tests: Add require.Eventually and fix a few test flakes (#16217)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

This is a better way to wait for a test condition than `time.Sleep`.
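
An example of the pattern, using the `require.Eventually` signature that appears in this comparison's diff (the test itself is illustrative):

```go
package example

import (
	"testing"
	"time"

	"github.com/OffchainLabs/prysm/v7/testing/require"
)

func TestEventArrives(t *testing.T) {
	events := make(chan struct{}, 1)
	go func() { events <- struct{}{} }()

	// Poll the condition for up to 2s, checking every 10ms, instead of
	// sleeping for a fixed amount of time and hoping the goroutine ran.
	require.Eventually(t, func() bool {
		return len(events) == 1
	}, 2*time.Second, 10*time.Millisecond, "expected the event to arrive")
}
```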

**Which issues(s) does this PR fix?**


**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-06 18:20:27 +00:00
Jonny Rhea
ffad861e2c WithMaxExportBatchSize is specified twice (#16211)

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

It's just a simple fix. I was looking at how Prysm uses OpenTelemetry and
noticed it.

**Which issues(s) does this PR fix?**

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-06 16:22:20 +00:00
Manu NALEPA
792fa22099 Add the --disable-get-blobs-v2 flag and fixes #16171 (#16155)
**What type of PR is this?**
Feature + Bugfix

**What does this PR do? Why is it needed?**
Starting with Fusaka, the beacon node can pull blobs from the execution layer
with the `engine_getBlobsV2` API.
This greatly reduces the burden on the beacon node. However, the beacon node
should be able to work 100% correctly without this help from the execution
layer.

This PR introduces the `--disable-get-blobs-v2` flag to simulate a 0% success
rate for this engine API.

This PR also fixes:
- https://github.com/OffchainLabs/prysm/issues/16171

Please review commit by commit, reading the commit messages.

**How to test it:**
For the `--disable-get-blobs-v2` part:

Run the beacon node with the `--disable-get-blobs-v2` flag in DEBUG
mode.
For every block with commitments, the following log should be displayed:
```
[2025-12-19 15:36:25.49] DEBUG sync: No data column sidecars constructed from the execution client ...
```

And the following log should **never** be displayed:
```
[2026-01-05 10:19:00.55] DEBUG sync: Constructed data column sidecars from the execution client count=...
```

For the #16171 part:
- The ERROR log shown in the linked issue should never be displayed.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-05 22:29:15 +00:00
Preston Van Loon
c5b3d3531c Added changelog for v7.1.1 (#16161)

**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

The v7.1.1 release is coming today.

**Which issues(s) does this PR fix?**


**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-05 22:26:13 +00:00
Aarsh Shah
cc4510bb77 p2p: batch publish data column sidecars (#16183)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

This PR takes @MarcoPolo's PR at
https://github.com/OffchainLabs/prysm/pull/16130 to completion with tests.

The description from his PR:

"""
a relatively small change to optimize network send order.

Without this, network writes tend to prioritize sending data for one column to
all peers before sending data for later columns (e.g. for two columns and 4
peers per column it would send A,A,A,A,B,B,B,B). With batch publishing we can
change the write order to round-robin across columns (e.g. A,B,A,B,A,B,A,B).

In cases where the process is sending at a rate over the network limit, this
approach allows at least some copies of the column to propagate through the
network. In early simulations with bandwidth limits of 50 Mbps for the
publisher, this improved dissemination by ~20-30%.
"""
See the issue for some more context.
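
A minimal sketch of the interleaved send order (hypothetical names, not the actual Prysm/libp2p batching API):

```go
package p2p

// publishRoundRobin sends one copy of each column per pass, producing the
// A,B,A,B,... order described above instead of A,A,A,A,B,B,B,B when several
// peers must receive every column.
func publishRoundRobin(columns [][]byte, peersPerColumn int, send func(columnIdx int, payload []byte)) {
	for pass := 0; pass < peersPerColumn; pass++ {
		for i, col := range columns {
			send(i, col)
		}
	}
}
```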

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16129

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Marco Munizaga <git@marcopolo.io>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-01-05 22:02:06 +00:00
Bastin
6fa0e9cf5f Logrus hooks for terminal vs log-file output (#16102)
## Review after #16059 

**What type of PR is this?**
Feature

**What does this PR do?**
This PR introduces logrus writer hooks into the logging of Prysm.
When log-format is text:
- set the default logrus output to `io.Discard`
- create a writer hook for the terminal, with formatting and coloring
enabled.
- create a separate writer hook for the log file (if enabled), without
coloring.

This immediately allows for formatted/colored terminal logs while keeping the
log file clean; see the sketch below.
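
A minimal, self-contained sketch of that setup (assumed hook name and structure, not Prysm's actual implementation): discard the default output and let two hooks carry the formatting, colored for the terminal and plain for the file.

```go
package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// outputHook forwards every entry to a single writer with its own formatter.
type outputHook struct {
	writer    io.Writer
	formatter logrus.Formatter
}

func (h *outputHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *outputHook) Fire(e *logrus.Entry) error {
	b, err := h.formatter.Format(e)
	if err != nil {
		return err
	}
	_, err = h.writer.Write(b)
	return err
}

func main() {
	// The default output writes nothing; the hooks own all formatting.
	logrus.SetOutput(io.Discard)

	// Terminal: colored text output.
	logrus.AddHook(&outputHook{
		writer:    os.Stderr,
		formatter: &logrus.TextFormatter{ForceColors: true},
	})

	// Log file: the same entries without color escape codes.
	f, err := os.OpenFile("beacon-chain.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	logrus.AddHook(&outputHook{
		writer:    f,
		formatter: &logrus.TextFormatter{DisableColors: true},
	})

	logrus.Info("formatted for the terminal, plain in the file")
}
```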
2026-01-05 15:20:12 +00:00
Bastin
6b5ba5ad01 Switch logging from using prefixes to the new package path format (#16059)
#### This PR sets the foundation for the new logging features.

---

The goal of this big PR is the following:
1. Adding a log.go file to every package:
[_commit_](54f6396d4c)
- Writing a bash script that adds the log.go file to every package that
imports logrus, except the excluded packages configured at the top of the
bash script.
- The log.go file creates a log variable and sets a field called
`package` to the full path of that package.
- I have tried to fix every error/problem that came from the mass generation
of this file (duplicate declarations, different prefix names, etc.).
- Some packages already had a log.go file from before, with some helper
functions in it as well. I've moved all of those to a `log_helpers.go` file
within each package.

2. Create a CI rule which verifies that:
[_commit_](b799c3a0ef)
- Every package that imports logrus also has a log.go file, except the
excluded packages.
- The `package` field of each log.go variable has the correct path (to
detect when we move a package or change its name).
- I pushed a commit with a manually changed log.go file to trigger the CI
check failure, and it worked.

3. Alter the logging system to read the prefix from this `package` field
for every log while outputting:
[_commit_](b0c7f1146c)
- Some packages have/want/need a different log prefix than their package
name (like `kv`). This can be solved by keeping a map of package paths
to prefix names somewhere, sketched below.
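
A sketch of that map-based approach (illustrative shape only; per the generated log.go comments in the diff below, the real replacements live in `runtime/logging/logrus-prefixed-formatter/prefix-replacement.go`):

```go
package formatter

import "strings"

// prefixReplacement maps full package paths (the value of the `package` log
// field) to the shorter prefixes some packages want; entries here are taken
// from the list at the end of this description.
var prefixReplacement = map[string]string{
	"beacon-chain/db/kv":           "db",
	"beacon-chain/core/transition": "state",
	"beacon-chain/state/stategen":  "state-gen",
}

// displayPrefix resolves the prefix shown in logs for a given package path:
// an explicit replacement if one exists, otherwise the text after the last
// slash in the path.
func displayPrefix(packagePath string) string {
	if p, ok := prefixReplacement[packagePath]; ok {
		return p
	}
	if i := strings.LastIndex(packagePath, "/"); i >= 0 {
		return packagePath[i+1:]
	}
	return packagePath
}
```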
    
    
---

**Some notes:**
- Please review everything carefully.
- I created the `prefixReplacement` map and populated the data that I
deemed necessary. Please check it and complain if something doesn't make
sense or is missing. I attached, at the bottom, the list of all the
packages that used a different name than their package name as their
prefix.
- I have chosen to mark some packages to be excluded from this whole
process. They will either not log anything, log without a prefix, or log
using their previously defined prefix. See the list of exclusions at the
bottom.
- I fixed all the tests that failed because of this change. These were
failing because they expected the old prefix to be in the generated logs.
I have changed those to expect the new `package` field instead. This might
not be a great solution. Ideally we might want to remove this from the
tests so they only test for relevant fields in the logs, but this is a
problem for another day.
- Please run the node with this config and mention if you see something
weird in the logs (use different verbosities).
- The CI workflow uses a script that basically runs `hack/gen-logs.sh` and
checks that the git diff is empty; that script is `hack/check-logs.sh`.
This means that if one runs this script locally, it will not actually
_check_ anything; rather, it will just regenerate the log.go files and fix
any mistakes. This might be confusing. Please suggest solutions if you
think it's a problem.

---

**A list of packages that used a different prefix than their package
names for their logs:**

- beacon-chain/cache/depositsnapshot/ package depositsnapshot, prefix
"cache"
- beacon-chain/core/transition/log.go — package transition, prefix
"state"
  - beacon-chain/db/kv/log.go — package kv, prefix "db"
- beacon-chain/db/slasherkv/log.go — package slasherkv, prefix
"slasherdb"
- beacon-chain/db/pruner/pruner.go — package pruner, prefix "db-pruner"
- beacon-chain/light-client/log.go — package light_client, prefix
"light-client"
- beacon-chain/operations/attestations/log.go — package attestations,
prefix "pool/attestations"
- beacon-chain/operations/slashings/log.go — package slashings, prefix
"pool/slashings"
  - beacon-chain/rpc/core/log.go — package core, prefix "rpc/core"
- beacon-chain/rpc/eth/beacon/log.go — package beacon, prefix
"rpc/beaconv1"
- beacon-chain/rpc/eth/validator/log.go — package validator, prefix
"beacon-api"
- beacon-chain/rpc/prysm/v1alpha1/beacon/log.go — package beacon, prefix
"rpc"
- beacon-chain/rpc/prysm/v1alpha1/validator/log.go — package validator,
prefix "rpc/validator"
- beacon-chain/state/stategen/log.go — package stategen, prefix
"state-gen"
- beacon-chain/sync/checkpoint/log.go — package checkpoint, prefix
"checkpoint-sync"
- beacon-chain/sync/initial-sync/log.go — package initialsync, prefix
"initial-sync"
  - cmd/prysmctl/p2p/log.go — package p2p, prefix "prysmctl-p2p"
  - config/features/log.go -- package features, prefix "flags"
  - io/file/log.go — package file, prefix "fileutil"
  - proto/prysm/v1alpha1/log.go — package eth, prefix "protobuf"
- validator/client/beacon-api/log.go — package beacon_api, prefix
"beacon-api"
  - validator/db/kv/log.go — package kv, prefix "db"
  - validator/db/filesystem/db.go — package filesystem, prefix "db"
- validator/keymanager/derived/log.go — package derived, prefix
"derived-keymanager"
- validator/keymanager/local/log.go — package local, prefix
"local-keymanager"
- validator/keymanager/remote-web3signer/log.go — package
remote_web3signer, prefix "remote-keymanager"
- validator/keymanager/remote-web3signer/internal/log.go — package
internal, prefix "remote-web3signer-internal"
- beacon-chain/forkchoice/doubly... prefix is
"forkchoice-doublylinkedtree"
  
  
  
**List of excluded directories (their subdirectories are also
excluded):**
  ```
  EXCLUDED_PATH_PREFIXES=(
      "testing"
      "validator/client/testutil"
      "beacon-chain/p2p/testing"
      "beacon-chain/rpc/eth/config"
      "beacon-chain/rpc/prysm/v1alpha1/debug"
      "tools"
      "runtime"
      "monitoring"
      "io"
      "cmd"
      ".well-known"
      "changelog"
      "hack"
      "specrefs"
      "third_party"
      "bazel-out"
      "bazel-bin"
      "bazel-prysm"
      "bazel-testlogs"
      "build"
      ".github"
      ".jj"
      ".idea"
      ".vscode"
)
```
2026-01-05 14:15:20 +00:00
340 changed files with 6674 additions and 2265 deletions

23
.github/workflows/check-logs.yml vendored Normal file
View File

@@ -0,0 +1,23 @@
name: Check log.go files
on: [ pull_request ]
jobs:
check-logs:
runs-on: ubuntu-4
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go 1.25.1
uses: actions/setup-go@v5
with:
go-version: '1.25.1'
- name: Install ripgrep
run: sudo apt-get install -y ripgrep
- name: Check log.go files
run: ./hack/check-logs.sh

3
.gitignore vendored
View File

@@ -44,3 +44,6 @@ tmp
# spectest coverage reports
report.txt
# execution client data
execution/

View File

@@ -4,6 +4,66 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.2](https://github.com/prysmaticlabs/prysm/compare/v7.1.1...v7.1.2) - 2026-01-07
Happy new year! This patch release is very small. The main improvement is better management of pending attestation aggregation via [PR 16153](https://github.com/OffchainLabs/prysm/pull/16153).
### Added
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16169)
### Changed
- the /eth/v2/beacon/pool/attestations and /eth/v1/beacon/pool/sync_committees now returns a 503 error if the node is still syncing, the rest api is also working in a similar process to gRPC broadcasting immediately now. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16152)
- `validateDataColumn`: Remove error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16157)
- Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16153)
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16151)
- Do not process slots and copy states for next epoch proposers after Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16168)
## [v7.1.1](https://github.com/prysmaticlabs/prysm/compare/v7.1.0...v7.1.1) - 2025-12-18
Release highlights:
- Fixed potential deadlock scenario in data column batch verification
- Improved processing and metrics for cells and proofs
We are aware of [an issue](https://github.com/OffchainLabs/prysm/issues/16160) where Prysm struggles to sync from an out of sync state. We will have another release before the end of the year to address this issue.
Our postmortem document from the December 4th mainnet issue has been published on our [documentation site](https://prysm.offchainlabs.com/docs/misc/mainnet-postmortems/)
### Added
- Track the dependent root of the latest finalized checkpoint in forkchoice. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16103)
- Proposal design document to implement graffiti. Currently it is empty by default and the idea is to have it of the form GE168dPR63af. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15983)
- Add support for detecting and logging per address reachability via libp2p AutoNAT v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16100)
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16134)
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
### Changed
- Optimise migratetocold by not doing brute force for loop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16101)
- e2e sync committee evaluator now skips the first slot after startup, we already skip the fork epoch for checks here, this skip only applies on startup, due to altair always from 0 and validators need to warm up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16145)
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
### Removed
- Unnecessary copy is removed from Eth1DataHasEnoughSupport. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16118)
### Fixed
- Incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16084)
- Fixed possible race when validating two attestations at the same time. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16105)
- Fix missing return after version header check in SubmitAttesterSlashingsV2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16126)
- Fix deadlock in data column gossip KZG batch verification when a caller times out preventing result delivery. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16141)
- Fixed replay state issue in rest api caused by attester and sync committee duties endpoints. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16136)
- Do not error when committee has been computed correctly but updating the cache failed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16142)
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16144)
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.

View File

@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0"
consensus_spec_version = "v1.7.0-alpha.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-54hTaUNF9nLg+hRr3oHoq0yjZpW3MNiiUUuCQu6Rajk=",
"minimal": "sha256-1JHIGg3gVMjvcGYRHR5cwdDgOvX47oR/MWp6gyAeZfA=",
"mainnet": "sha256-292h3W2Ffts0YExgDTyxYe9Os7R0bZIXuAaMO8P6kl4=",
"general": "sha256-b+rJOuVqq+Dy53quPcNYcQwPFoMU7Wp7tdUVe7n0g8w=",
"minimal": "sha256-qxRIxtjPxVsVCY90WsBJKhk0027XDSmhjnRvRN14V1c=",
"mainnet": "sha256-NsuOQG3LzeiEE1TrWuvQ6vu6BboHv7h7f/RTS0pWkCs=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-VzBgrEokvYSMIIXVnSA5XS9I3m9oxpvToQGxC1N5lzw=",
integrity = "sha256-hwNdUBgdBrkk6pWIpNYbzbwswUuOu6AMD2exN8uv+QQ=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"common.go",
"header.go",
"log.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/apiutil",
visibility = ["//visibility:public"],

View File

@@ -5,8 +5,6 @@ import (
"sort"
"strconv"
"strings"
log "github.com/sirupsen/logrus"
)
type mediaRange struct {

9
api/apiutil/log.go Normal file
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package apiutil
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/apiutil")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package beacon
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "beacon")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/client/beacon")

View File

@@ -6,6 +6,7 @@ go_library(
"bid.go",
"client.go",
"errors.go",
"log.go",
"types.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/client/builder",
@@ -63,6 +64,5 @@ go_test(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -25,7 +25,7 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
@@ -70,7 +70,7 @@ type requestLogger struct{}
func (*requestLogger) observe(r *http.Request) (e error) {
b := bytes.NewBuffer(nil)
if r.Body == nil {
log.WithFields(log.Fields{
log.WithFields(logrus.Fields{
"bodyBase64": "(nil value)",
"url": r.URL.String(),
}).Info("Builder http request")
@@ -87,7 +87,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
return err
}
r.Body = io.NopCloser(b)
log.WithFields(log.Fields{
log.WithFields(logrus.Fields{
"bodyBase64": string(body),
"url": r.URL.String(),
}).Info("Builder http request")

View File

@@ -23,7 +23,6 @@ import (
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
log "github.com/sirupsen/logrus"
)
type roundtrip func(*http.Request) (*http.Response, error)

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package builder
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/client/builder")

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"event_stream.go",
"log.go",
"utils.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/client/event",
@@ -23,8 +24,5 @@ go_test(
"utils_test.go",
],
embed = [":go_default_library"],
deps = [
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
deps = ["//testing/require:go_default_library"],
)

View File

@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/client"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
const (

View File

@@ -8,7 +8,6 @@ import (
"time"
"github.com/OffchainLabs/prysm/v7/testing/require"
log "github.com/sirupsen/logrus"
)
func TestNewEventStream(t *testing.T) {

9
api/client/event/log.go Normal file
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package event
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/client/event")

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"grpcutils.go",
"log.go",
"parameters.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/grpc",

View File

@@ -32,7 +32,7 @@ func LogRequests(
)
start := time.Now()
err := invoker(ctx, method, req, reply, cc, opts...)
logrus.WithField("backend", header["x-backend"]).
log.WithField("backend", header["x-backend"]).
WithField("method", method).WithField("duration", time.Since(start)).
Debug("gRPC request finished.")
return err
@@ -58,7 +58,7 @@ func LogStream(
grpc.Header(&header),
)
strm, err := streamer(ctx, sd, conn, method, opts...)
logrus.WithField("backend", header["x-backend"]).
log.WithField("backend", header["x-backend"]).
WithField("method", method).
Debug("gRPC stream started.")
return strm, err
@@ -71,7 +71,7 @@ func AppendHeaders(parent context.Context, headers []string) context.Context {
if h != "" {
keyValue := strings.Split(h, "=")
if len(keyValue) < 2 {
logrus.Warnf("Incorrect gRPC header flag format. Skipping %v", keyValue[0])
log.Warnf("Incorrect gRPC header flag format. Skipping %v", keyValue[0])
continue
}
parent = metadata.AppendToOutgoingContext(parent, keyValue[0], strings.Join(keyValue[1:], "=")) // nolint:fatcontext

9
api/grpc/log.go Normal file
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package grpc
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/grpc")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package httprest
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "httprest")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/server/httprest")

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"middleware.go",
"util.go",
],
@@ -27,6 +28,5 @@ go_test(
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package middleware
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/server/middleware")

View File

@@ -9,7 +9,6 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/apiutil"
"github.com/rs/cors"
log "github.com/sirupsen/logrus"
)
type Middleware func(http.Handler) http.Handler

View File

@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/testing/require"
log "github.com/sirupsen/logrus"
)
// frozenHeaderRecorder allows asserting that response headers were not modified

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"debounce.go",
"every.go",
"log.go",
"multilock.go",
"scatter.go",
],

View File

@@ -6,8 +6,6 @@ import (
"reflect"
"runtime"
"time"
log "github.com/sirupsen/logrus"
)
// RunEvery runs the provided command periodically.

9
async/log.go Normal file
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package async
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "async")

View File

@@ -14,6 +14,7 @@ go_library(
"head_sync_committee_info.go",
"init_sync_process_block.go",
"log.go",
"log_helpers.go",
"merge_ascii_art.go",
"metrics.go",
"options.go",

View File

@@ -1,164 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package blockchain
import (
"encoding/hex"
"fmt"
"time"
import "github.com/sirupsen/logrus"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/config/params"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
prysmTime "github.com/OffchainLabs/prysm/v7/time"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "blockchain")
// logs state transition related data every slot.
func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log := log.WithField("slot", b.Slot())
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
if len(b.Body().ProposerSlashings()) > 0 {
log = log.WithField("proposerSlashings", len(b.Body().ProposerSlashings()))
}
if len(b.Body().VoluntaryExits()) > 0 {
log = log.WithField("voluntaryExits", len(b.Body().VoluntaryExits()))
}
if b.Version() >= version.Altair {
agg, err := b.Body().SyncAggregate()
if err != nil {
return err
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
if b.Version() >= version.Electra {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
} else {
if len(eReqs.Deposits) > 0 {
log = log.WithField("depositRequestCount", len(eReqs.Deposits))
}
if len(eReqs.Consolidations) > 0 {
log = log.WithField("consolidationRequestCount", len(eReqs.Consolidations))
}
if len(eReqs.Withdrawals) > 0 {
log = log.WithField("withdrawalRequestCount", len(eReqs.Withdrawals))
}
}
}
log.Info("Finished applying state transition")
return nil
}
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
}
level := log.Logger.GetLevel()
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
}
log.WithFields(lf).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
}
return nil
}
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
return nil
}
payload, err := block.Body().Execution()
if err != nil {
return err
}
if payload.GasLimit() == 0 {
return errors.New("gas limit should not be 0")
}
gasUtilized := float64(payload.GasUsed()) / float64(payload.GasLimit())
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber(),
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}
if block.Version() >= version.Capella {
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
fields["withdrawals"] = len(withdrawals)
changes, err := block.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
}
log.WithFields(fields).Debug("Synced new payload")
return nil
}
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/blockchain")

View File

@@ -0,0 +1,162 @@
package blockchain
import (
"encoding/hex"
"fmt"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/config/params"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
prysmTime "github.com/OffchainLabs/prysm/v7/time"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// logs state transition related data every slot.
func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log := log.WithField("slot", b.Slot())
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
if len(b.Body().ProposerSlashings()) > 0 {
log = log.WithField("proposerSlashings", len(b.Body().ProposerSlashings()))
}
if len(b.Body().VoluntaryExits()) > 0 {
log = log.WithField("voluntaryExits", len(b.Body().VoluntaryExits()))
}
if b.Version() >= version.Altair {
agg, err := b.Body().SyncAggregate()
if err != nil {
return err
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
if b.Version() >= version.Electra {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
} else {
if len(eReqs.Deposits) > 0 {
log = log.WithField("depositRequestCount", len(eReqs.Deposits))
}
if len(eReqs.Consolidations) > 0 {
log = log.WithField("consolidationRequestCount", len(eReqs.Consolidations))
}
if len(eReqs.Withdrawals) > 0 {
log = log.WithField("withdrawalRequestCount", len(eReqs.Withdrawals))
}
}
}
log.Info("Finished applying state transition")
return nil
}
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
}
level := log.Logger.GetLevel()
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
}
log.WithFields(lf).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
}
return nil
}
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
return nil
}
payload, err := block.Body().Execution()
if err != nil {
return err
}
if payload.GasLimit() == 0 {
return errors.New("gas limit should not be 0")
}
gasUtilized := float64(payload.GasUsed()) / float64(payload.GasLimit())
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber(),
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}
if block.Version() >= version.Capella {
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
fields["withdrawals"] = len(withdrawals)
changes, err := block.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
}
log.WithFields(fields).Debug("Synced new payload")
return nil
}

View File

@@ -34,7 +34,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" prefix=blockchain slot=0",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain slot=0",
},
{name: "has attestation",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -42,7 +42,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 package=beacon-chain/blockchain slot=0",
},
{name: "has deposit",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -53,7 +53,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 package=beacon-chain/blockchain slot=0",
},
{name: "has attester slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -62,7 +62,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attesterSlashings=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attesterSlashings=1 package=beacon-chain/blockchain slot=0",
},
{name: "has proposer slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -71,7 +71,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" prefix=blockchain proposerSlashings=1 slot=0",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain proposerSlashings=1 slot=0",
},
{name: "has exit",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -80,7 +80,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" prefix=blockchain slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain slot=0 voluntaryExits=1",
},
{name: "has everything",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -93,11 +93,11 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 package=beacon-chain/blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
},
{name: "has payload",
b: func() interfaces.ReadOnlyBeaconBlock { return wrappedPayloadBlk },
want: "\"Finished applying state transition\" payloadHash=0x010203 prefix=blockchain slot=0 syncBitsCount=0 txCount=2",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain payloadHash=0x010203 slot=0 syncBitsCount=0 txCount=2",
},
}
for _, tt := range tests {

View File

@@ -17,6 +17,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpbv1 "github.com/OffchainLabs/prysm/v7/proto/eth/v1"
@@ -130,12 +131,10 @@ func TestService_ReceiveBlock(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
// Hacky sleep, should use a better way to be able to resolve the race
// between event being sent out and processed.
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
},
},
{
@@ -222,10 +221,10 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
})
wg.Wait()
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
// Verify fork choice has processed the block. (Genesis block and the new block)
assert.Equal(t, 2, s.cfg.ForkChoiceStore.NodeCount())
}
@@ -265,10 +264,10 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
},
},
}
@@ -512,8 +511,9 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
@@ -552,8 +552,9 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
@@ -596,13 +597,13 @@ func TestProcessLightClientBootstrap(t *testing.T) {
s.executePostFinalizationTasks(s.ctx, l.AttestedState)
// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)
// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
require.NoError(t, err)
require.NotNil(t, b)
// Wait for the light client bootstrap to be saved (runs in goroutine)
var b interfaces.LightClientBootstrap
require.Eventually(t, func() bool {
var err error
b, err = s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
return err == nil && b != nil
}, 5*time.Second, 50*time.Millisecond, "Light client bootstrap was not saved within timeout")
btst, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.FinalizedState.Slot(), l.FinalizedState, l.FinalizedBlock)
require.NoError(t, err)
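The time.Sleep removals in this file all follow the same polling pattern. A minimal, self-contained sketch of the idea (hypothetical test and counter, not Prysm types; it uses testify's require.Eventually, which has the same signature as the require.Eventually calls above):
package example
import (
	"sync/atomic"
	"testing"
	"time"
	"github.com/stretchr/testify/require"
)
func TestEventuallyPattern(t *testing.T) {
	var events atomic.Int64
	// Simulates an event published from another goroutine after a short delay.
	go func() {
		time.Sleep(20 * time.Millisecond)
		events.Add(1)
	}()
	// Poll every 10ms for up to 2s instead of sleeping a fixed amount,
	// so the test is neither flaky on slow machines nor slow on fast ones.
	require.Eventually(t, func() bool {
		return events.Load() >= 1
	}, 2*time.Second, 10*time.Millisecond, "expected at least 1 event")
}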

View File

@@ -3,7 +3,10 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["mock.go"],
srcs = [
"log.go",
"mock.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package testing
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/blockchain/testing")

View File

@@ -30,7 +30,6 @@ import (
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var ErrNilState = errors.New("nil state")
@@ -267,7 +266,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
if err := s.DB.SaveBlock(ctx, block); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
log.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = block
@@ -296,7 +295,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
if err := s.DB.SaveBlock(ctx, b); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, b.Block().Slot())
log.Infof("Saved block with root: %#x at slot %d", signingRoot, b.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = b
@@ -328,7 +327,7 @@ func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOn
if err := s.DB.SaveBlock(ctx, block); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
log.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = block
@@ -585,11 +584,11 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, slot, bytesutil.ToBytes32(s.Root), [32]byte{}, [32]byte{}, ojc, ojc)
if err != nil {
logrus.WithError(err).Error("Could not update head")
log.WithError(err).Error("Could not update head")
}
err = s.ForkChoiceStore.InsertNode(ctx, st, root)
if err != nil {
logrus.WithError(err).Error("Could not insert node to forkchoice")
log.WithError(err).Error("Could not insert node to forkchoice")
}
}

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package builder
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "builder")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/builder")

View File

@@ -16,6 +16,7 @@ go_library(
"doc.go",
"error.go",
"interfaces.go",
"log.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep

View File

@@ -9,7 +9,6 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1/attestation"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
type attGroup struct {

View File

@@ -17,7 +17,6 @@ import (
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
)
const (

View File

@@ -8,6 +8,7 @@ go_library(
"deposit_pruner.go",
"deposit_tree.go",
"deposit_tree_snapshot.go",
"log.go",
"merkle_tree.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/cache/depositsnapshot",

View File

@@ -20,7 +20,6 @@ var (
Name: "beacondb_all_deposits_eip4881",
Help: "The number of total deposits in memory",
})
log = logrus.WithField("prefix", "cache")
)
// InsertDeposit into the database. If deposit or block number are nil

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package depositsnapshot
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/cache/depositsnapshot")

beacon-chain/cache/log.go
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package cache
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/cache")

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
"k8s.io/client-go/tools/cache"
)

View File

@@ -9,7 +9,6 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
const (
@@ -67,7 +66,7 @@ func (t *TrackedValidatorsCache) Validator(index primitives.ValidatorIndex) (Tra
val, ok := item.(TrackedValidator)
if !ok {
logrus.Errorf("Failed to cast tracked validator from cache, got unexpected item type %T", item)
log.Errorf("Failed to cast tracked validator from cache, got unexpected item type %T", item)
return TrackedValidator{}, false
}
@@ -113,7 +112,7 @@ func (t *TrackedValidatorsCache) Indices() map[primitives.ValidatorIndex]bool {
for cacheKey := range items {
index, err := fromCacheKey(cacheKey)
if err != nil {
logrus.WithError(err).Error("Failed to get validator index from cache key")
log.WithError(err).Error("Failed to get validator index from cache key")
continue
}

View File

@@ -8,6 +8,7 @@ go_library(
"deposit.go",
"epoch_precompute.go",
"epoch_spec.go",
"log.go",
"reward.go",
"sync_committee.go",
"transition.go",

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
log "github.com/sirupsen/logrus"
)
// ProcessSyncCommitteeUpdates processes sync client committee updates for the beacon state.

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package altair
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/altair")

View File

@@ -26,6 +26,7 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package blocks
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "blocks")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/blocks")

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -212,7 +213,12 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
if features.Get().LowValcountSweep {
bound := min(uint64(st.NumValidators()), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex += primitives.ValidatorIndex(bound)
} else {
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
}
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
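For intuition about the flag-gated branch above: with --low-valcount-sweep enabled, the cursor advances by min(validator_count, MaxValidatorsPerWithdrawalsSweep) before the modulo, which only changes behavior when the validator set is smaller than the sweep constant. A standalone sketch of just that arithmetic (illustrative numbers; 16384 is the mainnet sweep constant):
package main
import "fmt"
func main() {
	const maxSweep = uint64(16384) // MaxValidatorsPerWithdrawalsSweep (mainnet value)
	numValidators := uint64(1000)
	next := uint64(900) // next_withdrawal_validator_index before the update
	withFlag := (next + min(numValidators, maxSweep)) % numValidators // (900+1000)%1000 = 900
	without := (next + maxSweep) % numValidators                      // (900+16384)%1000 = 284
	fmt.Println(withFlag, without)
}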

View File

@@ -9,9 +9,9 @@ go_library(
"deposits.go",
"effective_balance_updates.go",
"error.go",
"log.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
"upgrade.go",
"validator.go",
"withdrawals.go",
@@ -61,7 +61,6 @@ go_test(
"error_test.go",
"export_test.go",
"registry_updates_test.go",
"transition_no_verify_sig_test.go",
"transition_test.go",
"upgrade_test.go",
"validator_test.go",

View File

@@ -17,7 +17,6 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/common/math"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
// ProcessPendingConsolidations implements the spec definition below. This method makes mutating

View File

@@ -18,7 +18,6 @@ import (
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
// ProcessDeposits is one of the operations performed on each processed

View File

@@ -6,6 +6,11 @@ type execReqErr struct {
error
}
// NewExecReqError creates a new execReqErr.
func NewExecReqError(msg string) error {
return execReqErr{errors.New(msg)}
}
// IsExecutionRequestError returns true if the error has `execReqErr`.
func IsExecutionRequestError(e error) bool {
if e == nil {

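The exported constructor above lets code outside the electra package produce execution-request errors without exposing the concrete execReqErr type. A hedged usage sketch (hypothetical caller; only the two helpers shown in this file are assumed):
package example
import (
	"fmt"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
)
func classify() {
	// NewExecReqError and IsExecutionRequestError are the helpers shown above.
	err := electra.NewExecReqError("nil deposit request")
	if electra.IsExecutionRequestError(err) {
		// Branch on execution-request failures specifically, rather than
		// treating them as a generic block-processing error.
		fmt.Println("execution-request failure:", err)
	}
}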
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package electra
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/electra")

View File

@@ -1,60 +0,0 @@
package electra_test
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func TestProcessOperationsWithNilRequests(t *testing.T) {
tests := []struct {
name string
modifyBlk func(blockElectra *ethpb.SignedBeaconBlockElectra)
errMsg string
}{
{
name: "Nil deposit request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Deposits = []*enginev1.DepositRequest{nil}
},
errMsg: "nil deposit request",
},
{
name: "Nil withdrawal request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Withdrawals = []*enginev1.WithdrawalRequest{nil}
},
errMsg: "nil withdrawal request",
},
{
name: "Nil consolidation request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Consolidations = []*enginev1.ConsolidationRequest{nil}
},
errMsg: "nil consolidation request",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
st, ks := util.DeterministicGenesisStateElectra(t, 128)
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tc.modifyBlk(blk)
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
_, err = electra.ProcessOperations(t.Context(), st, b.Block())
require.ErrorContains(t, tc.errMsg, err)
})
}
}

View File

@@ -17,7 +17,6 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
// ProcessWithdrawalRequests processes the validator withdrawals from the provided execution payload

View File

@@ -0,0 +1,43 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["payload.go"],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["payload_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -0,0 +1,341 @@
package gloas
import (
"bytes"
"context"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// ProcessExecutionPayload processes the signed execution payload envelope for the Gloas fork.
// Spec v1.7.0-alpha.0 (pseudocode):
// def process_execution_payload(
//
// state: BeaconState,
// signed_envelope: SignedExecutionPayloadEnvelope,
// execution_engine: ExecutionEngine,
// verify: bool = True,
//
// ) -> None:
//
// envelope = signed_envelope.message
// payload = envelope.payload
//
// if verify:
// assert verify_execution_payload_envelope_signature(state, signed_envelope)
//
// previous_state_root = hash_tree_root(state)
// if state.latest_block_header.state_root == Root():
// state.latest_block_header.state_root = previous_state_root
//
// assert envelope.beacon_block_root == hash_tree_root(state.latest_block_header)
// assert envelope.slot == state.slot
//
// committed_bid = state.latest_execution_payload_bid
// assert envelope.builder_index == committed_bid.builder_index
// assert committed_bid.blob_kzg_commitments_root == hash_tree_root(envelope.blob_kzg_commitments)
// assert committed_bid.prev_randao == payload.prev_randao
//
// assert hash_tree_root(payload.withdrawals) == hash_tree_root(state.payload_expected_withdrawals)
//
// assert committed_bid.gas_limit == payload.gas_limit
// assert committed_bid.block_hash == payload.block_hash
// assert payload.parent_hash == state.latest_block_hash
// assert payload.timestamp == compute_time_at_slot(state, state.slot)
// assert (
// len(envelope.blob_kzg_commitments)
// <= get_blob_parameters(get_current_epoch(state)).max_blobs_per_block
// )
// versioned_hashes = [
// kzg_commitment_to_versioned_hash(commitment) for commitment in envelope.blob_kzg_commitments
// ]
// requests = envelope.execution_requests
// assert execution_engine.verify_and_notify_new_payload(
// NewPayloadRequest(
// execution_payload=payload,
// versioned_hashes=versioned_hashes,
// parent_beacon_block_root=state.latest_block_header.parent_root,
// execution_requests=requests,
// )
// )
//
// for op in requests.deposits: process_deposit_request(state, op)
// for op in requests.withdrawals: process_withdrawal_request(state, op)
// for op in requests.consolidations: process_consolidation_request(state, op)
//
// payment = state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH]
// amount = payment.withdrawal.amount
// if amount > 0:
// state.builder_pending_withdrawals.append(payment.withdrawal)
// state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH] = (
// BuilderPendingPayment()
// )
//
// state.execution_payload_availability[state.slot % SLOTS_PER_HISTORICAL_ROOT] = 0b1
// state.latest_block_hash = payload.block_hash
//
// if verify:
// assert envelope.state_root == hash_tree_root(state)
func ProcessExecutionPayload(
ctx context.Context,
st state.BeaconState,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) error {
// Verify the signature on the signed execution payload envelope
if err := verifyExecutionPayloadEnvelopeSignature(st, signedEnvelope); err != nil {
return errors.Wrap(err, "signature verification failed")
}
envelope, err := signedEnvelope.Envelope()
if err != nil {
return errors.Wrap(err, "could not get envelope from signed envelope")
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload from envelope")
}
latestHeader := st.LatestBlockHeader()
if len(latestHeader.StateRoot) == 0 || bytes.Equal(latestHeader.StateRoot, make([]byte, 32)) {
previousStateRoot, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute state root")
}
latestHeader.StateRoot = previousStateRoot[:]
if err := st.SetLatestBlockHeader(latestHeader); err != nil {
return errors.Wrap(err, "could not set latest block header")
}
}
blockHeaderRoot, err := latestHeader.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute block header root")
}
beaconBlockRoot := envelope.BeaconBlockRoot()
if !bytes.Equal(beaconBlockRoot[:], blockHeaderRoot[:]) {
return errors.Errorf("envelope beacon block root does not match state latest block header root: envelope=%#x, header=%#x", beaconBlockRoot, blockHeaderRoot)
}
if envelope.Slot() != st.Slot() {
return errors.Errorf("envelope slot does not match state slot: envelope=%d, state=%d", envelope.Slot(), st.Slot())
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get latest execution payload bid")
}
if latestBid == nil {
return errors.New("latest execution payload bid is nil")
}
if envelope.BuilderIndex() != latestBid.BuilderIndex() {
return errors.Errorf("envelope builder index does not match committed bid builder index: envelope=%d, bid=%d", envelope.BuilderIndex(), latestBid.BuilderIndex())
}
envelopeBlobCommitments := envelope.BlobKzgCommitments()
envelopeBlobRoot, err := ssz.KzgCommitmentsRoot(envelopeBlobCommitments)
if err != nil {
return errors.Wrap(err, "could not compute envelope blob KZG commitments root")
}
committedBlobRoot := latestBid.BlobKzgCommitmentsRoot()
if !bytes.Equal(committedBlobRoot[:], envelopeBlobRoot[:]) {
return errors.Errorf("committed bid blob KZG commitments root does not match envelope: bid=%#x, envelope=%#x", committedBlobRoot, envelopeBlobRoot)
}
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals from payload")
}
cfg := params.BeaconConfig()
withdrawalsRoot, err := ssz.WithdrawalSliceRoot(withdrawals, cfg.MaxWithdrawalsPerPayload)
if err != nil {
return errors.Wrap(err, "could not compute withdrawals root")
}
expectedWithdrawals, err := st.PayloadExpectedWithdrawals()
if err != nil {
return errors.Wrap(err, "could not get expected withdrawals")
}
expectedRoot, err := ssz.WithdrawalSliceRoot(expectedWithdrawals, cfg.MaxWithdrawalsPerPayload)
if err != nil {
return errors.Wrap(err, "could not compute expected withdrawals root")
}
if !bytes.Equal(withdrawalsRoot[:], expectedRoot[:]) {
return errors.Errorf("payload withdrawals root does not match expected withdrawals: payload=%#x, expected=%#x", withdrawalsRoot, expectedRoot)
}
if latestBid.GasLimit() != payload.GasLimit() {
return errors.Errorf("committed bid gas limit does not match payload gas limit: bid=%d, payload=%d", latestBid.GasLimit(), payload.GasLimit())
}
latestBidPrevRandao := latestBid.PrevRandao()
if !bytes.Equal(payload.PrevRandao(), latestBidPrevRandao[:]) {
return errors.Errorf("payload prev randao does not match committed bid prev randao: payload=%#x, bid=%#x", payload.PrevRandao(), latestBidPrevRandao)
}
bidBlockHash := latestBid.BlockHash()
payloadBlockHash := payload.BlockHash()
if !bytes.Equal(bidBlockHash[:], payloadBlockHash) {
return errors.Errorf("committed bid block hash does not match payload block hash: bid=%#x, payload=%#x", bidBlockHash, payloadBlockHash)
}
latestBlockHash, err := st.LatestBlockHash()
if err != nil {
return errors.Wrap(err, "could not get latest block hash")
}
if !bytes.Equal(payload.ParentHash(), latestBlockHash[:]) {
return errors.Errorf("payload parent hash does not match state latest block hash: payload=%#x, state=%#x", payload.ParentHash(), latestBlockHash)
}
t, err := slots.StartTime(st.GenesisTime(), st.Slot())
if err != nil {
return errors.Wrap(err, "could not compute timestamp")
}
if payload.Timestamp() != uint64(t.Unix()) {
return errors.Errorf("payload timestamp does not match expected timestamp: payload=%d, expected=%d", payload.Timestamp(), uint64(t.Unix()))
}
maxBlobsPerBlock := cfg.MaxBlobsPerBlock(envelope.Slot())
if len(envelopeBlobCommitments) > maxBlobsPerBlock {
return errors.Errorf("too many blob KZG commitments: got=%d, max=%d", len(envelopeBlobCommitments), maxBlobsPerBlock)
}
if err := processExecutionRequests(ctx, st, envelope.ExecutionRequests()); err != nil {
return errors.Wrap(err, "could not process execution requests")
}
if err := queueBuilderPayment(ctx, st); err != nil {
return errors.Wrap(err, "could not queue builder payment")
}
if err := st.SetExecutionPayloadAvailability(st.Slot(), true); err != nil {
return errors.Wrap(err, "could not set execution payload availability")
}
if err := st.SetLatestBlockHash([32]byte(payload.BlockHash())); err != nil {
return errors.Wrap(err, "could not set latest block hash")
}
r, err := st.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get hash tree root")
}
if r != envelope.StateRoot() {
return fmt.Errorf("state root mismatch: expected %#x, got %#x", envelope.StateRoot(), r)
}
return nil
}
// processExecutionRequests processes deposits, withdrawals, and consolidations from execution requests.
// Spec v1.7.0-alpha.0 (pseudocode):
// for op in requests.deposits: process_deposit_request(state, op)
// for op in requests.withdrawals: process_withdrawal_request(state, op)
// for op in requests.consolidations: process_consolidation_request(state, op)
func processExecutionRequests(ctx context.Context, st state.BeaconState, requests *enginev1.ExecutionRequests) error {
var err error
st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return errors.Wrap(err, "could not process deposit requests")
}
st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process withdrawal requests")
}
err = electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations)
if err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
return nil
}
// queueBuilderPayment implements the builder payment queuing logic for Gloas.
// Spec v1.7.0-alpha.0 (pseudocode):
// payment = state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH]
// amount = payment.withdrawal.amount
// if amount > 0:
//
// state.builder_pending_withdrawals.append(payment.withdrawal)
//
// state.builder_pending_payments[SLOTS_PER_EPOCH + state.slot % SLOTS_PER_EPOCH] = BuilderPendingPayment()
func queueBuilderPayment(ctx context.Context, st state.BeaconState) error {
slot := st.Slot()
payment, err := st.BuilderPendingPayment(slot)
if err != nil {
return errors.Wrap(err, "could not get builder pending payment")
}
if payment.Withdrawal != nil && payment.Withdrawal.Amount > 0 {
if err := st.AppendBuilderPendingWithdrawal(payment.Withdrawal); err != nil {
return errors.Wrap(err, "could not append builder pending withdrawal")
}
}
reset := &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
if err := st.SetBuilderPendingPayment(slot, reset); err != nil {
return errors.Wrap(err, "could not set builder pending payment")
}
return nil
}
// verifyExecutionPayloadEnvelopeSignature verifies the BLS signature on a signed execution payload envelope.
// Spec v1.7.0-alpha.0 (pseudocode):
// if verify: assert verify_execution_payload_envelope_signature(state, signed_envelope)
func verifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope) error {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return fmt.Errorf("failed to get envelope: %w", err)
}
builderIdx := envelope.BuilderIndex()
builderPubkey := st.PubkeyAtIndex(primitives.ValidatorIndex(builderIdx))
publicKey, err := bls.PublicKeyFromBytes(builderPubkey[:])
if err != nil {
return fmt.Errorf("invalid builder public key: %w", err)
}
signatureBytes := signedEnvelope.Signature()
signature, err := bls.SignatureFromBytes(signatureBytes[:])
if err != nil {
return fmt.Errorf("invalid signature format: %w", err)
}
currentEpoch := slots.ToEpoch(envelope.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return fmt.Errorf("failed to compute signing domain: %w", err)
}
signingRoot, err := signedEnvelope.SigningRoot(domain)
if err != nil {
return fmt.Errorf("failed to compute signing root: %w", err)
}
if !signature.Verify(publicKey, signingRoot[:]) {
return fmt.Errorf("signature verification failed: %w", signing.ErrSigFailedToVerify)
}
return nil
}
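queueBuilderPayment above leaves the index arithmetic from the quoted pseudocode, state.builder_pending_payments[SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH], to the BuilderPendingPayment state accessor, whose internals are not shown in this diff. A worked example of that index (mainnet SLOTS_PER_EPOCH = 32, the fixture's slot = 5): current-epoch slots map into the second half of the 2*SLOTS_PER_EPOCH buffer, leaving the first half for the previous epoch.
package main
import "fmt"
func main() {
	const slotsPerEpoch = uint64(32) // mainnet SLOTS_PER_EPOCH
	slot := uint64(5)
	// builder_pending_payments has 2*SLOTS_PER_EPOCH entries; the current
	// slot's payment sits at SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH.
	idx := slotsPerEpoch + slot%slotsPerEpoch
	fmt.Println(idx) // 37
}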

View File

@@ -0,0 +1,269 @@
package gloas
import (
"bytes"
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
type payloadFixture struct {
state state.BeaconState
signed interfaces.ROSignedExecutionPayloadEnvelope
signedProto *ethpb.SignedExecutionPayloadEnvelope
envelope *ethpb.ExecutionPayloadEnvelope
payload *enginev1.ExecutionPayloadDeneb
slot primitives.Slot
}
func buildPayloadFixture(t *testing.T, mutate func(payload *enginev1.ExecutionPayloadDeneb, bid *ethpb.ExecutionPayloadBid, envelope *ethpb.ExecutionPayloadEnvelope)) payloadFixture {
t.Helper()
cfg := params.BeaconConfig()
slot := primitives.Slot(5)
builderIdx := primitives.BuilderIndex(0)
sk, err := bls.RandKey()
require.NoError(t, err)
pk := sk.PublicKey().Marshal()
randao := bytes.Repeat([]byte{0xAA}, 32)
parentHash := bytes.Repeat([]byte{0xBB}, 32)
blockHash := bytes.Repeat([]byte{0xCC}, 32)
withdrawals := []*enginev1.Withdrawal{
{Index: 0, ValidatorIndex: 1, Address: bytes.Repeat([]byte{0x01}, 20), Amount: 0},
}
blobCommitments := [][]byte{}
blobRoot, err := ssz.KzgCommitmentsRoot(blobCommitments)
require.NoError(t, err)
payload := &enginev1.ExecutionPayloadDeneb{
ParentHash: parentHash,
FeeRecipient: bytes.Repeat([]byte{0x01}, 20),
StateRoot: bytes.Repeat([]byte{0x02}, 32),
ReceiptsRoot: bytes.Repeat([]byte{0x03}, 32),
LogsBloom: bytes.Repeat([]byte{0x04}, 256),
PrevRandao: randao,
BlockNumber: 1,
GasLimit: 1,
GasUsed: 0,
Timestamp: 100,
ExtraData: []byte{},
BaseFeePerGas: bytes.Repeat([]byte{0x05}, 32),
BlockHash: blockHash,
Transactions: [][]byte{},
Withdrawals: withdrawals,
BlobGasUsed: 0,
ExcessBlobGas: 0,
}
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentHash,
ParentBlockRoot: bytes.Repeat([]byte{0xDD}, 32),
BlockHash: blockHash,
PrevRandao: randao,
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: blobRoot[:],
FeeRecipient: bytes.Repeat([]byte{0xEE}, 20),
}
header := &ethpb.BeaconBlockHeader{
Slot: slot,
ParentRoot: bytes.Repeat([]byte{0x11}, 32),
StateRoot: bytes.Repeat([]byte{0x22}, 32),
BodyRoot: bytes.Repeat([]byte{0x33}, 32),
}
headerRoot, err := header.HashTreeRoot()
require.NoError(t, err)
envelope := &ethpb.ExecutionPayloadEnvelope{
Slot: slot,
BuilderIndex: builderIdx,
BeaconBlockRoot: headerRoot[:],
Payload: payload,
BlobKzgCommitments: blobCommitments,
ExecutionRequests: &enginev1.ExecutionRequests{},
}
if mutate != nil {
mutate(payload, bid, envelope)
}
genesisRoot := bytes.Repeat([]byte{0xAB}, 32)
blockRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
stateRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
for i := range blockRoots {
blockRoots[i] = bytes.Repeat([]byte{0x44}, 32)
stateRoots[i] = bytes.Repeat([]byte{0x55}, 32)
}
randaoMixes := make([][]byte, cfg.EpochsPerHistoricalVector)
for i := range randaoMixes {
randaoMixes[i] = randao
}
withdrawalCreds := make([]byte, 32)
withdrawalCreds[0] = cfg.ETH1AddressWithdrawalPrefixByte
eth1Data := &ethpb.Eth1Data{
DepositRoot: bytes.Repeat([]byte{0x66}, 32),
DepositCount: 0,
BlockHash: bytes.Repeat([]byte{0x77}, 32),
}
vals := []*ethpb.Validator{
{
PublicKey: pk,
WithdrawalCredentials: withdrawalCreds,
EffectiveBalance: cfg.MinActivationBalance + 1_000,
},
}
balances := []uint64{cfg.MinActivationBalance + 1_000}
payments := make([]*ethpb.BuilderPendingPayment, cfg.SlotsPerEpoch*2)
for i := range payments {
payments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
}
executionPayloadAvailability := make([]byte, cfg.SlotsPerHistoricalRoot/8)
genesisTime := uint64(0)
slotSeconds := cfg.SecondsPerSlot * uint64(slot)
if payload.Timestamp > slotSeconds {
genesisTime = payload.Timestamp - slotSeconds
}
stProto := &ethpb.BeaconStateGloas{
Slot: slot,
GenesisTime: genesisTime,
GenesisValidatorsRoot: genesisRoot,
Fork: &ethpb.Fork{
CurrentVersion: bytes.Repeat([]byte{0x01}, 4),
PreviousVersion: bytes.Repeat([]byte{0x01}, 4),
Epoch: 0,
},
LatestBlockHeader: header,
BlockRoots: blockRoots,
StateRoots: stateRoots,
RandaoMixes: randaoMixes,
Eth1Data: eth1Data,
Validators: vals,
Balances: balances,
LatestBlockHash: payload.ParentHash,
LatestExecutionPayloadBid: bid,
BuilderPendingPayments: payments,
ExecutionPayloadAvailability: executionPayloadAvailability,
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{},
PayloadExpectedWithdrawals: payload.Withdrawals,
}
st, err := state_native.InitializeFromProtoGloas(stProto)
require.NoError(t, err)
expected := st.Copy()
ctx := context.Background()
require.NoError(t, processExecutionRequests(ctx, expected, envelope.ExecutionRequests))
require.NoError(t, queueBuilderPayment(ctx, expected))
require.NoError(t, expected.SetExecutionPayloadAvailability(slot, true))
var blockHashArr [32]byte
copy(blockHashArr[:], payload.BlockHash)
require.NoError(t, expected.SetLatestBlockHash(blockHashArr))
expectedRoot, err := expected.HashTreeRoot(ctx)
require.NoError(t, err)
envelope.StateRoot = expectedRoot[:]
epoch := slots.ToEpoch(slot)
domain, err := signing.Domain(st.Fork(), epoch, cfg.DomainBeaconBuilder, st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(envelope, domain)
require.NoError(t, err)
signature := sk.Sign(signingRoot[:]).Marshal()
signedProto := &ethpb.SignedExecutionPayloadEnvelope{
Message: envelope,
Signature: signature,
}
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
return payloadFixture{
state: st,
signed: signed,
signedProto: signedProto,
envelope: envelope,
payload: payload,
slot: slot,
}
}
func TestProcessExecutionPayload_Success(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
require.NoError(t, ProcessExecutionPayload(t.Context(), fixture.state, fixture.signed))
latestHash, err := fixture.state.LatestBlockHash()
require.NoError(t, err)
var expectedHash [32]byte
copy(expectedHash[:], fixture.payload.BlockHash)
require.Equal(t, expectedHash, latestHash)
payment, err := fixture.state.BuilderPendingPayment(fixture.slot)
require.NoError(t, err)
require.NotNil(t, payment)
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestProcessExecutionPayload_PrevRandaoMismatch(t *testing.T) {
fixture := buildPayloadFixture(t, func(_ *enginev1.ExecutionPayloadDeneb, bid *ethpb.ExecutionPayloadBid, _ *ethpb.ExecutionPayloadEnvelope) {
bid.PrevRandao = bytes.Repeat([]byte{0xFF}, 32)
})
err := ProcessExecutionPayload(t.Context(), fixture.state, fixture.signed)
require.ErrorContains(t, "prev randao", err)
}
func TestQueueBuilderPayment_ZeroAmountClearsSlot(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
require.NoError(t, queueBuilderPayment(t.Context(), fixture.state))
payment, err := fixture.state.BuilderPendingPayment(fixture.slot)
require.NoError(t, err)
require.NotNil(t, payment)
require.Equal(t, primitives.Gwei(0), payment.Withdrawal.Amount)
}
func TestVerifyExecutionPayloadEnvelopeSignature_InvalidSig(t *testing.T) {
fixture := buildPayloadFixture(t, nil)
signedProto := &ethpb.SignedExecutionPayloadEnvelope{
Message: fixture.signedProto.Message,
Signature: bytes.Repeat([]byte{0xFF}, 96),
}
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = verifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
require.ErrorContains(t, "invalid signature format", err)
}

View File

@@ -8,6 +8,7 @@ go_library(
"block.go",
"genesis.go",
"legacy.go",
"log.go",
"metrics.go",
"randao.go",
"ranges.go",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package helpers
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/helpers")

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var (

View File

@@ -21,7 +21,6 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
)
var (

View File

@@ -3,6 +3,8 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"electra.go",
"errors.go",
"log.go",
"skip_slot_cache.go",
"state.go",
@@ -62,6 +64,8 @@ go_test(
"altair_transition_no_verify_sig_test.go",
"bellatrix_transition_no_verify_sig_test.go",
"benchmarks_test.go",
"electra_test.go",
"exports_test.go",
"skip_slot_cache_test.go",
"state_fuzz_test.go",
"state_test.go",

View File

@@ -1,9 +1,10 @@
package electra
package transition
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
@@ -47,7 +48,7 @@ var (
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)
func ProcessOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
func electraOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
// 6110 validations are in VerifyOperationLengths
@@ -63,59 +64,60 @@ func ProcessOperations(ctx context.Context, st state.BeaconState, block interfac
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
st, err = blocks.ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
return nil, errors.Wrap(ErrProcessProposerSlashingsFailed, err.Error())
}
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
st, err = blocks.ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
return nil, errors.Wrap(ErrProcessAttesterSlashingsFailed, err.Error())
}
st, err = ProcessAttestationsNoVerifySignature(ctx, st, block)
st, err = electra.ProcessAttestationsNoVerifySignature(ctx, st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
return nil, errors.Wrap(ErrProcessAttestationsFailed, err.Error())
}
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
if _, err := electra.ProcessDeposits(ctx, st, bb.Deposits()); err != nil {
return nil, errors.Wrap(ErrProcessDepositsFailed, err.Error())
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
st, err = blocks.ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
return nil, errors.Wrap(ErrProcessVoluntaryExitsFailed, err.Error())
}
st, err = ProcessBLSToExecutionChanges(st, block)
st, err = blocks.ProcessBLSToExecutionChanges(st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process bls-to-execution changes")
return nil, errors.Wrap(ErrProcessBLSChangesFailed, err.Error())
}
// new in electra
requests, err := bb.ExecutionRequests()
if err != nil {
return nil, errors.Wrap(err, "could not get execution requests")
return nil, electra.NewExecReqError(errors.Wrap(err, "could not get execution requests").Error())
}
for _, d := range requests.Deposits {
if d == nil {
return nil, errors.New("nil deposit request")
return nil, electra.NewExecReqError("nil deposit request")
}
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process deposit requests")}
return nil, electra.NewExecReqError(errors.Wrap(err, "could not process deposit requests").Error())
}
for _, w := range requests.Withdrawals {
if w == nil {
return nil, errors.New("nil withdrawal request")
return nil, electra.NewExecReqError("nil withdrawal request")
}
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process withdrawal requests")}
return nil, electra.NewExecReqError(errors.Wrap(err, "could not process withdrawal requests").Error())
}
for _, c := range requests.Consolidations {
if c == nil {
return nil, errors.New("nil consolidation request")
return nil, electra.NewExecReqError("nil consolidation request")
}
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, execReqErr{errors.Wrap(err, "could not process consolidation requests")}
if err := electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, electra.NewExecReqError(errors.Wrap(err, "could not process consolidation requests").Error())
}
return st, nil
}

View File

@@ -0,0 +1,216 @@
package transition_test
import (
"context"
"errors"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func TestProcessOperationsWithNilRequests(t *testing.T) {
tests := []struct {
name string
modifyBlk func(blockElectra *ethpb.SignedBeaconBlockElectra)
errMsg string
}{
{
name: "Nil deposit request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Deposits = []*enginev1.DepositRequest{nil}
},
errMsg: "nil deposit request",
},
{
name: "Nil withdrawal request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Withdrawals = []*enginev1.WithdrawalRequest{nil}
},
errMsg: "nil withdrawal request",
},
{
name: "Nil consolidation request",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
blk.Block.Body.ExecutionRequests.Consolidations = []*enginev1.ConsolidationRequest{nil}
},
errMsg: "nil consolidation request",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
st, ks := util.DeterministicGenesisStateElectra(t, 128)
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tc.modifyBlk(blk)
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
_, err = transition.ElectraOperations(t.Context(), st, b.Block())
require.ErrorContains(t, tc.errMsg, err)
})
}
}
func TestElectraOperations_ProcessingErrors(t *testing.T) {
tests := []struct {
name string
modifyBlk func(blk *ethpb.SignedBeaconBlockElectra)
errCheck func(t *testing.T, err error)
}{
{
name: "ErrProcessProposerSlashingsFailed",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
// Create invalid proposer slashing with out-of-bounds proposer index
blk.Block.Body.ProposerSlashings = []*ethpb.ProposerSlashing{
{
Header_1: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 999999, // Invalid index (out of bounds)
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
},
Header_2: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 999999,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
},
},
}
},
errCheck: func(t *testing.T, err error) {
require.ErrorContains(t, "process proposer slashings failed", err)
require.Equal(t, true, errors.Is(err, transition.ErrProcessProposerSlashingsFailed))
},
},
{
name: "ErrProcessAttestationsFailed",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
// Create attestation with invalid committee index
blk.Block.Body.Attestations = []*ethpb.AttestationElectra{
{
AggregationBits: []byte{0b00000001},
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 999999, // Invalid committee index
BeaconBlockRoot: make([]byte, 32),
Source: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
},
Target: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
},
},
CommitteeBits: []byte{0b00000001},
Signature: make([]byte, 96),
},
}
},
errCheck: func(t *testing.T, err error) {
require.ErrorContains(t, "process attestations failed", err)
require.Equal(t, true, errors.Is(err, transition.ErrProcessAttestationsFailed))
},
},
{
name: "ErrProcessDepositsFailed",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
// Create deposit with invalid proof length
blk.Block.Body.Deposits = []*ethpb.Deposit{
{
Proof: [][]byte{}, // Invalid: empty proof
Data: &ethpb.Deposit_Data{
PublicKey: make([]byte, 48),
WithdrawalCredentials: make([]byte, 32),
Amount: 32000000000, // 32 ETH in Gwei
Signature: make([]byte, 96),
},
},
}
},
errCheck: func(t *testing.T, err error) {
require.ErrorContains(t, "process deposits failed", err)
require.Equal(t, true, errors.Is(err, transition.ErrProcessDepositsFailed))
},
},
{
name: "ErrProcessVoluntaryExitsFailed",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
// Create voluntary exit with invalid validator index
blk.Block.Body.VoluntaryExits = []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 0,
ValidatorIndex: 999999, // Invalid index (out of bounds)
},
Signature: make([]byte, 96),
},
}
},
errCheck: func(t *testing.T, err error) {
require.ErrorContains(t, "process voluntary exits failed", err)
require.Equal(t, true, errors.Is(err, transition.ErrProcessVoluntaryExitsFailed))
},
},
{
name: "ErrProcessBLSChangesFailed",
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
// Create BLS to execution change with invalid validator index
blk.Block.Body.BlsToExecutionChanges = []*ethpb.SignedBLSToExecutionChange{
{
Message: &ethpb.BLSToExecutionChange{
ValidatorIndex: 999999, // Invalid index (out of bounds)
FromBlsPubkey: make([]byte, 48),
ToExecutionAddress: make([]byte, 20),
},
Signature: make([]byte, 96),
},
}
},
errCheck: func(t *testing.T, err error) {
require.ErrorContains(t, "process BLS to execution changes failed", err)
require.Equal(t, true, errors.Is(err, transition.ErrProcessBLSChangesFailed))
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
ctx := context.Background()
st, ks := util.DeterministicGenesisStateElectra(t, 128)
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tc.modifyBlk(blk)
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, st.SetSlot(primitives.Slot(1)))
_, err = transition.ElectraOperations(ctx, st, b.Block())
require.NotNil(t, err, "Expected an error but got nil")
tc.errCheck(t, err)
})
}
}

View File

@@ -0,0 +1,19 @@
package transition
import "errors"
var (
ErrAttestationsSignatureInvalid = errors.New("attestations signature invalid")
ErrRandaoSignatureInvalid = errors.New("randao signature invalid")
ErrBLSToExecutionChangesSignatureInvalid = errors.New("BLS to execution changes signature invalid")
ErrProcessWithdrawalsFailed = errors.New("process withdrawals failed")
ErrProcessRandaoFailed = errors.New("process randao failed")
ErrProcessEth1DataFailed = errors.New("process eth1 data failed")
ErrProcessProposerSlashingsFailed = errors.New("process proposer slashings failed")
ErrProcessAttesterSlashingsFailed = errors.New("process attester slashings failed")
ErrProcessAttestationsFailed = errors.New("process attestations failed")
ErrProcessDepositsFailed = errors.New("process deposits failed")
ErrProcessVoluntaryExitsFailed = errors.New("process voluntary exits failed")
ErrProcessBLSChangesFailed = errors.New("process BLS to execution changes failed")
ErrProcessSyncAggregateFailed = errors.New("process sync aggregate failed")
)

View File

@@ -0,0 +1,3 @@
package transition
var ElectraOperations = electraOperations

View File

@@ -1,10 +1,9 @@
// Package interop contains useful utilities for persisting
// ssz-encoded states and blocks to disk during each state
// transition for development purposes.
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package interop
import (
"github.com/sirupsen/logrus"
)
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "interop")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/transition/interop")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package transition
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "state")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/transition")

View File

@@ -7,12 +7,11 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/altair"
b "github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition/interop"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
v "github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
@@ -70,10 +69,11 @@ func ExecuteStateTransitionNoVerifyAnySig(
}
// Execute per block transition.
set, st, err := ProcessBlockNoVerifyAnySig(ctx, st, signed)
sigSlice, st, err := ProcessBlockNoVerifyAnySig(ctx, st, signed)
if err != nil {
return nil, nil, errors.Wrap(err, "could not process block")
}
set := sigSlice.Batch()
// State root validation.
postStateRoot, err := st.HashTreeRoot(ctx)
@@ -113,7 +113,7 @@ func ExecuteStateTransitionNoVerifyAnySig(
// assert block.state_root == hash_tree_root(state)
func CalculateStateRoot(
ctx context.Context,
state state.BeaconState,
rollback state.BeaconState,
signed interfaces.ReadOnlySignedBeaconBlock,
) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "core.state.CalculateStateRoot")
@@ -122,7 +122,7 @@ func CalculateStateRoot(
tracing.AnnotateError(span, ctx.Err())
return [32]byte{}, ctx.Err()
}
if state == nil || state.IsNil() {
if rollback == nil || rollback.IsNil() {
return [32]byte{}, errors.New("nil state")
}
if signed == nil || signed.IsNil() || signed.Block().IsNil() {
@@ -130,7 +130,7 @@ func CalculateStateRoot(
}
// Copy state to avoid mutating the state reference.
state = state.Copy()
state := rollback.Copy()
// Execute per slots transition.
var err error
@@ -141,12 +141,101 @@ func CalculateStateRoot(
}
// Execute per block transition.
state, err = ProcessBlockForStateRoot(ctx, state, signed)
if features.Get().EnableProposerPreprocessing {
state, err = processBlockForProposing(ctx, state, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block for proposing")
}
} else {
state, err = ProcessBlockForStateRoot(ctx, state, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block")
}
}
return state.HashTreeRoot(ctx)
}
// processBlockForProposing processes the block and verifies the signatures within it. The block proposer signature is not verified, as the block is not yet signed.
func processBlockForProposing(ctx context.Context, st state.BeaconState, signed interfaces.ReadOnlySignedBeaconBlock) (state.BeaconState, error) {
var err error
var set BlockSignatureBatches
set, st, err = ProcessBlockNoVerifyAnySig(ctx, st, signed)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not process block")
return nil, err
}
// We first try to verify all signatures batched optimistically. We ignore the block proposer signature.
sigSet := set.Batch()
valid, err := sigSet.Verify()
if err != nil || valid {
return st, err
}
// Some signature failed to verify.
// Verify attestation signatures
attSigs := set.AttestationSignatures
if attSigs == nil {
return nil, ErrAttestationsSignatureInvalid
}
valid, err = attSigs.Verify()
if err != nil {
return nil, err
}
if !valid {
return nil, ErrAttestationsSignatureInvalid
}
return state.HashTreeRoot(ctx)
// Verify Randao signature
randaoSigs := set.RandaoSignatures
if randaoSigs == nil {
return nil, ErrRandaoSignatureInvalid
}
valid, err = randaoSigs.Verify()
if err != nil {
return nil, err
}
if !valid {
return nil, ErrRandaoSignatureInvalid
}
if signed.Block().Version() < version.Capella {
// This should not happen, as one of the signature checks above must already have failed.
return st, nil
}
// Verify BLS to execution changes signatures
blsChangeSigs := set.BLSChangeSignatures
if blsChangeSigs == nil {
return nil, ErrBLSToExecutionChangesSignatureInvalid
}
valid, err = blsChangeSigs.Verify()
if err != nil {
return nil, err
}
if !valid {
return nil, ErrBLSToExecutionChangesSignatureInvalid
}
// We should not reach this point: if the combined batch failed, at least one of the checks above should have failed too.
return st, nil
}
// BlockSignatureBatches holds the signature batches for different parts of a beacon block.
type BlockSignatureBatches struct {
RandaoSignatures *bls.SignatureBatch
AttestationSignatures *bls.SignatureBatch
BLSChangeSignatures *bls.SignatureBatch
}
// Batch merges all signature batches held by the BlockSignatureBatches into a single batch.
func (b BlockSignatureBatches) Batch() *bls.SignatureBatch {
sigs := bls.NewSet()
if b.RandaoSignatures != nil {
sigs.Join(b.RandaoSignatures)
}
if b.AttestationSignatures != nil {
sigs.Join(b.AttestationSignatures)
}
if b.BLSChangeSignatures != nil {
sigs.Join(b.BLSChangeSignatures)
}
return sigs
}
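// Illustrative sketch, not part of this change: how a caller might use BlockSignatureBatches,
// verifying the merged batch first and falling back to the per-category batches only when it
// fails, mirroring processBlockForProposing above. The verifyBlockSignatures name is
// hypothetical; the sketch assumes this package's existing imports (bls, errors).
func verifyBlockSignatures(set BlockSignatureBatches) error {
	valid, err := set.Batch().Verify()
	if err != nil || valid {
		return err
	}
	// The merged batch failed to verify; narrow down which category is at fault.
	if set.RandaoSignatures != nil {
		if ok, err := set.RandaoSignatures.Verify(); err != nil || !ok {
			return ErrRandaoSignatureInvalid
		}
	}
	if set.AttestationSignatures != nil {
		if ok, err := set.AttestationSignatures.Verify(); err != nil || !ok {
			return ErrAttestationsSignatureInvalid
		}
	}
	if set.BLSChangeSignatures != nil {
		if ok, err := set.BLSChangeSignatures.Verify(); err != nil || !ok {
			return ErrBLSToExecutionChangesSignatureInvalid
		}
	}
	return errors.New("combined batch failed but every category verified")
}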
// ProcessBlockNoVerifyAnySig creates a new, modified beacon state by applying block operation
@@ -165,48 +254,48 @@ func ProcessBlockNoVerifyAnySig(
ctx context.Context,
st state.BeaconState,
signed interfaces.ReadOnlySignedBeaconBlock,
) (*bls.SignatureBatch, state.BeaconState, error) {
) (BlockSignatureBatches, state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "core.state.ProcessBlockNoVerifyAnySig")
defer span.End()
set := BlockSignatureBatches{}
if err := blocks.BeaconBlockIsNil(signed); err != nil {
return nil, nil, err
return set, nil, err
}
if st.Version() != signed.Block().Version() {
return nil, nil, fmt.Errorf("state and block are different version. %d != %d", st.Version(), signed.Block().Version())
return set, nil, fmt.Errorf("state and block are different version. %d != %d", st.Version(), signed.Block().Version())
}
blk := signed.Block()
st, err := ProcessBlockForStateRoot(ctx, st, signed)
if err != nil {
return nil, nil, err
return set, nil, err
}
randaoReveal := signed.Block().Body().RandaoReveal()
rSet, err := b.RandaoSignatureBatch(ctx, st, randaoReveal[:])
if err != nil {
tracing.AnnotateError(span, err)
return nil, nil, errors.Wrap(err, "could not retrieve randao signature set")
return set, nil, errors.Wrap(err, "could not retrieve randao signature set")
}
set.RandaoSignatures = rSet
aSet, err := b.AttestationSignatureBatch(ctx, st, signed.Block().Body().Attestations())
if err != nil {
return nil, nil, errors.Wrap(err, "could not retrieve attestation signature set")
return set, nil, errors.Wrap(err, "could not retrieve attestation signature set")
}
set.AttestationSignatures = aSet
// Merge beacon block, randao and attestations signatures into a set.
set := bls.NewSet()
set.Join(rSet).Join(aSet)
if blk.Version() >= version.Capella {
changes, err := signed.Block().Body().BLSToExecutionChanges()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get BLSToExecutionChanges")
return set, nil, errors.Wrap(err, "could not get BLSToExecutionChanges")
}
cSet, err := b.BLSChangesSignatureBatch(st, changes)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get BLSToExecutionChanges signatures")
return set, nil, errors.Wrap(err, "could not get BLSToExecutionChanges signatures")
}
set.Join(cSet)
set.BLSChangeSignatures = cSet
}
return set, st, nil
}
@@ -268,7 +357,7 @@ func ProcessOperationsNoVerifyAttsSigs(
return nil, err
}
} else {
state, err = electra.ProcessOperations(ctx, state, beaconBlock)
state, err = electraOperations(ctx, state, beaconBlock)
if err != nil {
return nil, err
}
@@ -326,7 +415,7 @@ func ProcessBlockForStateRoot(
if state.Version() >= version.Capella {
state, err = b.ProcessWithdrawals(state, executionData)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
return nil, errors.Wrap(ErrProcessWithdrawalsFailed, err.Error())
}
}
if err = b.ProcessPayload(state, blk.Body()); err != nil {
@@ -338,13 +427,13 @@ func ProcessBlockForStateRoot(
state, err = b.ProcessRandaoNoVerify(state, randaoReveal[:])
if err != nil {
tracing.AnnotateError(span, err)
return nil, errors.Wrap(err, "could not verify and process randao")
return nil, errors.Wrap(ErrProcessRandaoFailed, err.Error())
}
state, err = b.ProcessEth1DataInBlock(ctx, state, signed.Block().Body().Eth1Data())
if err != nil {
tracing.AnnotateError(span, err)
return nil, errors.Wrap(err, "could not process eth1 data")
return nil, errors.Wrap(ErrProcessEth1DataFailed, err.Error())
}
state, err = ProcessOperationsNoVerifyAttsSigs(ctx, state, signed.Block())
@@ -363,7 +452,7 @@ func ProcessBlockForStateRoot(
}
state, _, err = altair.ProcessSyncAggregate(ctx, state, sa)
if err != nil {
return nil, errors.Wrap(err, "process_sync_aggregate failed")
return nil, errors.Wrap(ErrProcessSyncAggregateFailed, err.Error())
}
return state, nil
@@ -379,31 +468,35 @@ func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock int
exitInfo := &validators.ExitInfo{}
if hasSlashings || hasExits {
// ExitInformation is expensive to compute; only do it if we need it.
exitInfo = v.ExitInformation(st)
exitInfo = validators.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
return nil, errors.Wrap(ErrProcessProposerSlashingsFailed, err.Error())
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
return nil, errors.Wrap(ErrProcessAttesterSlashingsFailed, err.Error())
}
st, err = altair.ProcessAttestationsNoVerifySignature(ctx, st, beaconBlock)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
return nil, errors.Wrap(ErrProcessAttestationsFailed, err.Error())
}
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process altair deposit")
return nil, errors.Wrap(ErrProcessDepositsFailed, err.Error())
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
return nil, errors.Wrap(ErrProcessVoluntaryExitsFailed, err.Error())
}
return b.ProcessBLSToExecutionChanges(st, beaconBlock)
st, err = b.ProcessBLSToExecutionChanges(st, beaconBlock)
if err != nil {
return nil, errors.Wrap(ErrProcessBLSChangesFailed, err.Error())
}
return st, nil
}
// This calls phase 0 block operations.
@@ -411,32 +504,32 @@ func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock int
var err error
hasSlashings := len(beaconBlock.Body().ProposerSlashings()) > 0 || len(beaconBlock.Body().AttesterSlashings()) > 0
hasExits := len(beaconBlock.Body().VoluntaryExits()) > 0
var exitInfo *v.ExitInfo
var exitInfo *validators.ExitInfo
if hasSlashings || hasExits {
// ExitInformation is expensive to compute; only do it if we need it.
exitInfo = v.ExitInformation(st)
exitInfo = validators.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block proposer slashings")
return nil, errors.Wrap(ErrProcessProposerSlashingsFailed, err.Error())
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block attester slashings")
return nil, errors.Wrap(ErrProcessAttesterSlashingsFailed, err.Error())
}
st, err = b.ProcessAttestationsNoVerifySignature(ctx, st, beaconBlock)
if err != nil {
return nil, errors.Wrap(err, "could not process block attestations")
return nil, errors.Wrap(ErrProcessAttestationsFailed, err.Error())
}
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
return nil, errors.Wrap(ErrProcessDepositsFailed, err.Error())
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
return nil, errors.Wrap(ErrProcessVoluntaryExitsFailed, err.Error())
}
return st, nil
}
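Illustrative sketch, not part of this change: the wrappers above replace free-form messages with sentinel errors (ErrProcessDepositsFailed and friends), so a caller can branch on which stage of block processing failed. Below is a minimal standalone version of that check, using the standard library's errors and fmt packages and a locally defined stand-in sentinel:

package main

import (
	"errors"
	"fmt"
)

// Stand-in for the sentinel the diff wraps, e.g. ErrProcessDepositsFailed.
var errProcessDepositsFailed = errors.New("could not process deposits")

func processDeposits() error {
	// Pretend the underlying routine failed with some low-level cause.
	cause := errors.New("merkle proof mismatch")
	// Wrap the sentinel with the cause's message, the same shape as
	// errors.Wrap(ErrProcessDepositsFailed, err.Error()) in the diff above.
	return fmt.Errorf("%s: %w", cause, errProcessDepositsFailed)
}

func main() {
	// errors.Is matches the sentinel regardless of the wrapped message.
	if err := processDeposits(); errors.Is(err, errProcessDepositsFailed) {
		fmt.Println("deposit processing failed:", err)
	}
}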

View File

@@ -132,7 +132,8 @@ func TestProcessBlockNoVerify_PassesProcessingConditions(t *testing.T) {
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
require.NoError(t, err)
// Test Signature set verifies.
verified, err := set.Verify()
sigSet := set.Batch()
verified, err := sigSet.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Could not verify signature set.")
}
@@ -145,7 +146,8 @@ func TestProcessBlockNoVerifyAnySigAltair_OK(t *testing.T) {
require.NoError(t, err)
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
require.NoError(t, err)
verified, err := set.Verify()
sigSet := set.Batch()
verified, err := sigSet.Verify()
require.NoError(t, err)
require.Equal(t, true, verified, "Could not verify signature set")
}
@@ -154,8 +156,9 @@ func TestProcessBlockNoVerify_SigSetContainsDescriptions(t *testing.T) {
beaconState, block, _, _, _ := createFullBlockWithOperations(t)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
signatures, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
require.NoError(t, err)
set := signatures.Batch()
assert.Equal(t, len(set.Signatures), len(set.Descriptions), "Signatures and descriptions do not match up")
assert.Equal(t, "randao signature", set.Descriptions[0])
assert.Equal(t, "attestation signature", set.Descriptions[1])

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package das
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "das")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/das")
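Illustrative sketch, not part of the generated file: the generated loggers now attach a package field instead of a prefix. A minimal standalone program showing how such a field renders with logrus (the message and slot value are made up for the example):

package main

import "github.com/sirupsen/logrus"

// Mirrors the generated pattern: a package-scoped entry that stamps every log line.
var log = logrus.WithField("package", "beacon-chain/das")

func main() {
	// With logrus' default text formatter this prints roughly:
	// time="..." level=info msg="Pruned blob sidecars" package=beacon-chain/das slot=42
	log.WithField("slot", 42).Info("Pruned blob sidecars")
}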

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package filesystem
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "filesystem")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/filesystem")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package kv
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "db")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/kv")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package db
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "db")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db")

View File

@@ -2,7 +2,10 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["pruner.go"],
srcs = [
"log.go",
"pruner.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/db/pruner",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package pruner
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/pruner")

View File

@@ -13,8 +13,6 @@ import (
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "db-pruner")
const (
// defaultPrunableBatchSize is the number of slots that can be pruned at once.
defaultPrunableBatchSize = 32

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package slasherkv
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "slasherdb")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/slasherkv")

View File

@@ -40,6 +40,7 @@ go_library(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -538,6 +539,10 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}
if flags.Get().DisableGetBlobsV2 {
return []*pb.BlobAndProofV2{}, nil
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package execution
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "execution")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/execution")

View File

@@ -7,6 +7,7 @@ go_library(
"errors.go",
"forkchoice.go",
"last_root.go",
"log.go",
"metrics.go",
"node.go",
"on_tick.go",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package doublylinkedtree
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/forkchoice/doubly-linked-tree")

View File

@@ -3,12 +3,9 @@ package doublylinkedtree
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
var (
log = logrus.WithField("prefix", "forkchoice-doublylinkedtree")
headSlotNumber = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "doublylinkedtree_head_slot",

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package light_client
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "light-client")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/light-client")

View File

@@ -75,7 +75,6 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))
timeForGoroutinesToFinish := 20 * time.Microsecond
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
@@ -87,8 +86,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -102,7 +102,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -117,8 +117,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
@@ -132,7 +133,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
@@ -146,8 +147,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
@@ -161,7 +163,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -175,7 +177,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}
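Illustrative sketch, not part of this change: the positive broadcast assertions above now poll with require.Eventually instead of sleeping for a fixed duration. A standalone example of that pattern using testify's require package (the goroutine and timings are made up for the example):

package example

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestEventuallyPattern(t *testing.T) {
	var called atomic.Bool
	go func() {
		// Simulate asynchronous work finishing after a short delay.
		time.Sleep(20 * time.Millisecond)
		called.Store(true)
	}()
	// Poll every 10ms for up to 1s instead of sleeping a fixed amount of time.
	require.Eventually(t, func() bool {
		return called.Load()
	}, time.Second, 10*time.Millisecond, "flag should eventually be set by the goroutine")
}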

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"doc.go",
"log.go",
"metrics.go",
"process_attestation.go",
"process_block.go",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package monitor
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/monitor")

View File

@@ -3,11 +3,9 @@ package monitor
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
var (
log = logrus.WithField("prefix", "monitor")
// TODO: The Prometheus gauge vectors and counters in this package deprecate the
// corresponding gauge vectors and counters in the validator client.

View File

@@ -54,8 +54,8 @@ func TestProcessIncludedAttestationTwoTracked(t *testing.T) {
AggregationBits: bitfield.Bitlist{0b11, 0b1},
}
s.processIncludedAttestation(t.Context(), state, att)
wanted1 := "\"Attestation included\" balanceChange=0 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2"
wanted2 := "\"Attestation included\" balanceChange=100000000 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12"
wanted1 := "\"Attestation included\" balanceChange=0 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2"
wanted2 := "\"Attestation included\" balanceChange=100000000 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12"
require.LogsContain(t, hook, wanted1)
require.LogsContain(t, hook, wanted2)
}
@@ -123,8 +123,8 @@ func TestProcessUnaggregatedAttestationStateCached(t *testing.T) {
}
require.NoError(t, s.config.StateGen.SaveState(ctx, root, state))
s.processUnaggregatedAttestation(t.Context(), att)
wanted1 := "\"Processed unaggregated attestation\" head=0x68656c6c6f2d prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2"
wanted2 := "\"Processed unaggregated attestation\" head=0x68656c6c6f2d prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12"
wanted1 := "\"Processed unaggregated attestation\" head=0x68656c6c6f2d package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2"
wanted2 := "\"Processed unaggregated attestation\" head=0x68656c6c6f2d package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12"
require.LogsContain(t, hook, wanted1)
require.LogsContain(t, hook, wanted2)
}
@@ -161,7 +161,7 @@ func TestProcessAggregatedAttestationStateNotCached(t *testing.T) {
},
}
s.processAggregatedAttestation(ctx, att)
require.LogsContain(t, hook, "\"Processed attestation aggregation\" aggregatorIndex=2 beaconBlockRoot=0x000000000000 prefix=monitor slot=1 sourceRoot=0x68656c6c6f2d targetRoot=0x68656c6c6f2d")
require.LogsContain(t, hook, "\"Processed attestation aggregation\" aggregatorIndex=2 beaconBlockRoot=0x000000000000 package=beacon-chain/monitor slot=1 sourceRoot=0x68656c6c6f2d targetRoot=0x68656c6c6f2d")
require.LogsContain(t, hook, "Skipping aggregated attestation due to state not found in cache")
logrus.SetLevel(logrus.InfoLevel)
}
@@ -199,9 +199,9 @@ func TestProcessAggregatedAttestationStateCached(t *testing.T) {
require.NoError(t, s.config.StateGen.SaveState(ctx, root, state))
s.processAggregatedAttestation(ctx, att)
require.LogsContain(t, hook, "\"Processed attestation aggregation\" aggregatorIndex=2 beaconBlockRoot=0x68656c6c6f2d prefix=monitor slot=1 sourceRoot=0x68656c6c6f2d targetRoot=0x68656c6c6f2d")
require.LogsContain(t, hook, "\"Processed aggregated attestation\" head=0x68656c6c6f2d prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2")
require.LogsDoNotContain(t, hook, "\"Processed aggregated attestation\" head=0x68656c6c6f2d prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12")
require.LogsContain(t, hook, "\"Processed attestation aggregation\" aggregatorIndex=2 beaconBlockRoot=0x68656c6c6f2d package=beacon-chain/monitor slot=1 sourceRoot=0x68656c6c6f2d targetRoot=0x68656c6c6f2d")
require.LogsContain(t, hook, "\"Processed aggregated attestation\" head=0x68656c6c6f2d package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2")
require.LogsDoNotContain(t, hook, "\"Processed aggregated attestation\" head=0x68656c6c6f2d package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12")
}
func TestProcessAttestations(t *testing.T) {
@@ -239,8 +239,8 @@ func TestProcessAttestations(t *testing.T) {
wrappedBlock, err := blocks.NewBeaconBlock(block)
require.NoError(t, err)
s.processAttestations(ctx, state, wrappedBlock)
wanted1 := "\"Attestation included\" balanceChange=0 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2"
wanted2 := "\"Attestation included\" balanceChange=100000000 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 prefix=monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12"
wanted1 := "\"Attestation included\" balanceChange=0 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=2"
wanted2 := "\"Attestation included\" balanceChange=100000000 correctHead=true correctSource=true correctTarget=true head=0x68656c6c6f2d inclusionSlot=2 newBalance=32000000000 package=beacon-chain/monitor slot=1 source=0x68656c6c6f2d target=0x68656c6c6f2d validatorIndex=12"
require.LogsContain(t, hook, wanted1)
require.LogsContain(t, hook, wanted2)

View File

@@ -43,7 +43,7 @@ func TestProcessSlashings(t *testing.T) {
},
},
},
wantedErr: "\"Proposer slashing was included\" bodyRoot1= bodyRoot2= prefix=monitor proposerIndex=2",
wantedErr: "\"Proposer slashing was included\" bodyRoot1= bodyRoot2= package=beacon-chain/monitor proposerIndex=2",
},
{
name: "Proposer slashing an untracked index",
@@ -89,7 +89,7 @@ func TestProcessSlashings(t *testing.T) {
},
},
wantedErr: "\"Attester slashing was included\" attestationSlot1=0 attestationSlot2=0 attesterIndex=1 " +
"beaconBlockRoot1=0x000000000000 beaconBlockRoot2=0x000000000000 blockInclusionSlot=0 prefix=monitor sourceEpoch1=1 sourceEpoch2=0 targetEpoch1=0 targetEpoch2=0",
"beaconBlockRoot1=0x000000000000 beaconBlockRoot2=0x000000000000 blockInclusionSlot=0 package=beacon-chain/monitor sourceEpoch1=1 sourceEpoch2=0 targetEpoch1=0 targetEpoch2=0",
},
{
name: "Attester slashing untracked index",
@@ -149,7 +149,7 @@ func TestProcessProposedBlock(t *testing.T) {
StateRoot: bytesutil.PadTo([]byte("state-world"), 32),
Body: &ethpb.BeaconBlockBody{},
},
wantedErr: "\"Proposed beacon block was included\" balanceChange=100000000 blockRoot=0x68656c6c6f2d newBalance=32000000000 parentRoot=0x68656c6c6f2d prefix=monitor proposerIndex=12 slot=6 version=0",
wantedErr: "\"Proposed beacon block was included\" balanceChange=100000000 blockRoot=0x68656c6c6f2d newBalance=32000000000 package=beacon-chain/monitor parentRoot=0x68656c6c6f2d proposerIndex=12 slot=6 version=0",
},
{
name: "Block proposed by untracked validator",
@@ -224,10 +224,10 @@ func TestProcessBlock_AllEventsTrackedVals(t *testing.T) {
root, err := b.GetBlock().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.config.StateGen.SaveState(ctx, root, genesis))
wanted1 := fmt.Sprintf("\"Proposed beacon block was included\" balanceChange=100000000 blockRoot=%#x newBalance=32000000000 parentRoot=0xf732eaeb7fae prefix=monitor proposerIndex=15 slot=1 version=1", bytesutil.Trunc(root[:]))
wanted2 := fmt.Sprintf("\"Proposer slashing was included\" bodyRoot1=0x000100000000 bodyRoot2=0x000200000000 prefix=monitor proposerIndex=%d slashingSlot=0 slot=1", idx)
wanted3 := "\"Sync committee contribution included\" balanceChange=0 contribCount=3 expectedContribCount=3 newBalance=32000000000 prefix=monitor validatorIndex=1"
wanted4 := "\"Sync committee contribution included\" balanceChange=0 contribCount=1 expectedContribCount=1 newBalance=32000000000 prefix=monitor validatorIndex=2"
wanted1 := fmt.Sprintf("\"Proposed beacon block was included\" balanceChange=100000000 blockRoot=%#x newBalance=32000000000 package=beacon-chain/monitor parentRoot=0xf732eaeb7fae proposerIndex=15 slot=1 version=1", bytesutil.Trunc(root[:]))
wanted2 := fmt.Sprintf("\"Proposer slashing was included\" bodyRoot1=0x000100000000 bodyRoot2=0x000200000000 package=beacon-chain/monitor proposerIndex=%d slashingSlot=0 slot=1", idx)
wanted3 := "\"Sync committee contribution included\" balanceChange=0 contribCount=3 expectedContribCount=3 newBalance=32000000000 package=beacon-chain/monitor validatorIndex=1"
wanted4 := "\"Sync committee contribution included\" balanceChange=0 contribCount=1 expectedContribCount=1 newBalance=32000000000 package=beacon-chain/monitor validatorIndex=2"
wrapped, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
s.processBlock(ctx, wrapped)
@@ -279,7 +279,7 @@ func TestLogAggregatedPerformance(t *testing.T) {
s.logAggregatedPerformance()
wanted := "\"Aggregated performance since launch\" attestationInclusion=\"80.00%\"" +
" averageInclusionDistance=1.2 balanceChangePct=\"0.95%\" correctlyVotedHeadPct=\"66.67%\" " +
"correctlyVotedSourcePct=\"91.67%\" correctlyVotedTargetPct=\"100.00%\" prefix=monitor startBalance=31700000000 " +
"correctlyVotedSourcePct=\"91.67%\" correctlyVotedTargetPct=\"100.00%\" package=beacon-chain/monitor startBalance=31700000000 " +
"startEpoch=0 totalAggregations=0 totalProposedBlocks=1 totalRequested=15 totalSyncContributions=0 " +
"validatorIndex=1"
require.LogsContain(t, hook, wanted)

View File

@@ -43,7 +43,7 @@ func TestProcessExitsFromBlockTrackedIndices(t *testing.T) {
wb, err := blocks.NewBeaconBlock(block)
require.NoError(t, err)
s.processExitsFromBlock(wb)
require.LogsContain(t, hook, "\"Voluntary exit was included\" prefix=monitor slot=0 validatorIndex=2")
require.LogsContain(t, hook, "\"Voluntary exit was included\" package=beacon-chain/monitor slot=0 validatorIndex=2")
}
func TestProcessExitsFromBlockUntrackedIndices(t *testing.T) {
@@ -99,7 +99,7 @@ func TestProcessExitP2PTrackedIndices(t *testing.T) {
Signature: make([]byte, 96),
}
s.processExit(exit)
require.LogsContain(t, hook, "\"Voluntary exit was processed\" prefix=monitor validatorIndex=1")
require.LogsContain(t, hook, "\"Voluntary exit was processed\" package=beacon-chain/monitor validatorIndex=1")
}
func TestProcessExitP2PUntrackedIndices(t *testing.T) {

View File

@@ -22,7 +22,7 @@ func TestProcessSyncCommitteeContribution(t *testing.T) {
}
s.processSyncCommitteeContribution(contrib)
require.LogsContain(t, hook, "\"Sync committee aggregation processed\" prefix=monitor validatorIndex=1")
require.LogsContain(t, hook, "\"Sync committee aggregation processed\" package=beacon-chain/monitor validatorIndex=1")
require.LogsDoNotContain(t, hook, "validatorIndex=2")
}
@@ -53,7 +53,7 @@ func TestProcessSyncAggregate(t *testing.T) {
require.NoError(t, err)
s.processSyncAggregate(beaconState, wrappedBlock)
require.LogsContain(t, hook, "\"Sync committee contribution included\" balanceChange=0 contribCount=1 expectedContribCount=4 newBalance=32000000000 prefix=monitor validatorIndex=1")
require.LogsContain(t, hook, "\"Sync committee contribution included\" balanceChange=100000000 contribCount=2 expectedContribCount=2 newBalance=32000000000 prefix=monitor validatorIndex=12")
require.LogsContain(t, hook, "\"Sync committee contribution included\" balanceChange=0 contribCount=1 expectedContribCount=4 newBalance=32000000000 package=beacon-chain/monitor validatorIndex=1")
require.LogsContain(t, hook, "\"Sync committee contribution included\" balanceChange=100000000 contribCount=2 expectedContribCount=2 newBalance=32000000000 package=beacon-chain/monitor validatorIndex=12")
require.LogsDoNotContain(t, hook, "validatorIndex=2")
}

View File

@@ -148,7 +148,7 @@ func TestStart(t *testing.T) {
// wait for Logrus
time.Sleep(1000 * time.Millisecond)
require.LogsContain(t, hook, "Synced to head epoch, starting reporting performance")
require.LogsContain(t, hook, "\"Starting service\" prefix=monitor validatorIndices=\"[1 2 12 15]\"")
require.LogsContain(t, hook, "\"Starting service\" package=beacon-chain/monitor validatorIndices=\"[1 2 12 15]\"")
s.Lock()
require.Equal(t, s.isLogging, true, "monitor is not running")
s.Unlock()
@@ -235,7 +235,7 @@ func TestMonitorRoutine(t *testing.T) {
// Wait for Logrus
time.Sleep(1000 * time.Millisecond)
wanted1 := fmt.Sprintf("\"Proposed beacon block was included\" balanceChange=100000000 blockRoot=%#x newBalance=32000000000 parentRoot=0xf732eaeb7fae prefix=monitor proposerIndex=15 slot=1 version=1", bytesutil.Trunc(root[:]))
wanted1 := fmt.Sprintf("\"Proposed beacon block was included\" balanceChange=100000000 blockRoot=%#x newBalance=32000000000 package=beacon-chain/monitor parentRoot=0xf732eaeb7fae proposerIndex=15 slot=1 version=1", bytesutil.Trunc(root[:]))
require.LogsContain(t, hook, wanted1)
}

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package node
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "node")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/node")

View File

@@ -1 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package registration
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/node/registration")

Some files were not shown because too many files have changed in this diff.