Compare commits

...

40 Commits

Author SHA1 Message Date
terence tsao
64c240d75c gloas: add builders registry + state fields 2026-01-05 14:52:07 -08:00
Bastin
6fa0e9cf5f Logrus hooks for terminal vs log-file output (#16102)
## Review after #16059 

**What type of PR is this?**
Feature

**What does this PR do?**
This PR introduces logrus writer hooks into Prysm's logging.
When `log-format` is `text`:
- set the default logrus output to be `io.Discard`
- create a writer hook for terminal, with formatting and coloring
enabled.
- create a separate writer hook for log-file (if enabled), without
coloring.

This immediately allows for having formatted/colored terminal logs,
while keeping the log-file clean.
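
For illustration, a minimal sketch of the hook approach described above; the `writerHook` type, the `node.log` file name, and the level selection are illustrative assumptions, not the actual Prysm implementation:
```
package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// writerHook formats every entry with its own formatter and writes it to a
// dedicated writer (terminal or log file).
type writerHook struct {
	writer    io.Writer
	formatter logrus.Formatter
	levels    []logrus.Level
}

func (h *writerHook) Levels() []logrus.Level { return h.levels }

func (h *writerHook) Fire(entry *logrus.Entry) error {
	line, err := h.formatter.Format(entry)
	if err != nil {
		return err
	}
	_, err = h.writer.Write(line)
	return err
}

func main() {
	// Discard the default output; the hooks now own all writing.
	logrus.SetOutput(io.Discard)

	// Terminal hook: colored text formatter.
	logrus.AddHook(&writerHook{
		writer:    os.Stderr,
		formatter: &logrus.TextFormatter{ForceColors: true},
		levels:    logrus.AllLevels,
	})

	// Optional log-file hook: same text format, colors disabled.
	if f, err := os.OpenFile("node.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644); err == nil {
		logrus.AddHook(&writerHook{
			writer:    f,
			formatter: &logrus.TextFormatter{DisableColors: true},
			levels:    logrus.AllLevels,
		})
	}

	logrus.Info("colored on the terminal, plain in the file")
}
```
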
2026-01-05 15:20:12 +00:00
Bastin
6b5ba5ad01 Switch logging from using prefixes to the new package path format (#16059)
#### This PR sets the foundation for the new logging features.

---

The goal of this big PR is the following:
1. Adding a log.go file to every package:
[_commit_](54f6396d4c)
- Writing a bash script that adds the log.go file to every package that
imports logrus, except the excluded packages, configured at the top of
the bash script.
- the log.go file creates a log variable and sets a field called
`package` to the full path of that package (a minimal sketch is shown
below this list).
- I have tried to fix every error/problem that came from mass generation
of this file. (duplicate declarations, different prefix names, etc...)
- some packages had the log.go file from before, and had some helper
functions in there as well. I've moved all of them to a `log_helpers.go`
file within each package.

2. Create a CI rule which verifies that:
[_commit_](b799c3a0ef)
- every package which imports logrus, also has a log.go file, except the
excluded packages.
- the `package` field of each log.go variable has the correct path (to
detect when we move a package or change its name).
- I pushed a commit with a manually changed log.go file to trigger the
ci check failure and it worked.

3. Alter the logging system to read the prefix from this `package` field
for every log while outputting:
[_commit_](b0c7f1146c)
- some packages have/want/need a different log prefix than their package
name (like `kv`). This can be solved by keeping a map of package paths
to prefix names somewhere.
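
For illustration, a minimal sketch of what such a generated log.go could look like; the package (`kv`), the path value, and the `prefixReplacement` entry mentioned in the comment are just examples, not the actual generated file:
```
// log.go, one per package (illustrative sketch; the real files are
// produced by hack/gen-logs.sh).
package kv

import "github.com/sirupsen/logrus"

// log carries a "package" field holding the package's full path. The logging
// layer can later map this path to a display prefix, for example via a
// prefixReplacement map entry such as {"beacon-chain/db/kv": "db"}.
var log = logrus.WithField("package", "beacon-chain/db/kv")
```
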
    
    
---

**Some notes:**
- Please review everything carefully.
- I created the `prefixReplacement` map and populated the data that I
deemed necessary. Please check it and complain if something doesn't make
sense or is missing. I attached at the bottom, the list of all the
packages that used to use a different name than their package name as
their prefix.
- I have chosen to mark some packages to be excluded from this whole
process. They will either not log anything, or log without a prefix, or
log using their previously defined prefix. See the list of exclusions in
the bottom.
- I fixed all the tests that failed because of this change. These were
failing because they were expecting the old prefix to be in the
generated logs. I have changed those to expect the new `package` field
instead. This might not be a great solution. Ideally we might want to
remove this from the tests so they only test for relevant fields in the
logs. but this is a problem for another day.
- Please run the node with this config, and mention if you see something
weird in the logs. (use different verbosities)
- The CI workflow uses a script, `hack/check-logs.sh`, that basically runs
`hack/gen-logs.sh` and checks that the resulting git diff is empty. This
means that if one runs this script locally, it will not actually _check_
anything, but rather just regenerate the log.go files and fix any
mistakes. This might be confusing. Please suggest solutions if you think
it's a problem.

---

**A list of packages that used a different prefix than their package
names for their logs:**

- beacon-chain/cache/depositsnapshot/ — package depositsnapshot, prefix "cache"
- beacon-chain/core/transition/log.go — package transition, prefix "state"
- beacon-chain/db/kv/log.go — package kv, prefix "db"
- beacon-chain/db/slasherkv/log.go — package slasherkv, prefix "slasherdb"
- beacon-chain/db/pruner/pruner.go — package pruner, prefix "db-pruner"
- beacon-chain/light-client/log.go — package light_client, prefix "light-client"
- beacon-chain/operations/attestations/log.go — package attestations, prefix "pool/attestations"
- beacon-chain/operations/slashings/log.go — package slashings, prefix "pool/slashings"
- beacon-chain/rpc/core/log.go — package core, prefix "rpc/core"
- beacon-chain/rpc/eth/beacon/log.go — package beacon, prefix "rpc/beaconv1"
- beacon-chain/rpc/eth/validator/log.go — package validator, prefix "beacon-api"
- beacon-chain/rpc/prysm/v1alpha1/beacon/log.go — package beacon, prefix "rpc"
- beacon-chain/rpc/prysm/v1alpha1/validator/log.go — package validator, prefix "rpc/validator"
- beacon-chain/state/stategen/log.go — package stategen, prefix "state-gen"
- beacon-chain/sync/checkpoint/log.go — package checkpoint, prefix "checkpoint-sync"
- beacon-chain/sync/initial-sync/log.go — package initialsync, prefix "initial-sync"
- cmd/prysmctl/p2p/log.go — package p2p, prefix "prysmctl-p2p"
- config/features/log.go — package features, prefix "flags"
- io/file/log.go — package file, prefix "fileutil"
- proto/prysm/v1alpha1/log.go — package eth, prefix "protobuf"
- validator/client/beacon-api/log.go — package beacon_api, prefix "beacon-api"
- validator/db/kv/log.go — package kv, prefix "db"
- validator/db/filesystem/db.go — package filesystem, prefix "db"
- validator/keymanager/derived/log.go — package derived, prefix "derived-keymanager"
- validator/keymanager/local/log.go — package local, prefix "local-keymanager"
- validator/keymanager/remote-web3signer/log.go — package remote_web3signer, prefix "remote-keymanager"
- validator/keymanager/remote-web3signer/internal/log.go — package internal, prefix "remote-web3signer-internal"
- beacon-chain/forkchoice/doubly... — prefix is "forkchoice-doublylinkedtree"

**List of excluded directories (their subdirectories are also
excluded):**
  ```
  EXCLUDED_PATH_PREFIXES=(
      "testing"
      "validator/client/testutil"
      "beacon-chain/p2p/testing"
      "beacon-chain/rpc/eth/config"
      "beacon-chain/rpc/prysm/v1alpha1/debug"
      "tools"
      "runtime"
      "monitoring"
      "io"
      "cmd"
      ".well-known"
      "changelog"
      "hack"
      "specrefs"
      "third_party"
      "bazel-out"
      "bazel-bin"
      "bazel-prysm"
      "bazel-testlogs"
      "build"
      ".github"
      ".jj"
      ".idea"
      ".vscode"
)
```
2026-01-05 14:15:20 +00:00
Manu NALEPA
0db74365e0 Summarize "Accepted data column sidecars summary" log. (#16210)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**

**Before:**
```
[2026-01-02 13:29:50.13] DEBUG sync: Accepted data column sidecars summary columnIndices=[0 1 6 7 8 9 10 11 12 13 14 15 16 18 23 28 29 31 32 35 37 38 39 40 41 42 43 45 47 48 49 50 51 52 55 58 59 60 62 65 66 68 70 73 74 75 76 78 79 81 83 84 88 89 90 93 94 95 96 98 99 103 105 106 107 108 109 110 111 113 114 115 117 118 119 121 122] gossipScores=[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] peers=[rjzcRC oxj6o4 HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE HCT2LE aZAzfp HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE aZAzfp oxj6o4 oxj6o4 YdJQCg oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE HCT2LE 5jMhEK HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE rjzcRC oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE HCT2LE HCT2LE oxj6o4] receivedCount=77 sinceStartTimes=[869.00ms 845.00ms 797.00ms 795.00ms 805.00ms 906.00ms 844.00ms 849.00ms 843.00ms 844.00ms 821.00ms 796.00ms 794.00ms 796.00ms 838.00ms 842.00ms 843.00ms 848.00ms 795.00ms 820.00ms 797.00ms 830.00ms 801.00ms 794.00ms 925.00ms 924.00ms 935.00ms 843.00ms 802.00ms 796.00ms 802.00ms 798.00ms 794.00ms 796.00ms 796.00ms 843.00ms 802.00ms 830.00ms 826.00ms 796.00ms 819.00ms 801.00ms 852.00ms 877.00ms 876.00ms 843.00ms 843.00ms 844.00ms 1138.00ms 843.00ms 886.00ms 805.00ms 794.00ms 844.00ms 909.00ms 845.00ms 889.00ms 798.00ms 792.00ms 843.00ms 878.00ms 802.00ms 798.00ms 849.00ms 826.00ms 815.00ms 844.00ms 797.00ms 795.00ms 798.00ms 843.00ms 844.00ms 845.00ms 845.00ms 867.00ms 805.00ms 800.00ms] slot=2095599 validationTimes=[399.00ms 423.00ms 470.00ms 472.00ms 463.00ms 362.00ms 423.00ms 419.00ms 425.00ms 423.00ms 446.00ms 471.00ms 473.00ms 471.00ms 429.00ms 425.00ms 424.00ms 419.00ms 471.00ms 448.00ms 470.00ms 437.00ms 467.00ms 472.00ms 342.00ms 343.00ms 332.00ms 424.00ms 465.00ms 471.00ms 465.00ms 469.00ms 473.00ms 470.00ms 470.00ms 424.00ms 466.00ms 438.00ms 442.00ms 471.00ms 448.00ms 467.00ms 416.00ms 390.00ms 392.00ms 424.00ms 425.00ms 423.00ms 140.00ms 424.00ms 381.00ms 462.00ms 473.00ms 423.00ms 359.00ms 423.00ms 378.00ms 469.00ms 475.00ms 425.00ms 390.00ms 465.00ms 469.00ms 419.00ms 442.00ms 452.00ms 423.00ms 470.00ms 473.00ms 469.00ms 424.00ms 423.00ms 423.00ms 423.00ms 400.00ms 462.00ms 467.00ms]
```


**After:**
```
[2026-01-02 16:48:48.61] DEBUG sync: Accepted data column sidecars summary count=31 indices=0-1,3-5,7,21,24,27,29,36-37,46,48,55,57,66,70,76,82,89,93-94,97,99-101,113,120,124,126 root=0x409a4eac4761a3199f60dec0dfe50b6eed91e29d6c3671bb61704401906d2b69 sinceStartTime=[min: 512.181127ms, avg: 541.358688ms, max: 557.074707ms] slot=2096594 validationTime=[min: 13.357515ms, avg: 55.1343ms, max: 73.909889ms]
```
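
For illustration, a minimal sketch of how such a summary could be derived; `compressIndices` and `summarize` are hypothetical helper names, not the actual Prysm code:
```
package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

// compressIndices renders a sorted list of column indices as ranges,
// e.g. [0 1 3 4 5 7] -> "0-1,3-5,7".
func compressIndices(indices []uint64) string {
	if len(indices) == 0 {
		return ""
	}
	sorted := append([]uint64(nil), indices...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })

	var parts []string
	start, prev := sorted[0], sorted[0]
	flush := func() {
		if start == prev {
			parts = append(parts, fmt.Sprintf("%d", start))
		} else {
			parts = append(parts, fmt.Sprintf("%d-%d", start, prev))
		}
	}
	for _, idx := range sorted[1:] {
		if idx == prev+1 {
			prev = idx
			continue
		}
		flush()
		start, prev = idx, idx
	}
	flush()
	return strings.Join(parts, ",")
}

// summarize reduces per-sidecar durations to the min/avg/max shown in the log.
func summarize(durations []time.Duration) (min, avg, max time.Duration) {
	if len(durations) == 0 {
		return
	}
	min, max = durations[0], durations[0]
	var sum time.Duration
	for _, d := range durations {
		sum += d
		if d < min {
			min = d
		}
		if d > max {
			max = d
		}
	}
	return min, sum / time.Duration(len(durations)), max
}

func main() {
	fmt.Println(compressIndices([]uint64{0, 1, 3, 4, 5, 7})) // 0-1,3-5,7
	fmt.Println(summarize([]time.Duration{13 * time.Millisecond, 74 * time.Millisecond}))
}
```
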

Distributions are still available on metrics:
<img width="792" height="309" alt="image"
src="https://github.com/user-attachments/assets/15128283-6740-4387-b205-41fb18205f54"
/>

<img width="799" height="322" alt="image"
src="https://github.com/user-attachments/assets/e0d602fa-db06-4cd3-8ec7-1ee2671c9921"
/>


**Which issues(s) does this PR fix?**

Fixes:
- https://github.com/OffchainLabs/prysm/issues/16208

**Other notes for review**

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-02 17:09:30 +00:00
Potuz
6f90101364 Use proposer lookahead for data column verification (#16202)
Replace the proposer indices cache usage in data column sidecar
verification with direct state lookahead access. Since data column
sidecars require the Fulu fork, the state always has a ProposerLookahead
field that provides O(1) proposer index lookups for current and next
epoch.

This simplifies SidecarProposerExpected() by removing:
- Checkpoint-based proposer cache lookup
- Singleflight wrapper (not needed for O(1) access)
- Target root computation for cache keys
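
A rough sketch of the O(1) access this relies on; `expectedProposer` and its parameters are illustrative, not the real `SidecarProposerExpected` code, and assume the lookahead vector is indexed from the current epoch's first slot:
```
package main

import "fmt"

// expectedProposer returns the proposer index for a slot covered by the
// state's proposer lookahead vector (current and next epoch), or false if the
// slot falls outside that window.
func expectedProposer(lookahead []uint64, currentEpochStart, slot uint64) (uint64, bool) {
	if slot < currentEpochStart {
		return 0, false
	}
	offset := slot - currentEpochStart
	if offset >= uint64(len(lookahead)) {
		return 0, false
	}
	return lookahead[offset], true // plain index lookup: no cache, no singleflight
}

func main() {
	lookahead := []uint64{11, 22, 33, 44} // toy vector covering four slots
	idx, ok := expectedProposer(lookahead, 100, 102)
	fmt.Println(idx, ok) // 33 true
}
```
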

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 17:01:53 +00:00
Manu NALEPA
49e1763ec2 Data columns cache warmup: Parallelize computation of all files for a given epoch. (#16207)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Before this PR, all `.sszs` files containing the data column sidecars
were read and processed sequentially, taking some time.
After this PR, all `.sszs` files of a given epoch (so, up to 32 files
with the current `SLOTS_PER_EPOCH` value) are processed in parallel.
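
A minimal sketch of the per-epoch parallelism described above, using errgroup; the function and file names are illustrative, not the actual Prysm code:
```
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// warmUpEpoch processes every .sszs file of one epoch in parallel instead of
// sequentially, returning the first error encountered (if any).
func warmUpEpoch(files []string, process func(string) error) error {
	var g errgroup.Group
	for _, f := range files {
		f := f // capture loop variable for the goroutine
		g.Go(func() error { return process(f) })
	}
	return g.Wait() // waits for all goroutines, returns the first non-nil error
}

func main() {
	files := []string{"slot_1.sszs", "slot_2.sszs"} // up to SLOTS_PER_EPOCH files
	err := warmUpEpoch(files, func(f string) error {
		fmt.Println("processing", f) // placeholder for read + cache warm-up
		return nil
	})
	fmt.Println("done, err =", err)
}
```
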

**Which issues(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16204

Tested on - [Netcup VPS 4000 G11](https://www.netcup.com/en/server/vps).
**Before this PR (3 trials)**:
```
[2026-01-02 08:55:12.71]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=1m22.894007534s
[2026-01-02 12:59:33.62]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=42.346732863s
[2026-01-02 13:03:13.65]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=56.143565960s
```

**After this PR (3 trials)**:
```
[2026-01-02 12:50:07.53]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.019424193s
[2026-01-02 12:52:01.34]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=1.960671225s
[2026-01-02 12:53:34.66]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.549555363s
```


**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-02 16:59:55 +00:00
Potuz
c2527c82cd Use a separate context when updating the NSC (#16209)
There is a race condition introduced in #16149 in which the update to
the NSC happens with a context that may be cancelled by the time the
routine is called. This PR starts a new context with a deadline to call
the routine in the background.
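
A minimal sketch of the approach, assuming a hypothetical `update` callback and an arbitrary 10-second deadline; this is not the actual Prysm code:
```
package main

import (
	"context"
	"log"
	"time"
)

// updateNSCInBackground runs the next-slot-cache update with a fresh
// deadline-bound context: the caller's context may already be cancelled by
// the time the goroutine runs, so it is not reused here.
func updateNSCInBackground(update func(context.Context) error) {
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := update(ctx); err != nil {
			log.Printf("could not update next slot cache: %v", err)
		}
	}()
}

func main() {
	updateNSCInBackground(func(ctx context.Context) error { return nil })
	time.Sleep(100 * time.Millisecond) // let the background update run in this toy example
}
```
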

fixes #16205
2026-01-02 16:43:34 +00:00
Potuz
d4ea8fafd6 Call FCU in the background (#16149)
This PR introduces several simplifications to block processing.

It notifies the engine in the background when forkchoice needs to be
updated.

It no longer updates the caches and process epoch transition before
computing payload attributes, since this is no longer needed after Fulu.

It removes a complicated second call to FCU with the same head after
processing the last slot of the epoch.

Some checks for reviewers:

- the single caller of sendFCU held a lock to forkchoice. Since the call
is now in the background, this helper can acquire the lock itself.
- All paths to handleEpochBoundary are now **NOT** locked. This allows
the lock to get the target root to be taken locally in place.
- The checkpoint cache is completely useless and thus the target root
call could be removed. But removing the proposer ID cache is more
complicated and out of scope for this PR.
- lateBlockTasks has pre- and post-Fulu casing; we could remove the
pre-Fulu checks and defer to the update function if deemed cleaner.
- Conversely, postBlockProcess does not have this casing, so pre-Fulu
blocks received on gossip may fail to get proposed on correctly because
the proposer is not correctly computed.
2025-12-30 21:01:34 +00:00
kasey
07d1d6bdf9 Fix validation bug in --backfill-oldest-slot (#16173)
**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

Validation of `--backfill-oldest-slot` fails for values > 1056767,
because the validation code is comparing the slot/32 to
`MIN_EPOCHS_FOR_BLOCK_REQUESTS` (33024), instead of comparing it to
`current_epoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS`.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-12-29 20:35:46 +00:00
Potuz
f938da99d9 Use head to validate atts for previous epoch (#16109)
Use the head state to validate an attestation when its target checkpoint
is for the previous epoch and the head state has the same dependent root
at that epoch. This guarantees that both the seed and the active
validator indices are the same at the checkpoint's epoch, from the point
of view of the attester (even on a different branch) and of the head
view.
2025-12-29 20:07:21 +00:00
Potuz
9deec69cc7 Do not verify block signature on block processing (#14820)
Verifying the block signature adds a batch and performs a full hash of
the block unnecessarily.
2025-12-29 19:52:38 +00:00
Potuz
2767f08f4d Do not send FCU on block batches (#16199)
On block batches the engine does not need to be notified of FCU; it is
only useful to notify the engine during regular sync, at the end of sync.
2025-12-29 11:39:12 +00:00
Radosław Kapka
d46c620783 Extend httperror analyzer to more functions (#16186)
**What type of PR is this?**

Tooling

**What does this PR do? Why is it needed?**

Renames `httperror` analyzer to `httpwriter` and extends it to the
following functions:
- `WriteError`
- `WriteJson`
- `WriteSsz`

_**NOTE: The PR is currently red because the fix in
https://github.com/OffchainLabs/prysm/pull/16175 must be merged first**_

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-23 16:53:01 +00:00
sashass1315
dd05e44ef3 fix: avoid panic when fork schedule is empty (#16175)
SortedForkSchedule should never be empty for a properly initialized
network schedule, but the handler already had a branch to support an
empty result. Without an early return, we wrote a JSON response and then
still accessed schedule[0], which could panic and double-write the HTTP
response in misconfigured setups.

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-23 15:46:21 +00:00
satushh
9da36a5de6 Use HasPendingBalanceToWithdraw instead of PendingBalanceToWithdraw in ProcessConsolidationRequests (#16189)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Performance

**What does this PR do? Why is it needed?**

`PendingBalanceToWithdraw` was used to find `bal` only to check later
whether `bal` is greater than 0. There is no need to calculate the full
balance; we can just check whether `bal` is greater than 0 using the
existing function `HasPendingBalanceToWithdraw`. This should help reduce
some unnecessary computation.

`HasPendingBalanceToWithdraw` returns immediately on the first match of a
non-zero entry, while `PendingBalanceToWithdraw` always iterates through
all entries to compute the sum.
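
For illustration, a simplified sketch of the difference; the types and helpers below are stand-ins, not Prysm's actual state accessors:
```
package main

import "fmt"

type pendingWithdrawal struct {
	ValidatorIndex uint64
	Amount         uint64
}

// pendingBalanceToWithdraw always walks every entry to compute the sum.
func pendingBalanceToWithdraw(ws []pendingWithdrawal, idx uint64) uint64 {
	var sum uint64
	for _, w := range ws {
		if w.ValidatorIndex == idx {
			sum += w.Amount
		}
	}
	return sum
}

// hasPendingBalanceToWithdraw returns as soon as it sees a non-zero pending
// withdrawal for the validator, skipping the rest of the loop.
func hasPendingBalanceToWithdraw(ws []pendingWithdrawal, idx uint64) bool {
	for _, w := range ws {
		if w.ValidatorIndex == idx && w.Amount > 0 {
			return true // early return on first match
		}
	}
	return false
}

func main() {
	ws := []pendingWithdrawal{{ValidatorIndex: 7, Amount: 1_000_000}}
	fmt.Println(pendingBalanceToWithdraw(ws, 7), hasPendingBalanceToWithdraw(ws, 7)) // 1000000 true
}
```
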

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-22 18:16:14 +00:00
terence
7950a24926 feat(primitives): add BuilderIndex SSZ type (#16169)
This pr adds primitives.BuilderIndex for builder registry indexing in
Gloas
2025-12-20 04:29:42 +00:00
Potuz
ea51253be9 Do not process slots and copy state for payload attributes post Fulu (#16168)
When computing payload attributes post-Fulu, we do not need to process
slots, nor copy the state if we need to find out if the node is
proposing in the next slot. This prevents an immediate epoch processing
after block 31 is processed unless we are actually proposing.
2025-12-19 22:03:52 +00:00
Manu NALEPA
2ac30f5ce6 Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them. (#16153)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
When a (potentially aggregated) attestation is received **before** the
block being voted for, Prysm queues this attestation, then processes the
queue when the block has been received.

This behavior is consistent with the [Phase0 specification
](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#beacon_attestation_subnet_id).

> [IGNORE] The block being voted for
(attestation.data.beacon_block_root) has been seen (via gossip or
non-gossip sources) (a client MAY queue attestations for processing once
block is retrieved).

Once the block being voted for is processed, previously queued
(potentially aggregated) attestations are then processed, and
broadcasted.

Processing (potentially aggregated) attestations takes some
non-negligible time. For this reason, (potentially aggregated)
attestations are deduplicated before being introduced into the pending
queue, to avoid eventually processing duplicates.

Before this PR, two aggregated attestations were considered duplicates
if all of the following conditions were met:
1. Attestations have the same version, 
2. **Attestations have the same aggregator index (aka., the same
validator aggregated them)**,
3. Attestations have the same slot, 
4. Attestations have the same committee index, and
5. Attestations have the same aggregation bits

Aggregated attestations are then broadcasted.
The final purpose of aggregated attestations is to be packed into the
next block by the next proposer.
When packing attestations, the aggregator index is not used any more.

This pull request modifies the deduplication function used in the
pending aggregated attestations queue by considering that multiple
aggregated attestations only differing by the aggregator index are
equivalent (removing `2.` of the previous list.)

As a consequence, the count of aggregated attestations to be introduced
into the pending queue is reduced from one aggregated attestation per
aggregator to, in the best case,
[MAX_COMMITTEES_PER_SLOT=64](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md#misc-1).

Also, only a single aggregated attestation for a given version, slot,
committee index and aggregation bits will be re-broadcast. This is
correct behavior, since no data to be included in a block will be lost.
(We can even say that this will reduce the total networking volume a
bit.)
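
A hypothetical sketch of the new deduplication key (not the actual Prysm types): the aggregator index is simply no longer part of the key, so two aggregates differing only by aggregator collapse into a single pending-queue entry:
```
package main

import "fmt"

type pendingAggregateKey struct {
	Version         int
	Slot            uint64
	CommitteeIndex  uint64
	AggregationBits string // e.g. hex-encoded bitfield
	// AggregatorIndex is intentionally absent: it used to be part of the key.
}

func main() {
	seen := map[pendingAggregateKey]bool{}
	a := pendingAggregateKey{Version: 0, Slot: 100, CommitteeIndex: 3, AggregationBits: "0xff01"}
	b := a // same attestation, aggregated by a different validator
	seen[a] = true
	fmt.Println(seen[b]) // true: the second aggregate is treated as a duplicate
}
```
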

**How to test**:
1. Start a beacon node (preferably, on a slow computer) from a
checkpoint.
2. Filter logs containing `Synced new block` and `Verified and saved
pending attestations to pool`. (You can pipe logs into `grep -E "Synced
new block|Verified and saved pending attestations to pool"`.)

- In `Synced new block` logs, monitor the `sinceSlotStartTime` value.
This should monotonically decrease.
- In `Verified and saved pending attestations to pool`, monitor the
`pendingAggregateAttAndProofCount` value. It should be an "honest" value.
"Honest" is not really quantifiable here, since it depends on the
aggregators, but it's likely to be less than
`5*MAX_COMMITTEES_PER_SLOT=320`.

**Which issues(s) does this PR fix?**

Partially fixes:
- https://github.com/OffchainLabs/prysm/issues/16160

**Other notes for review**
Please read commit by commit, with commit messages.
The important commit is b748c04a67.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-19 14:05:50 +00:00
Manu NALEPA
7418c00ad6 validateDataColumn: Remove error logs. (#16157)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
When we receive data column sidecars via gossip, if the sidecar does not
respect the validation rules, a scary ERROR log is displayed. We can't
do anything about it, since the error comes from an invalid incoming
sidecar, so there is no need to print an ERROR message.

Note: As with all REJECTED gossip messages, a DEBUG log is also always
displayed.

Example of ERROR log:
```
[2025-12-18 15:38:26.46] ERROR sync: Failed to decode message error=invalid ssz encoding. first variable element offset indexes into fixed value data
[2025-12-18 15:38:26.46] DEBUG sync: Gossip message was rejected agent=erigon/caplin error=invalid ssz encoding. first variable element offset indexes into fixed value data gossipScore=0 multiaddress=/ip4/141.147.32.105/tcp/9000 peerID=16Uiu2HAmHu88k97iBist1vJg7cPNuTjJFRARKvDF7yaH3Pv3Vmso topic=/eth2/c6ecb76c/data_column_sidecar_30/ssz_snappy
```

(After this PR, the DEBUG one will still be printed.)

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-18 16:18:02 +00:00
james-prysm
66342655fd throw 503 error when submit attestation and sync committee are called on syncing node + align changes to gRPC (#16152)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

Prysm started throwing the error `Could not write response message"
error="write tcp 10.104.92.212:5052->10.104.92.196:41876: write: broken
pipe` because a validator got attestation data from a synced node and
submitted the attestation to a syncing node. When the syncing node
couldn't replay the state, the validator's context deadlined and it
disconnected, so when the writer finally responded it got this broken
pipe error.

applies to `/eth/v2/beacon/pool/attestations` and
`/eth/v1/beacon/pool/sync_committees`

The solution has two parts (sketched below):
1. We shouldn't allow submission of an attestation if the node is
syncing, because we can't save the attestation without the state
information.
2. In REST we were doing the expensive state call before broadcast; it
now matches gRPC, where it happens afterward in its own goroutine.
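
A minimal sketch of part 1, using plain net/http and a hypothetical sync checker in place of Prysm's actual handler and httputil helpers:
```
package main

import "net/http"

// syncChecker reports whether the beacon node is still syncing (hypothetical
// stand-in for Prysm's sync checker).
type syncChecker struct{ syncing bool }

func (s *syncChecker) Syncing() bool { return s.syncing }

var checker = &syncChecker{syncing: true}

// submitAttestations sketches the handler shape: reject submissions with 503
// while syncing, since the state needed to process them is unavailable.
func submitAttestations(w http.ResponseWriter, r *http.Request) {
	if checker.Syncing() {
		http.Error(w, "beacon node is currently syncing and not serving requests", http.StatusServiceUnavailable)
		return
	}
	// ...validate, save to the pool, and broadcast; the expensive state lookup
	// now happens after broadcast in its own goroutine, matching the gRPC path.
}

func main() {
	http.HandleFunc("/eth/v2/beacon/pool/attestations", submitAttestations)
	_ = http.ListenAndServe(":8080", nil)
}
```
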

Tested manually running kurtosis with rest validators

```
participants:
 # Super-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   count: 2
   supernode: true
   cl_extra_params:
     - --subscribe-all-subnets
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 # Full-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   validator_count: 63
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug
   validator_count: 13

additional_services:
 - dora
 - spamoor

spamoor_params:
 image: ethpandaops/spamoor:master
 max_mem: 4000
 spammers:
   - scenario: eoatx
     config:
       throughput: 200
   - scenario: blobs
     config:
       throughput: 20

network_params:
  fulu_fork_epoch: 2
  bpo_1_epoch: 8
  bpo_1_max_blobs: 21
  withdrawal_type: "0x02"
  preset: mainnet
  seconds_per_slot: 6

global_log_level: debug
```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-18 15:07:09 +00:00
Bastin
18eca953c1 Fix lightclient p2p bug (#16151)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
This PR fixes the LC p2p `fork version not recognized` bug. It adds
object mappings for the LC types for Fulu, and fixes tests to cover such
cases in the future.
2025-12-17 20:45:06 +00:00
Manu NALEPA
8191bb5711 Construct data column sidecars from the execution layer in parallel and add metrics (#16115)
**What type of PR is this?**
Optimisation

**What does this PR do? Why is it needed?**
While constructing data column sidecars from the execution layer is very
cheap compared to reconstructing data column sidecars from other data
column sidecars, it is still more efficient to run this construction in
parallel.

(**Reminder:** Using `getBlobsV2`, all the cell proofs are present, but
only 64 (out of 128) cells are present. Recomputing the missing cells is
cheap, while reconstructing the missing proofs is expensive.)

This PR:
- adds some metrics
- ensures the construction is done in parallel

**Other notes for review**
Please read commit by commit

The red vertical lines represent the limit between before and after this
pull request
<img width="1575" height="603" alt="image"
src="https://github.com/user-attachments/assets/24811b1b-8e3c-4bf5-ac82-f920d385573a"
/>

The last commit transforms the bottom-right histogram into a summary,
since it no longer makes sense to have a histogram for these values.

Please check "hide whitespace" so this PR is easier to review:
<img width="229" height="196" alt="image"
src="https://github.com/user-attachments/assets/548cb2f4-b6f4-41d1-b3b3-4d4c8554f390"
/>

Updated metrics:



Now, for every **non-missed slot**, for a block **with at least one
commitment**, we have either:
```
[2025-12-10 10:02:12.93] DEBUG sync: Constructed data column sidecars from the execution client count=118 indices=0-5,7-16,18-27,29-35,37-46,48-49,51-82,84-100,102-106,108-125,127 iteration=0 proposerIndex=855082 root=0xf8f44e7d4cbc209b2ff2796c07fcf91e85ab45eebe145c4372017a18b25bf290 slot=1928961 type=BeaconBlock
```

or
```
[2025-12-10 10:02:25.69] DEBUG sync: No data column sidecars constructed from the execution client iteration=2 proposerIndex=1093657 root=0x64c2f6c31e369cd45f2edaf5524b64f4869e8148cd29fb84b5b8866be529eea3 slot=1928962 type=DataColumnSidecar
```
<img width="1581" height="957" alt="image"
src="https://github.com/user-attachments/assets/514dbdae-ef14-47e2-9127-502ac6d26bc0"
/>
<img width="1596" height="916" alt="image"
src="https://github.com/user-attachments/assets/343d4710-4191-49e8-98be-afe70d5ffe1c"
/>



**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-16 16:27:32 +00:00
james-prysm
d4613aee0c skipping slot 1 sync committee check e2e (#16145)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Tests

**What does this PR do? Why is it needed?**

```

--- PASS: TestEndToEnd_MinimalConfig/chain_started (0.50s)
--
--- PASS: TestEndToEnd_MinimalConfig/finished_syncing_0 (0.00s)
--- PASS: TestEndToEnd_MinimalConfig/all_nodes_have_same_head_0 (0.00s)
--- PASS: TestEndToEnd_MinimalConfig/validators_active_epoch_0 (0.00s)
--- FAIL: TestEndToEnd_MinimalConfig/validator_sync_participation_0 (0.01s)
--- PASS: TestEndToEnd_MinimalConfig/peers_connect_epoch_0 (0.11s)


```
This PR attempts to reduce flakes on validator sync participation
failures by skipping the sync committee check for the first slot after
startup.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-15 20:00:34 +00:00
terence
9fcc1a7a77 Guard KZG send with context cancellation (#16144)
Avoid sending KZG verification requests when the caller context is
already canceled, to prevent blocking on the channel.
2025-12-15 16:58:51 +00:00
Potuz
75dea214ac Do not error when indices have been computed (#16142)
If there is a context deadline updating the committee cache, but the
indices have been computed correctly, do not error out but rather return
the indices and log the error.
2025-12-13 17:36:06 +00:00
james-prysm
4374e709cb fixing state replay caused by REST api duties attester and sync committee endpoints (#16136)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

 Bug fix


**What does this PR do? Why is it needed?**

`s.Stater.StateBySlot` may replay states if the requested slot is in the
current epoch, since it is meant for values already in the db. If we are
in the current epoch, we should instead try to get the head slot and use
the cache. Proposer duties was already doing this, but the other two
duties endpoints were not. This PR aligns all three and introduces a new
`statebyepoch` helper that just wraps the approach.

I tested by running this kurtosis config with and without the fix to see
that the replays stop, the blockchain progresses, and the "upgraded to
fulu" log is not printed multiple times.

```
participants:
 # Super-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   count: 2
   supernode: true
   cl_extra_params:
     - --subscribe-all-subnets
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 # Full-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   validator_count: 63
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug
   validator_count: 13

additional_services:
 - dora
 - spamoor

spamoor_params:
 image: ethpandaops/spamoor:master
 max_mem: 4000
 spammers:
   - scenario: eoatx
     config:
       throughput: 200
   - scenario: blobs
     config:
       throughput: 20

network_params:
  fulu_fork_epoch: 2
  bpo_1_epoch: 8
  bpo_1_max_blobs: 21
  withdrawal_type: "0x02"
  preset: mainnet
  seconds_per_slot: 6

global_log_level: debug
```

**Which issues(s) does this PR fix?**

Fixes # https://github.com/OffchainLabs/prysm/issues/16135

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-12 23:18:22 +00:00
Radosław Kapka
be300f80bd Static analyzer for httputil.HandleError calls (#16134)
**What type of PR is this?**

Tooling

**What does the PR do?**

Every call to `httputil.HandleError` must be followed by a `return`
statement. It's easy to miss this during reviews, so having a static
analyzer that enforces this will make our life easier.
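
For illustration, the pattern the analyzer enforces, sketched with plain net/http; `writeError` stands in here for `httputil.HandleError`, and the handler is hypothetical:
```
package main

import "net/http"

// writeError stands in for Prysm's httputil.HandleError in this sketch.
func writeError(w http.ResponseWriter, msg string, code int) {
	http.Error(w, msg, code)
}

func load(r *http.Request) ([]byte, error) { return []byte("ok"), nil }

// The rule the analyzer enforces: an error write must be immediately followed
// by a return, otherwise the handler keeps executing and can write a second,
// conflicting response.
func handler(w http.ResponseWriter, r *http.Request) {
	data, err := load(r)
	if err != nil {
		writeError(w, "could not load data: "+err.Error(), http.StatusInternalServerError)
		return // the analyzer flags handlers that forget this return
	}
	_, _ = w.Write(data)
}

func main() {
	http.HandleFunc("/", handler)
	_ = http.ListenAndServe(":8080", nil)
}
```
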

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-12-12 21:38:09 +00:00
terence
096cba5b2d sync: fix KZG batch verifier deadlock on timeout (#16141)
`validateWithKzgBatchVerifier` could time out (12s). Once it times out,
because `resChan` is unbuffered, the verifier gets stuck at the following
line in `verifyKzgBatch`, waiting for someone to grab the result from
`resChan`:
```
	for _, verifier := range kzgBatch {
		verifier.resChan <- verificationErr
	}
```
The fix is to make KZG batch verification non-blocking on timeouts by
giving each request's result channel a buffer of size 1.
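
A minimal sketch of that fix: the only behavioral change is the buffer of size 1 on each request's result channel, so the send above can no longer deadlock; the surrounding types are illustrative, not the actual Prysm code:
```
package main

import (
	"errors"
	"fmt"
)

// kzgVerifier mirrors the shape implied by the snippet above.
type kzgVerifier struct {
	resChan chan error
}

func newKzgVerifier() *kzgVerifier {
	// Buffer of one: the batch verifier's send below can never block, even if
	// the waiter already timed out and walked away.
	return &kzgVerifier{resChan: make(chan error, 1)}
}

func deliver(kzgBatch []*kzgVerifier, verificationErr error) {
	for _, verifier := range kzgBatch {
		verifier.resChan <- verificationErr // completes even with no receiver
	}
}

func main() {
	batch := []*kzgVerifier{newKzgVerifier(), newKzgVerifier()}
	deliver(batch, errors.New("bad proof"))
	fmt.Println(<-batch[0].resChan) // a waiter that is still around gets the result
}
```
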
2025-12-12 17:17:40 +00:00
SashaMalysehko
d5127233e4 fix: missing return after version header check (#16126)
Ensure SubmitAttesterSlashingsV2 returns immediately when the
Eth-Consensus-Version header is missing. Without this early return the
handler calls version.FromString with an empty value and writes a second
JSON error to the response, producing invalid JSON and duplicating error
output. This change aligns the handler with the error-handling pattern
used in other endpoints that validate the version header.
2025-12-12 17:09:35 +00:00
Radosław Kapka
3d35cc20ec Use WriteStateFetchError in API handlers whenever possible (#16140)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Calls to `Stater.StateBySlot` and `Stater.State` should be followed by
`shared.WriteStateFetchError` to provide the most robust error handling.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-12 16:26:27 +00:00
Aarsh Shah
1e658530a7 revert https://github.com/OffchainLabs/prysm/pull/16100 (#16139)
This PR reverts https://github.com/OffchainLabs/prysm/pull/16100.

**What type of PR is this?**
Bug fix


**What does this PR do? Why is it needed?**
This PR reverts https://github.com/OffchainLabs/prysm/pull/16100, as
that PR deprecates mplex but other implementations only support mplex
for now.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-12 14:59:32 +00:00
Preston Van Loon
b360794c9c Update CHANGELOG.md for v7.1.0 release (#16127)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

Changelog

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-11 22:48:33 +00:00
Aarsh Shah
0fc9ab925a feat: add support for detecting and logging per address reachability via libp2p AutoNAT v2 (#16100)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**

This PR adds support for detecting and logging per address reachability
via libp2p AutoNAT v2. See
https://github.com/libp2p/go-libp2p/releases/tag/v0.42.0 for details.
This PR also upgrades Prysm to libp2p v0.42.0

**Which issues(s) does this PR fix?**

Fixes #https://github.com/OffchainLabs/prysm/issues/16098

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-11 11:56:52 +00:00
satushh
dda5ee3334 Graffiti proposal design doc (#15983)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Design Doc

**What does this PR do? Why is it needed?**

This PR adds a design doc for adding graffiti. The idea is to have it
populated judiciously so that we can get proper information about the
EL, CL and their corresponding version info. At the same time being
flexible enough with the user input.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-10 22:57:40 +00:00
Manu NALEPA
14c67376c3 Add test requirement to PULL_REQUEST_TEMPLATE.md (#16123)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
This pull request modifies the `PULL_REQUEST_TEMPLATE.md` to ensure the
developer checked that their PR works as expected.

Some contributors push some changes, without even running the modified
client once to see if their changes work as expected.

Avoidable back-and-forth trips between the contributor and the reviewers
could be prevented thanks to running the modified client.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-12-10 17:40:29 +00:00
Preston Van Loon
9c8b68a66d Update CHANGELOG.md for v7.0.1 release (#16107)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

**Other notes for review**

Did not delete the fragments as they are still needed to generate v7.1.0
release notes. This release is all cherry-picks which would be included
in v7.1.0

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-10 17:07:38 +00:00
Potuz
a3210157e2 Fix TOCTOU race validating attestations (#16105)
A TOCTOU issue was reported by EF security in which two attestations
being validated at the same time may result in both of them being
forwarded. The spec says that we need to forward only the first one.
2025-12-09 19:26:05 +00:00
satushh
1536d59e30 Remove unnecessary copy in Eth1DataHasEnoughSupport (#16118)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

- Remove unnecessary `Copy()` call in `Eth1DataHasEnoughSupport`
- `data.Copy()` was called on every iteration of the vote counting loop,
even though `AreEth1DataEqual` only reads the data and never mutates it.
- Additionally, `Eth1DataVotes()` already returns copies of all votes,
so state is protected regardless.
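
For illustration, a simplified sketch of the loop in question; `Eth1Data` and the helpers here are stand-ins for the real types, not Prysm's actual code:
```
package main

import (
	"bytes"
	"fmt"
)

type Eth1Data struct {
	DepositRoot  []byte
	DepositCount uint64
	BlockHash    []byte
}

func areEth1DataEqual(a, b *Eth1Data) bool {
	return a.DepositCount == b.DepositCount &&
		bytes.Equal(a.DepositRoot, b.DepositRoot) &&
		bytes.Equal(a.BlockHash, b.BlockHash)
}

// countSupport only reads each vote, so copying every vote on each iteration
// (the removed Copy() call) was wasted work.
func countSupport(votes []*Eth1Data, candidate *Eth1Data) int {
	n := 0
	for _, v := range votes {
		if areEth1DataEqual(v, candidate) {
			n++
		}
	}
	return n
}

func main() {
	d := &Eth1Data{DepositCount: 1, DepositRoot: []byte{1}, BlockHash: []byte{2}}
	fmt.Println(countSupport([]*Eth1Data{d, d}, d)) // 2
}
```
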

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-09 19:02:36 +00:00
satushh
11e46a4560 Optimise for loop of MigrateToCold (#16101)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

The for loop in the MigrateToCold function was brute force in nature. It
could be improved by directly jumping by `slotsPerArchivedPoint` rather
than going over every single slot.

```
for slot := oldFSlot; slot < fSlot; slot++ {
  ...
   if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
```
No need to do the modulo for every single slot.
We could just find the correct starting point and jump by
slotsPerArchivedPoint at a time.
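
A sketch of that optimization (illustrative, not the actual Prysm code): start at the first archived point at or after `oldFSlot` and step by `slotsPerArchivedPoint`:
```
package main

import "fmt"

// archivedPoints returns the archived-point slots in [oldFSlot, fSlot),
// excluding slot 0, without testing the modulo on every slot.
func archivedPoints(oldFSlot, fSlot, slotsPerArchivedPoint uint64) []uint64 {
	var points []uint64
	start := oldFSlot
	if rem := start % slotsPerArchivedPoint; rem != 0 {
		start += slotsPerArchivedPoint - rem // round up to the next archived point
	}
	if start == 0 {
		start = slotsPerArchivedPoint // slot 0 is excluded, as in the original loop
	}
	for slot := start; slot < fSlot; slot += slotsPerArchivedPoint {
		points = append(points, slot)
	}
	return points
}

func main() {
	fmt.Println(archivedPoints(5, 600, 256)) // [256 512]
}
```
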

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-12-09 17:15:52 +00:00
Snezhkko
5a2e51b894 fix(rpc): incorrect constructor return type (#16084)
The constructor `NewStateRootNotFoundError` incorrectly returned
`StateNotFoundError`. This prevented handlers that rely on `errors.As`
with a `*lookup.StateRootNotFoundError` target from matching and mapping
the error to HTTP 404. The function now returns `StateRootNotFoundError`
and constructs that type, restoring the intended behavior for "state
root not found" cases.
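
A self-contained sketch of the bug and the fix; the types here are simplified stand-ins for the `lookup` package's actual errors:
```
package main

import (
	"errors"
	"fmt"
)

// StateRootNotFoundError is the error type handlers match against.
type StateRootNotFoundError struct{ msg string }

func (e *StateRootNotFoundError) Error() string { return e.msg }

// Before (sketch of the bug): the constructor returned a different type, e.g.
//   func NewStateRootNotFoundError(msg string) *StateNotFoundError { ... }
// so errors.As with a *StateRootNotFoundError target never matched.

// After: the constructor builds and returns the matching type.
func NewStateRootNotFoundError(msg string) *StateRootNotFoundError {
	return &StateRootNotFoundError{msg: msg}
}

func isStateRootNotFound(err error) bool {
	var target *StateRootNotFoundError
	return errors.As(err, &target) // now matches, so handlers can map it to HTTP 404
}

func main() {
	err := NewStateRootNotFoundError("state root 0xabc not found")
	fmt.Println(isStateRootNotFound(err)) // true
}
```
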

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-09 13:56:00 +00:00
366 changed files with 5212 additions and 2692 deletions


@@ -34,4 +34,5 @@ Fixes #
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).

23
.github/workflows/check-logs.yml vendored Normal file

@@ -0,0 +1,23 @@
name: Check log.go files
on: [ pull_request ]
jobs:
check-logs:
runs-on: ubuntu-4
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go 1.25.1
uses: actions/setup-go@v5
with:
go-version: '1.25.1'
- name: Install ripgrep
run: sudo apt-get install -y ripgrep
- name: Check log.go files
run: ./hack/check-logs.sh


@@ -193,6 +193,7 @@ nogo(
"//tools/analyzers/featureconfig:go_default_library",
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/httpwriter:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",


@@ -4,6 +4,91 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
Release highlights:
- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration to custody enough data columns to reconstruct blobs. Use flag `--semi-supernode` to custody at least 50% of the data columns.
- Critical fixes in attestation processing.
A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
### Added
- add fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)
### Changed
- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Change the logging behaviour of Updated fee recipients to only log count of validators at Debug level and all validator indices at Trace level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Removed
- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
### Fixed
- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check the JWT secret length is exactly 256 bits (32 bytes) as per Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This improves metrics cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- fixes E2E tests to be able to start from Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08
This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.
A post mortem doc with full details is expected to be published later this week.
### Changed
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Fixed
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10
This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or later release prior to the fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.


@@ -5,6 +5,7 @@ go_library(
srcs = [
"common.go",
"header.go",
"log.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/apiutil",
visibility = ["//visibility:public"],


@@ -5,8 +5,6 @@ import (
"sort"
"strconv"
"strings"
log "github.com/sirupsen/logrus"
)
type mediaRange struct {

api/apiutil/log.go (new file)

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package apiutil
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/apiutil")
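As an aside, a minimal sketch of how a path-to-prefix replacement lookup of the kind referenced in the generated comment could work; the map entry and the `displayPrefix` helper below are illustrative assumptions, not the actual contents of prefix-replacement.go.

```go
package prefixed

import "strings"

// prefixReplacement maps full package paths to the prefix that should be
// displayed instead of the path's last segment. The entry below is an
// illustrative example only.
var prefixReplacement = map[string]string{
	"beacon-chain/db/kv": "db",
}

// displayPrefix resolves the prefix shown for a log entry's "package" field:
// a configured replacement wins, otherwise the text after the last slash.
func displayPrefix(pkgPath string) string {
	if p, ok := prefixReplacement[pkgPath]; ok {
		return p
	}
	if i := strings.LastIndex(pkgPath, "/"); i >= 0 {
		return pkgPath[i+1:]
	}
	return pkgPath
}
```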


@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package beacon
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "beacon")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/client/beacon")


@@ -6,6 +6,7 @@ go_library(
"bid.go",
"client.go",
"errors.go",
"log.go",
"types.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/client/builder",
@@ -63,6 +64,5 @@ go_test(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -25,7 +25,7 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
@@ -70,7 +70,7 @@ type requestLogger struct{}
func (*requestLogger) observe(r *http.Request) (e error) {
b := bytes.NewBuffer(nil)
if r.Body == nil {
log.WithFields(log.Fields{
log.WithFields(logrus.Fields{
"bodyBase64": "(nil value)",
"url": r.URL.String(),
}).Info("Builder http request")
@@ -87,7 +87,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
return err
}
r.Body = io.NopCloser(b)
log.WithFields(log.Fields{
log.WithFields(logrus.Fields{
"bodyBase64": string(body),
"url": r.URL.String(),
}).Info("Builder http request")


@@ -23,7 +23,6 @@ import (
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
log "github.com/sirupsen/logrus"
)
type roundtrip func(*http.Request) (*http.Response, error)


@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package builder
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/client/builder")


@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"event_stream.go",
"log.go",
"utils.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/client/event",
@@ -23,8 +24,5 @@ go_test(
"utils_test.go",
],
embed = [":go_default_library"],
deps = [
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
deps = ["//testing/require:go_default_library"],
)


@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/client"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
const (


@@ -8,7 +8,6 @@ import (
"time"
"github.com/OffchainLabs/prysm/v7/testing/require"
log "github.com/sirupsen/logrus"
)
func TestNewEventStream(t *testing.T) {

api/client/event/log.go (new file)

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package event
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/client/event")


@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"grpcutils.go",
"log.go",
"parameters.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/grpc",


@@ -32,7 +32,7 @@ func LogRequests(
)
start := time.Now()
err := invoker(ctx, method, req, reply, cc, opts...)
logrus.WithField("backend", header["x-backend"]).
log.WithField("backend", header["x-backend"]).
WithField("method", method).WithField("duration", time.Since(start)).
Debug("gRPC request finished.")
return err
@@ -58,7 +58,7 @@ func LogStream(
grpc.Header(&header),
)
strm, err := streamer(ctx, sd, conn, method, opts...)
logrus.WithField("backend", header["x-backend"]).
log.WithField("backend", header["x-backend"]).
WithField("method", method).
Debug("gRPC stream started.")
return strm, err
@@ -71,7 +71,7 @@ func AppendHeaders(parent context.Context, headers []string) context.Context {
if h != "" {
keyValue := strings.Split(h, "=")
if len(keyValue) < 2 {
logrus.Warnf("Incorrect gRPC header flag format. Skipping %v", keyValue[0])
log.Warnf("Incorrect gRPC header flag format. Skipping %v", keyValue[0])
continue
}
parent = metadata.AppendToOutgoingContext(parent, keyValue[0], strings.Join(keyValue[1:], "=")) // nolint:fatcontext

api/grpc/log.go (new file)

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package grpc
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/grpc")


@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package httprest
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "httprest")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/server/httprest")


@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"middleware.go",
"util.go",
],
@@ -27,6 +28,5 @@ go_test(
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package middleware
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/server/middleware")


@@ -9,7 +9,6 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/apiutil"
"github.com/rs/cors"
log "github.com/sirupsen/logrus"
)
type Middleware func(http.Handler) http.Handler


@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/testing/require"
log "github.com/sirupsen/logrus"
)
// frozenHeaderRecorder allows asserting that response headers were not modified


@@ -5,6 +5,7 @@ go_library(
srcs = [
"debounce.go",
"every.go",
"log.go",
"multilock.go",
"scatter.go",
],


@@ -6,8 +6,6 @@ import (
"reflect"
"runtime"
"time"
log "github.com/sirupsen/logrus"
)
// RunEvery runs the provided command periodically.

async/log.go (new file)

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package async
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "async")


@@ -14,6 +14,7 @@ go_library(
"head_sync_committee_info.go",
"init_sync_process_block.go",
"log.go",
"log_helpers.go",
"merge_ascii_art.go",
"metrics.go",
"options.go",


@@ -323,14 +323,17 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
var ok bool
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(st.Slot())
if e == stateEpoch {
fuluAndNextEpoch := st.Version() >= version.Fulu && e == stateEpoch+1
if e == stateEpoch || fuluAndNextEpoch {
val, ok = s.trackedProposer(st, slot)
if !ok {
return emptyAttri
}
}
st = st.Copy()
if slot > st.Slot() {
// At this point either we know we are proposing on a future slot or we need to still compute the
// right proposer index pre-Fulu, either way we need to copy the state to process it.
st = st.Copy()
var err error
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, slot)
if err != nil {
@@ -338,7 +341,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
return emptyAttri
}
}
if e > stateEpoch {
if e > stateEpoch && !fuluAndNextEpoch {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
val, ok = s.trackedProposer(st, slot)
if !ok {


@@ -1053,40 +1053,3 @@ func TestKZGCommitmentToVersionedHashes(t *testing.T) {
require.Equal(t, vhs[0].String(), vh0)
require.Equal(t, vhs[1].String(), vh1)
}
func TestComputePayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
blk := util.NewBeaconBlockBellatrix()
signed, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(signed, [32]byte{'a'})
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
}
fcu := &fcuConfig{
headState: st,
proposingSlot: slot,
headRoot: [32]byte{},
}
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()).String())
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()))
}


@@ -12,6 +12,7 @@ import (
payloadattribute "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -53,58 +54,53 @@ type fcuConfig struct {
}
// sendFCU handles the logic to notify the engine of a forkchoice update
// for the first time when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions when the
// incoming block is late, preparing payload attributes in this case while it
// only sends a message with empty attributes for early blocks.
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if !s.isNewHead(cfg.headRoot) {
return nil
// when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions.
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
if cfg.postState.Version() < version.Fulu {
// update the caches to compute the right proposer index
// this function is called under a forkchoice lock which we need to release.
s.ForkChoicer().Unlock()
s.updateCachesPostBlockProcessing(cfg)
s.ForkChoicer().Lock()
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return
}
// If head has not been updated and attributes are nil, we can skip the FCU.
if !s.isNewHead(cfg.headRoot) && (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) {
return
}
// If we are proposing and we aim to reorg the block, we have already sent FCU with attributes on lateBlockTasks
if fcuArgs.attributes != nil && !fcuArgs.attributes.IsEmpty() && s.shouldOverrideFCU(cfg.headRoot, s.CurrentSlot()+1) {
return nil
return
}
if s.inRegularSync() {
go s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
return s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
// sendFCUWithAttributes computes the payload attributes and sends an FCU message
// to the engine if needed
func (s *Service) sendFCUWithAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
cfg.ctx = slotCtx
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if err := s.computePayloadAttributes(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not compute payload attributes")
return
}
if fcuArgs.attributes.IsEmpty() {
return
}
if _, err := s.notifyForkchoiceUpdate(cfg.ctx, fcuArgs); err != nil {
log.WithError(err).Error("Could not update forkchoice with payload attributes for proposal")
if s.isNewHead(fcuArgs.headRoot) {
if err := s.saveHead(cfg.ctx, fcuArgs.headRoot, fcuArgs.headBlock, fcuArgs.headState); err != nil {
log.WithError(err).Error("Could not save head")
}
s.pruneAttsFromPool(s.ctx, fcuArgs.headState, fcuArgs.headBlock)
}
}
// forkchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) error {
// forkchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It gets a forkchoice lock and calls the engine.
// The caller of this function should NOT have a lock in forkchoice store.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) {
_, span := trace.StartSpan(ctx, "beacon-chain.blockchain.forkchoiceUpdateWithExecution")
defer span.End()
// Note: Use the service context here to avoid the parent context being ended during a forkchoice update.
ctx = trace.NewContext(s.ctx, span)
s.ForkChoicer().Lock()
defer s.ForkChoicer().Unlock()
_, err := s.notifyForkchoiceUpdate(ctx, args)
if err != nil {
return errors.Wrap(err, "could not notify forkchoice update")
log.WithError(err).Error("Could not notify forkchoice update")
}
if err := s.saveHead(ctx, args.headRoot, args.headBlock, args.headState); err != nil {
log.WithError(err).Error("Could not save head")
}
// Only need to prune attestations from pool if the head has changed.
s.pruneAttsFromPool(s.ctx, args.headState, args.headBlock)
return nil
}
// shouldOverrideFCU checks whether the incoming block is still subject to being


@@ -97,7 +97,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
headBlock: wsb,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
service.forkchoiceUpdateWithExecution(ctx, args)
payloadID, has := service.cfg.PayloadIDCache.PayloadID(2, [32]byte{2})
require.Equal(t, true, has)
@@ -151,7 +151,7 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
headRoot: r,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
service.forkchoiceUpdateWithExecution(ctx, args)
}
func TestShouldOverrideFCU(t *testing.T) {


@@ -1,164 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package blockchain
import (
"encoding/hex"
"fmt"
"time"
import "github.com/sirupsen/logrus"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/config/params"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
prysmTime "github.com/OffchainLabs/prysm/v7/time"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "blockchain")
// logs state transition related data every slot.
func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log := log.WithField("slot", b.Slot())
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
if len(b.Body().ProposerSlashings()) > 0 {
log = log.WithField("proposerSlashings", len(b.Body().ProposerSlashings()))
}
if len(b.Body().VoluntaryExits()) > 0 {
log = log.WithField("voluntaryExits", len(b.Body().VoluntaryExits()))
}
if b.Version() >= version.Altair {
agg, err := b.Body().SyncAggregate()
if err != nil {
return err
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
if b.Version() >= version.Electra {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
} else {
if len(eReqs.Deposits) > 0 {
log = log.WithField("depositRequestCount", len(eReqs.Deposits))
}
if len(eReqs.Consolidations) > 0 {
log = log.WithField("consolidationRequestCount", len(eReqs.Consolidations))
}
if len(eReqs.Withdrawals) > 0 {
log = log.WithField("withdrawalRequestCount", len(eReqs.Withdrawals))
}
}
}
log.Info("Finished applying state transition")
return nil
}
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
}
level := log.Logger.GetLevel()
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
}
log.WithFields(lf).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
}
return nil
}
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
return nil
}
payload, err := block.Body().Execution()
if err != nil {
return err
}
if payload.GasLimit() == 0 {
return errors.New("gas limit should not be 0")
}
gasUtilized := float64(payload.GasUsed()) / float64(payload.GasLimit())
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber(),
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}
if block.Version() >= version.Capella {
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
fields["withdrawals"] = len(withdrawals)
changes, err := block.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
}
log.WithFields(fields).Debug("Synced new payload")
return nil
}
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/blockchain")


@@ -0,0 +1,162 @@
package blockchain
import (
"encoding/hex"
"fmt"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/config/params"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
prysmTime "github.com/OffchainLabs/prysm/v7/time"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// logs state transition related data every slot.
func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log := log.WithField("slot", b.Slot())
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
if len(b.Body().ProposerSlashings()) > 0 {
log = log.WithField("proposerSlashings", len(b.Body().ProposerSlashings()))
}
if len(b.Body().VoluntaryExits()) > 0 {
log = log.WithField("voluntaryExits", len(b.Body().VoluntaryExits()))
}
if b.Version() >= version.Altair {
agg, err := b.Body().SyncAggregate()
if err != nil {
return err
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
if b.Version() >= version.Electra {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
} else {
if len(eReqs.Deposits) > 0 {
log = log.WithField("depositRequestCount", len(eReqs.Deposits))
}
if len(eReqs.Consolidations) > 0 {
log = log.WithField("consolidationRequestCount", len(eReqs.Consolidations))
}
if len(eReqs.Withdrawals) > 0 {
log = log.WithField("withdrawalRequestCount", len(eReqs.Withdrawals))
}
}
}
log.Info("Finished applying state transition")
return nil
}
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
}
level := log.Logger.GetLevel()
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
}
log.WithFields(lf).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
}
return nil
}
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
return nil
}
payload, err := block.Body().Execution()
if err != nil {
return err
}
if payload.GasLimit() == 0 {
return errors.New("gas limit should not be 0")
}
gasUtilized := float64(payload.GasUsed()) / float64(payload.GasLimit())
fields := logrus.Fields{
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
"blockNumber": payload.BlockNumber(),
"gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
}
if block.Version() >= version.Capella {
withdrawals, err := payload.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
fields["withdrawals"] = len(withdrawals)
changes, err := block.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
}
log.WithFields(fields).Debug("Synced new payload")
return nil
}


@@ -34,7 +34,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" prefix=blockchain slot=0",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain slot=0",
},
{name: "has attestation",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -42,7 +42,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 package=beacon-chain/blockchain slot=0",
},
{name: "has deposit",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -53,7 +53,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 package=beacon-chain/blockchain slot=0",
},
{name: "has attester slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -62,7 +62,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attesterSlashings=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attesterSlashings=1 package=beacon-chain/blockchain slot=0",
},
{name: "has proposer slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -71,7 +71,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" prefix=blockchain proposerSlashings=1 slot=0",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain proposerSlashings=1 slot=0",
},
{name: "has exit",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -80,7 +80,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" prefix=blockchain slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain slot=0 voluntaryExits=1",
},
{name: "has everything",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -93,11 +93,11 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 package=beacon-chain/blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
},
{name: "has payload",
b: func() interfaces.ReadOnlyBeaconBlock { return wrappedPayloadBlk },
want: "\"Finished applying state transition\" payloadHash=0x010203 prefix=blockchain slot=0 syncBitsCount=0 txCount=2",
want: "\"Finished applying state transition\" package=beacon-chain/blockchain payloadHash=0x010203 slot=0 syncBitsCount=0 txCount=2",
},
}
for _, tt := range tests {


@@ -22,7 +22,7 @@ import (
// The caller of this function must have a lock on forkchoice.
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch || c.Epoch == 0 {
if c.Epoch+1 < headEpoch || c.Epoch == 0 {
return nil
}
// Only use head state if the head state is compatible with the target checkpoint.
@@ -30,11 +30,13 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
if err != nil {
return nil
}
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch-1)
// headEpoch - 1 equals c.Epoch if c is from the previous epoch and equals c.Epoch - 1 if c is from the current epoch.
// We don't use the smaller c.Epoch - 1 because forkchoice would not have the data to answer that.
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), headEpoch-1)
if err != nil {
return nil
}
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch-1)
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), headEpoch-1)
if err != nil {
return nil
}
@@ -43,7 +45,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
}
// If the head state alone is enough, we can return it directly read only.
if c.Epoch == headEpoch {
if c.Epoch <= headEpoch {
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
return nil


@@ -170,12 +170,13 @@ func TestService_GetRecentPreState(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, blk, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
block: blk,
slot: 31,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
@@ -197,12 +198,13 @@ func TestService_GetRecentPreState_Old_Checkpoint(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, blk, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
block: blk,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
@@ -227,6 +229,7 @@ func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
headBlock := blk
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'U'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
@@ -235,8 +238,9 @@ func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
service.head = &head{
root: [32]byte{'T'},
state: s,
block: headBlock,
slot: 64,
state: s,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
@@ -263,6 +267,7 @@ func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'U'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
headBlock := blk
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'V'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
@@ -270,7 +275,8 @@ func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
root: [32]byte{'U'},
block: headBlock,
state: s,
slot: 64,
}
@@ -287,12 +293,13 @@ func TestService_GetRecentPreState_Different(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, blk, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
block: blk,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))


@@ -66,9 +66,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
startTime := time.Now()
fcuArgs := &fcuConfig{}
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
}
@@ -105,14 +102,17 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
s.logNonCanonicalBlockReceived(cfg.roblock.Root(), cfg.headRoot)
return nil
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
s.sendFCU(cfg, fcuArgs)
// Pre-Fulu the caches are updated when computing the payload attributes
if cfg.postState.Version() >= version.Fulu {
go func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDeadline)
defer cancel()
cfg.ctx = ctx
s.updateCachesPostBlockProcessing(cfg)
}()
}
return nil
}
@@ -295,14 +295,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
arg := &fcuConfig{
headState: preState,
headRoot: lastBR,
headBlock: lastB,
}
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return err
}
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
@@ -330,6 +322,7 @@ func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.Availability
return nil
}
// the caller of this function must not hold a lock in forkchoice store.
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -359,7 +352,9 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
if e > 0 {
e = e - 1
}
s.ForkChoicer().RLock()
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
s.ForkChoicer().RUnlock()
if err != nil {
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
@@ -372,7 +367,7 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
}
// Epoch boundary tasks: it copies the headState and updates the epoch boundary
// caches.
// caches. The caller of this function must not hold a lock in forkchoice store.
func (s *Service) handleEpochBoundary(ctx context.Context, slot primitives.Slot, headState state.BeaconState, blockRoot []byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
defer span.End()
@@ -912,8 +907,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if currentSlot == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
@@ -926,14 +919,32 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if lastState == nil {
lastRoot, lastState = headRoot[:], headState
}
// Copy all the field tries in our cached state in the event of late
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("Could not update epoch boundary caches")
// Before Fulu we need to process the next slot to find out if we are proposing.
if lastState.Version() < version.Fulu {
// Copy all the field tries in our cached state in the event of late
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("Could not update epoch boundary caches")
}
} else {
// After Fulu, we can update the caches asynchronously after sending FCU to the engine
defer func() {
go func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDeadline)
defer cancel()
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("Could not update epoch boundary caches")
}
}()
}()
}
// return early if we already started building a block for the current
// head root
@@ -963,6 +974,8 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
headBlock: headBlock,
attributes: attribute,
}
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")


@@ -42,14 +42,8 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.roblock.Block().Slot()
if slots.WithinVotingWindow(s.genesisTime, slot) {
return nil
}
return s.computePayloadAttributes(cfg, fcuArgs)
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
}
func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
@@ -173,26 +167,19 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing
// state transition. This function is called on late blocks while still locked,
// before sending FCU to the engine.
func (s *Service) updateCachesPostBlockProcessing(cfg *postBlockProcessConfig) error {
// state transition. The caller of this function must not hold a lock in forkchoice store.
func (s *Service) updateCachesPostBlockProcessing(cfg *postBlockProcessConfig) {
slot := cfg.postState.Slot()
root := cfg.roblock.Root()
if err := transition.UpdateNextSlotCache(cfg.ctx, root[:], cfg.postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
log.WithError(err).Error("Could not update next slot state cache")
return
}
if !slots.IsEpochEnd(slot) {
return nil
return
}
return s.handleEpochBoundary(cfg.ctx, slot, cfg.postState, root[:])
}
// handleSecondFCUCall handles a second call to FCU when syncing a new block.
// This is useful when proposing in the next block and we want to defer the
// computation of the next slot shuffling.
func (s *Service) handleSecondFCUCall(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
if (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) && cfg.headRoot == cfg.roblock.Root() {
go s.sendFCUWithAttributes(cfg, fcuArgs)
if err := s.handleEpochBoundary(cfg.ctx, slot, cfg.postState, root[:]); err != nil {
log.WithError(err).Error("Could not handle epoch boundary")
}
}
@@ -202,20 +189,6 @@ func reportProcessingTime(startTime time.Time) {
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
}
// computePayloadAttributes modifies the passed FCU arguments to
// contain the right payload attributes with the tracked proposer. It gets
// called on blocks that arrive after the attestation voting window, or in a
// background routine after syncing early blocks.
func (s *Service) computePayloadAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if cfg.roblock.Root() == cfg.headRoot {
if err := s.updateCachesPostBlockProcessing(cfg); err != nil {
return err
}
}
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and the incoming block
// is in the correct time window.


@@ -738,7 +738,9 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -788,7 +790,9 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -816,25 +820,9 @@ func TestOnBlock_NilBlock(t *testing.T) {
service, tr := minimalTestService(t)
signed := &consensusblocks.SignedBeaconBlock{}
roblock := consensusblocks.ROBlock{ReadOnlySignedBeaconBlock: signed}
service.cfg.ForkChoiceStore.Lock()
err := service.postBlockProcess(&postBlockProcessConfig{tr.ctx, roblock, [32]byte{}, nil, true})
require.Equal(t, true, IsInvalidBlock(err))
}
func TestOnBlock_InvalidSignature(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
gs, keys := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
blk, err := util.GenerateFullBlock(gs, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
blk.Signature = []byte{'a'} // Mutate the signature.
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
_, err = service.validateStateTransition(ctx, preState, wsb)
service.cfg.ForkChoiceStore.Unlock()
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -866,7 +854,9 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -1339,7 +1329,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb1, r1)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1351,7 +1343,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb2, r2)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1363,7 +1357,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb3, r3)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1375,7 +1371,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb4, r4)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
lock.Unlock()
wg.Done()
}()
@@ -1400,197 +1398,6 @@ func Test_verifyBlkFinalizedSlot_invalidBlock(t *testing.T) {
require.Equal(t, true, IsInvalidBlock(err))
}
// See the description in #10777 and #10782 for the full setup
// We sync optimistically a chain of blocks. Block 17 is the last block in Epoch
// 2. Block 18 justifies block 12 (the first in Epoch 2) and Block 19 returns
// INVALID from FCU, with LVH block 17. No head is viable. We check
// that the node is optimistic and that we can actually import a block on top of
// 17 and recover.
func TestStore_NoViableHead_FCU(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.SlotsPerEpoch = 6
config.AltairForkEpoch = 1
config.BellatrixForkEpoch = 2
params.OverrideBeaconConfig(config)
mockEngine := &mockExecution.EngineClient{ErrNewPayload: execution.ErrAcceptedSyncingPayloadStatus, ErrForkchoiceUpdated: execution.ErrAcceptedSyncingPayloadStatus}
service, tr := minimalTestService(t, WithExecutionEngineCaller(mockEngine))
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
}
for i := 12; i < 18; i++ {
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(0), jc.Epoch)
// import a block that justifies the second epoch
driftGenesisTime(service, 18, 0)
validHeadState, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(validHeadState, keys, util.DefaultBlockGenConfig(), 18)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, firstInvalidRoot)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
sjc := validHeadState.CurrentJustifiedCheckpoint()
require.Equal(t, primitives.Epoch(0), sjc.Epoch)
lvh := b.Block.Body.ExecutionPayload.ParentHash
// check our head
require.Equal(t, firstInvalidRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
// import another block to find out that it was invalid
mockEngine = &mockExecution.EngineClient{ErrNewPayload: execution.ErrAcceptedSyncingPayloadStatus, ErrForkchoiceUpdated: execution.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: lvh}
service.cfg.ExecutionEngineCaller = mockEngine
driftGenesisTime(service, 19, 0)
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err = util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), 19)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head is the last invalid block imported, that
// the store's head root is still the previous head (since the invalid block
// did not finish importing), and that the node is optimistic
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
headRoot, err := service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, firstInvalidRoot, bytesutil.ToBytes32(headRoot))
optimistic, err := service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, optimistic)
// import another block based on the last valid head state
mockEngine = &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = mockEngine
driftGenesisTime(service, 20, 0)
b, err = util.GenerateFullBlockBellatrix(validHeadState, keys, &util.BlockGenConfig{}, 20)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
require.NoError(t, err)
// Check that the newly imported block is head, that it justified the right
// checkpoint, and that the node is no longer optimistic
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
require.Equal(t, jc.Epoch, sjc.Epoch)
require.Equal(t, jc.Root, bytesutil.ToBytes32(sjc.Root))
optimistic, err = service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, optimistic)
}
// See the description in #10777 and #10782 for the full setup
// We sync optimistically a chain of blocks. Block 17 is the last block in Epoch
// 2. Block 18 justifies block 12 (the first in Epoch 2) and Block 19 returns
@@ -1642,7 +1449,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 6; i < 12; i++ {
@@ -1662,8 +1471,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 12; i < 18; i++ {
@@ -1684,8 +1494,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// Check that we haven't justified the second epoch yet
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
@@ -1708,7 +1519,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, firstInvalidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1718,6 +1531,10 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
lvh := b.Block.Body.ExecutionPayload.ParentHash
// check our head
require.Equal(t, firstInvalidRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
isBlock18OptimisticAfterImport, err := service.IsOptimisticForRoot(ctx, firstInvalidRoot)
require.NoError(t, err)
require.Equal(t, true, isBlock18OptimisticAfterImport)
time.Sleep(20 * time.Millisecond) // wait for async forkchoice update to be processed
// import another block to find out that it was invalid
mockEngine = &mockExecution.EngineClient{ErrNewPayload: execution.ErrInvalidPayloadStatus, NewPayloadResp: lvh}
@@ -1768,7 +1585,9 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
// Check that the newly imported block is head, that it justified the right
// checkpoint, and that the node is no longer optimistic
@@ -1835,7 +1654,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 6; i < 12; i++ {
@@ -1856,8 +1677,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// import the merge block
@@ -1877,7 +1699,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1906,8 +1730,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, invalidRoots[i-13])
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// Check that we have justified the second epoch
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
@@ -1975,7 +1800,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
// Check that the head is still INVALID and the node is still optimistic
require.Equal(t, invalidHeadRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
optimistic, err = service.IsOptimistic(ctx)
@@ -2000,7 +1827,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
@@ -2028,7 +1857,9 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
@@ -2072,7 +1903,6 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
t.Log(i)
driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
@@ -2089,7 +1919,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
for i := 6; i < 12; i++ {
@@ -2109,8 +1941,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
}
// import the merge block
@@ -2130,7 +1963,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -2161,7 +1996,9 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -2282,7 +2119,9 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
st, err = service.HeadState(ctx)
require.NoError(t, err)
@@ -2348,7 +2187,9 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
st, err = service.HeadState(ctx)
require.NoError(t, err)
@@ -2631,7 +2472,10 @@ func TestRollbackBlock(t *testing.T) {
require.NoError(t, err)
// Rollback block insertion into db and caches.
require.ErrorContains(t, fmt.Sprintf("could not insert block %d to fork choice store", roblock.Block().Slot()), service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.ErrorContains(t, fmt.Sprintf("could not insert block %d to fork choice store", roblock.Block().Slot()), err)
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
@@ -2732,7 +2576,9 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
b, err = util.GenerateFullBlock(postState, keys, util.DefaultBlockGenConfig(), 34)
require.NoError(t, err)
@@ -2766,7 +2612,10 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.NoError(t, postState.SetFinalizedCheckpoint(cj))
// Rollback block insertion into db and caches.
require.ErrorContains(t, "context canceled", service.postBlockProcess(&postBlockProcessConfig{cancCtx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{cancCtx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
require.ErrorContains(t, "context canceled", err)
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
@@ -3262,7 +3111,9 @@ func Test_postBlockProcess_EventSending(t *testing.T) {
}
// Execute postBlockProcess
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(cfg)
service.cfg.ForkChoiceStore.Unlock()
// Check error expectation
if tt.expectError {

View File

@@ -156,13 +156,15 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
}
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
go s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs)
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("Could not update forkchoice")
if err := s.saveHead(s.ctx, fcuArgs.headRoot, fcuArgs.headBlock, fcuArgs.headState); err != nil {
log.WithError(err).Error("Could not save head")
}
s.pruneAttsFromPool(s.ctx, fcuArgs.headState, fcuArgs.headBlock)
}
// This processes fork choice attestations from the pool to account for validator votes and fork choice.

View File

@@ -117,7 +117,9 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
@@ -177,7 +179,9 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)

View File

@@ -3,7 +3,10 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["mock.go"],
srcs = [
"log.go",
"mock.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package testing
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/blockchain/testing")

View File

@@ -30,7 +30,6 @@ import (
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var ErrNilState = errors.New("nil state")
@@ -267,7 +266,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
if err := s.DB.SaveBlock(ctx, block); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
log.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = block
@@ -296,7 +295,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
if err := s.DB.SaveBlock(ctx, b); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, b.Block().Slot())
log.Infof("Saved block with root: %#x at slot %d", signingRoot, b.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = b
@@ -328,7 +327,7 @@ func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOn
if err := s.DB.SaveBlock(ctx, block); err != nil {
return err
}
logrus.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
log.Infof("Saved block with root: %#x at slot %d", signingRoot, block.Block().Slot())
}
s.Root = signingRoot[:]
s.Block = block
@@ -585,11 +584,11 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, slot, bytesutil.ToBytes32(s.Root), [32]byte{}, [32]byte{}, ojc, ojc)
if err != nil {
logrus.WithError(err).Error("Could not update head")
log.WithError(err).Error("Could not update head")
}
err = s.ForkChoiceStore.InsertNode(ctx, st, root)
if err != nil {
logrus.WithError(err).Error("Could not insert node to forkchoice")
log.WithError(err).Error("Could not insert node to forkchoice")
}
}

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package builder
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "builder")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/builder")

View File

@@ -16,6 +16,7 @@ go_library(
"doc.go",
"error.go",
"interfaces.go",
"log.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep

View File

@@ -9,7 +9,6 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1/attestation"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
type attGroup struct {

View File

@@ -17,7 +17,6 @@ import (
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
)
const (

View File

@@ -8,6 +8,7 @@ go_library(
"deposit_pruner.go",
"deposit_tree.go",
"deposit_tree_snapshot.go",
"log.go",
"merkle_tree.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/cache/depositsnapshot",

View File

@@ -20,7 +20,6 @@ var (
Name: "beacondb_all_deposits_eip4881",
Help: "The number of total deposits in memory",
})
log = logrus.WithField("prefix", "cache")
)
// InsertDeposit into the database. If deposit or block number are nil

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package depositsnapshot
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/cache/depositsnapshot")

beacon-chain/cache/log.go
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package cache
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/cache")

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
"k8s.io/client-go/tools/cache"
)

View File

@@ -9,7 +9,6 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
const (
@@ -67,7 +66,7 @@ func (t *TrackedValidatorsCache) Validator(index primitives.ValidatorIndex) (Tra
val, ok := item.(TrackedValidator)
if !ok {
logrus.Errorf("Failed to cast tracked validator from cache, got unexpected item type %T", item)
log.Errorf("Failed to cast tracked validator from cache, got unexpected item type %T", item)
return TrackedValidator{}, false
}
@@ -113,7 +112,7 @@ func (t *TrackedValidatorsCache) Indices() map[primitives.ValidatorIndex]bool {
for cacheKey := range items {
index, err := fromCacheKey(cacheKey)
if err != nil {
logrus.WithError(err).Error("Failed to get validator index from cache key")
log.WithError(err).Error("Failed to get validator index from cache key")
continue
}

View File

@@ -8,6 +8,7 @@ go_library(
"deposit.go",
"epoch_precompute.go",
"epoch_spec.go",
"log.go",
"reward.go",
"sync_committee.go",
"transition.go",

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
log "github.com/sirupsen/logrus"
)
// ProcessSyncCommitteeUpdates processes sync client committee updates for the beacon state.

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package altair
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/altair")

View File

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
voteCount := uint64(0)
for _, vote := range beaconState.Eth1DataVotes() {
if AreEth1DataEqual(vote, data.Copy()) {
if AreEth1DataEqual(vote, data) {
voteCount++
}
}
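
Dropping the Copy() removes a deep copy on every vote in the counting loop; a hedged sketch of what such a copy typically costs, with an illustrative struct standing in for the real Eth1Data message (assuming, as with generated proto copies, that Copy allocates fresh slices):

type eth1Data struct {
	DepositRoot  []byte
	DepositCount uint64
	BlockHash    []byte
}

// Copy allocates new slices for every byte field.
func (d *eth1Data) Copy() *eth1Data {
	return &eth1Data{
		DepositRoot:  append([]byte(nil), d.DepositRoot...),
		DepositCount: d.DepositCount,
		BlockHash:    append([]byte(nil), d.BlockHash...),
	}
}

// Counting votes against data directly avoids one such allocation per vote,
// since an equality check only needs to read both arguments.
func countVotes(votes []*eth1Data, data *eth1Data, equal func(a, b *eth1Data) bool) uint64 {
	var count uint64
	for _, v := range votes {
		if equal(v, data) { // no per-iteration data.Copy()
			count++
		}
	}
	return count
}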

View File

@@ -290,52 +290,3 @@ func TestProcessBlockHeader_OK(t *testing.T) {
}
assert.Equal(t, true, proto.Equal(nsh, expected), "Expected %v, received %v", expected, nsh)
}
func TestBlockSignatureSet_OK(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
Slashed: true,
}
}
state, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(10))
require.NoError(t, state.SetLatestBlockHeader(util.HydrateBeaconHeader(&ethpb.BeaconBlockHeader{
Slot: 9,
ProposerIndex: 0,
})))
latestBlockSignedRoot, err := state.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
block.Block.ProposerIndex = pID
block.Block.Body.RandaoReveal = bytesutil.PadTo([]byte{'A', 'B', 'C'}, 96)
block.Block.ParentRoot = latestBlockSignedRoot[:]
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
err = state.UpdateValidatorAtIndex(proposerIdx, validators[proposerIdx])
require.NoError(t, err)
set, err := blocks.BlockSignatureBatch(state, block.Block.ProposerIndex, block.Signature, block.Block.HashTreeRoot)
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Block signature set returned a set which was unable to be verified")
}

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package blocks
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "blocks")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/blocks")

View File

@@ -122,24 +122,6 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
return nil
}
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.
func BlockSignatureBatch(beaconState state.ReadOnlyBeaconState,
proposerIndex primitives.ValidatorIndex,
sig []byte,
rootFunc func() ([32]byte, error)) (*bls.SignatureBatch, error) {
currentEpoch := slots.ToEpoch(beaconState.Slot())
domain, err := signing.Domain(beaconState.Fork(), currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return nil, err
}
proposer, err := beaconState.ValidatorAtIndex(proposerIndex)
if err != nil {
return nil, err
}
proposerPubKey := proposer.PublicKey
return signing.BlockSignatureBatch(proposerPubKey, sig, domain, rootFunc)
}
// RandaoSignatureBatch retrieves the relevant randao specific signature batch object
// from a block and its corresponding state.
func RandaoSignatureBatch(

View File

@@ -9,6 +9,7 @@ go_library(
"deposits.go",
"effective_balance_updates.go",
"error.go",
"log.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",

View File

@@ -17,7 +17,6 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/common/math"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
// ProcessPendingConsolidations implements the spec definition below. This method makes mutating
@@ -278,12 +277,12 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if uint64(curEpoch) < e {
continue
}
bal, err := st.PendingBalanceToWithdraw(srcIdx)
hasBal, err := st.HasPendingBalanceToWithdraw(srcIdx)
if err != nil {
log.WithError(err).Error("Failed to fetch pending balance to withdraw")
continue
}
if bal > 0 {
if hasBal {
continue
}
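
Switching from PendingBalanceToWithdraw to HasPendingBalanceToWithdraw trades a full sum for an early-exit existence check; a minimal sketch of the difference, assuming pending withdrawals are tracked as per-validator amounts (illustrative helpers, not the actual state API):

// pendingBalanceToWithdraw sums every pending withdrawal for a validator.
func pendingBalanceToWithdraw(pending []uint64) uint64 {
	var total uint64
	for _, amt := range pending {
		total += amt
	}
	return total
}

// hasPendingBalanceToWithdraw returns as soon as any non-zero amount is found.
func hasPendingBalanceToWithdraw(pending []uint64) bool {
	for _, amt := range pending {
		if amt > 0 {
			return true
		}
	}
	return false
}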

View File

@@ -18,7 +18,6 @@ import (
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
// ProcessDeposits is one of the operations performed on each processed

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package electra
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/electra")

View File

@@ -17,7 +17,6 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
// ProcessWithdrawalRequests processes the validator withdrawals from the provided execution payload

View File

@@ -8,6 +8,7 @@ go_library(
"block.go",
"genesis.go",
"legacy.go",
"log.go",
"metrics.go",
"randao.go",
"ranges.go",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package helpers
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/helpers")

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var (

View File

@@ -21,7 +21,6 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
)
var (
@@ -152,7 +151,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
}
if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
return nil, errors.Wrap(err, "could not update committee cache")
log.WithError(err).Error("Could not update committee cache")
}
return indices, nil

View File

@@ -5,10 +5,20 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
)
var dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
var (
dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
)
cellsAndProofsFromStructuredComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "cells_and_proofs_from_structured_computation_milliseconds",
Help: "Captures the time taken to compute cells and proofs from structured computation.",
Buckets: []float64{10, 20, 30, 40, 50, 100, 200},
},
)
)
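
The new histogram is observed like the existing one; the snippet below mirrors the Observe call added to ComputeCellsAndProofsFromStructured later in this diff (the start variable is local to that function):

start := time.Now()
defer func() {
	cellsAndProofsFromStructuredComputationTime.Observe(float64(time.Since(start).Milliseconds()))
}()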

View File

@@ -3,6 +3,7 @@ package peerdas
import (
"sort"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -296,32 +297,42 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
return nil, nil, ErrBlobsCellsProofsMismatch
}
cellsPerBlob := make([][]kzg.Cell, 0, blobCount)
proofsPerBlob := make([][]kzg.Proof, 0, blobCount)
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, blobCount)
proofsPerBlob := make([][]kzg.Proof, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
}
proofs = append(proofs, kzgProof)
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, proofs)
proofs := make([]kzg.Proof, 0, numberOfColumns)
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = proofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
}
return cellsPerBlob, proofsPerBlob, nil
@@ -329,40 +340,55 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
start := time.Now()
defer func() {
cellsAndProofsFromStructuredComputationTime.Observe(float64(time.Since(start).Milliseconds()))
}()
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, len(blobsAndProofs))
for i, blobAndProof := range blobsAndProofs {
if blobAndProof == nil {
return nil, nil, ErrNilBlobAndProof
}
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
}
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return nil, nil, errors.New("wrong copied KZG proof size - should never happen")
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
kzgProofs = append(kzgProofs, kzgProof)
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return errors.New("wrong KZG proof size - should never happen")
}
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, kzgProofs)
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = kzgProofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
}
return cellsPerBlob, proofsPerBlob, nil
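
Both functions now fan the per-blob work out with an errgroup and write results into pre-sized slices by index, which removes the need for a mutex around the results; a self-contained sketch of that pattern (Go 1.22+ per-iteration loop variables assumed, as in the code above):

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// squareAll fans per-item work out to goroutines; every goroutine owns exactly
// one index of the pre-sized output slice, so no locking is required.
func squareAll(in []int) ([]int, error) {
	out := make([]int, len(in))
	var eg errgroup.Group
	for i, v := range in {
		eg.Go(func() error {
			out[i] = v * v
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	res, err := squareAll([]int{1, 2, 3, 4})
	if err != nil {
		panic(err)
	}
	fmt.Println(res) // [1 4 9 16]
}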

View File

@@ -1,10 +1,9 @@
// Package interop contains useful utilities for persisting
// ssz-encoded states and blocks to disk during each state
// transition for development purposes.
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package interop
import (
"github.com/sirupsen/logrus"
)
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "interop")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/transition/interop")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package transition
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "state")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/core/transition")

View File

@@ -182,12 +182,6 @@ func ProcessBlockNoVerifyAnySig(
return nil, nil, err
}
sig := signed.Signature()
bSet, err := b.BlockSignatureBatch(st, blk.ProposerIndex(), sig[:], blk.HashTreeRoot)
if err != nil {
tracing.AnnotateError(span, err)
return nil, nil, errors.Wrap(err, "could not retrieve block signature set")
}
randaoReveal := signed.Block().Body().RandaoReveal()
rSet, err := b.RandaoSignatureBatch(ctx, st, randaoReveal[:])
if err != nil {
@@ -201,7 +195,7 @@ func ProcessBlockNoVerifyAnySig(
// Merge beacon block, randao and attestations signatures into a set.
set := bls.NewSet()
set.Join(bSet).Join(rSet).Join(aSet)
set.Join(rSet).Join(aSet)
if blk.Version() >= version.Capella {
changes, err := signed.Block().Body().BLSToExecutionChanges()

View File

@@ -157,9 +157,8 @@ func TestProcessBlockNoVerify_SigSetContainsDescriptions(t *testing.T) {
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
require.NoError(t, err)
assert.Equal(t, len(set.Signatures), len(set.Descriptions), "Signatures and descriptions do not match up")
assert.Equal(t, "block signature", set.Descriptions[0])
assert.Equal(t, "randao signature", set.Descriptions[1])
assert.Equal(t, "attestation signature", set.Descriptions[2])
assert.Equal(t, "randao signature", set.Descriptions[0])
assert.Equal(t, "attestation signature", set.Descriptions[1])
}
func TestProcessOperationsNoVerifyAttsSigs_OK(t *testing.T) {

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package das
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "das")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/das")

View File

@@ -67,9 +67,9 @@ func NewSyncNeeds(current CurrentSlotter, oldestSlotFlagPtr *primitives.Slot, bl
// Override spec minimum block retention with user-provided flag only if it is lower than the spec minimum.
sn.blockRetention = primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oldestSlotFlagPtr != nil {
oldestEpoch := slots.ToEpoch(*oldestSlotFlagPtr)
if oldestEpoch < sn.blockRetention {
if *oldestSlotFlagPtr <= syncEpochOffset(current(), sn.blockRetention) {
sn.validOldestSlotPtr = oldestSlotFlagPtr
} else {
log.WithField("backfill-oldest-slot", *oldestSlotFlagPtr).

View File

@@ -128,6 +128,9 @@ func TestSyncNeedsInitialize(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
minBlobEpochs := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
minColEpochs := params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest
denebSlot := slots.UnsafeEpochStart(params.BeaconConfig().DenebForkEpoch)
fuluSlot := slots.UnsafeEpochStart(params.BeaconConfig().FuluForkEpoch)
minSlots := slots.UnsafeEpochStart(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests))
currentSlot := primitives.Slot(10000)
currentFunc := func() primitives.Slot { return currentSlot }
@@ -141,6 +144,7 @@ func TestSyncNeedsInitialize(t *testing.T) {
expectedCol primitives.Epoch
name string
input SyncNeeds
current func() primitives.Slot
}{
{
name: "basic initialization with no flags",
@@ -174,13 +178,13 @@ func TestSyncNeedsInitialize(t *testing.T) {
{
name: "valid oldestSlotFlagPtr (earlier than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(10)
return &slot
}(),
oldestSlotFlagPtr: &denebSlot,
expectValidOldest: true,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
current: func() primitives.Slot {
return fuluSlot + minSlots
},
},
{
name: "invalid oldestSlotFlagPtr (later than spec minimum)",
@@ -210,6 +214,9 @@ func TestSyncNeedsInitialize(t *testing.T) {
{
name: "both blob retention flag and oldest slot set",
blobRetentionFlag: minBlobEpochs + 5,
current: func() primitives.Slot {
return fuluSlot + minSlots
},
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(100)
return &slot
@@ -232,16 +239,27 @@ func TestSyncNeedsInitialize(t *testing.T) {
expectedBlob: 5000,
expectedCol: 5000,
},
{
name: "regression for deneb start",
blobRetentionFlag: 8212500,
expectValidOldest: true,
oldestSlotFlagPtr: &denebSlot,
current: func() primitives.Slot {
return fuluSlot + minSlots
},
expectedBlob: 8212500,
expectedCol: 8212500,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result, err := NewSyncNeeds(currentFunc, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
if tc.current == nil {
tc.current = currentFunc
}
result, err := NewSyncNeeds(tc.current, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
require.NoError(t, err)
// Check that current, deneb, fulu are set correctly
require.Equal(t, currentSlot, result.current())
// Check retention calculations
require.Equal(t, tc.expectedBlob, result.blobRetention)
require.Equal(t, tc.expectedCol, result.colRetention)

View File

@@ -38,6 +38,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)

View File

@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/spf13/afero"
"golang.org/x/sync/errgroup"
)
const (
@@ -185,73 +186,162 @@ func (dcs *DataColumnStorage) WarmCache() {
highestStoredEpoch := primitives.Epoch(0)
// Walk the data column filesystem to warm up the cache.
if err := afero.Walk(dcs.fs, ".", func(path string, info os.FileInfo, fileErr error) (err error) {
if fileErr != nil {
return fileErr
}
// If not a leaf, skip.
if info.IsDir() {
return nil
}
// Extract metadata from the file path.
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
f, err := dcs.fs.Open(path)
if err != nil {
log.WithError(err).Error("Error encountered while opening data column filesystem file")
return nil
}
// Close the file.
defer func() {
// Overwrite the existing error only if it is nil, since the close error is less important.
closeErr := f.Close()
if closeErr != nil && err == nil {
err = closeErr
}
}()
// Read the metadata of the file.
metadata, err := dcs.metadata(f)
if err != nil {
log.WithError(err).Error("Error encountered while reading metadata from data column filesystem file")
return nil
}
// Check the indices.
indices := metadata.indices.all()
if len(indices) == 0 {
return nil
}
// Build the ident.
dataColumnsIdent := DataColumnsIdent{Root: fileMetadata.blockRoot, Epoch: fileMetadata.epoch, Indices: indices}
// Update the highest stored epoch.
highestStoredEpoch = max(highestStoredEpoch, fileMetadata.epoch)
// Set the ident in the cache.
if err := dcs.cache.set(dataColumnsIdent); err != nil {
log.WithError(err).Error("Error encountered while ensuring data column filesystem cache")
}
return nil
}); err != nil {
log.WithError(err).Error("Error encountered while walking data column filesystem.")
// List all period directories
periodFileInfos, err := afero.ReadDir(dcs.fs, ".")
if err != nil {
log.WithError(err).Error("Error reading top directory during warm cache")
return
}
// Prune the cache and the filesystem.
// Iterate through periods
for _, periodFileInfo := range periodFileInfos {
if !periodFileInfo.IsDir() {
continue
}
periodPath := periodFileInfo.Name()
// List all epoch directories in this period
epochFileInfos, err := afero.ReadDir(dcs.fs, periodPath)
if err != nil {
log.WithError(err).WithField("period", periodPath).Error("Error reading period directory during warm cache")
continue
}
// Iterate through epochs
for _, epochFileInfo := range epochFileInfos {
if !epochFileInfo.IsDir() {
continue
}
epochPath := path.Join(periodPath, epochFileInfo.Name())
// List all .sszs files in this epoch
files, err := listEpochFiles(dcs.fs, epochPath)
if err != nil {
log.WithError(err).WithField("epoch", epochPath).Error("Error listing epoch files during warm cache")
continue
}
if len(files) == 0 {
continue
}
// Process all files in this epoch in parallel
epochHighest, err := dcs.processEpochFiles(files)
if err != nil {
log.WithError(err).WithField("epoch", epochPath).Error("Error processing epoch files during warm cache")
}
highestStoredEpoch = max(highestStoredEpoch, epochHighest)
}
}
// Prune the cache and the filesystem
dcs.prune()
log.WithField("elapsed", time.Since(start)).Info("Data column filesystem cache warm-up complete")
totalElapsed := time.Since(start)
// Log summary
log.WithField("elapsed", totalElapsed).Info("Data column filesystem cache warm-up complete")
}
// listEpochFiles lists all .sszs files in an epoch directory.
func listEpochFiles(fs afero.Fs, epochPath string) ([]string, error) {
fileInfos, err := afero.ReadDir(fs, epochPath)
if err != nil {
return nil, errors.Wrap(err, "read epoch directory")
}
files := make([]string, 0, len(fileInfos))
for _, fileInfo := range fileInfos {
if fileInfo.IsDir() {
continue
}
fileName := fileInfo.Name()
if strings.HasSuffix(fileName, "."+dataColumnsFileExtension) {
files = append(files, path.Join(epochPath, fileName))
}
}
return files, nil
}
// processEpochFiles processes all .sszs files in an epoch directory in parallel.
func (dcs *DataColumnStorage) processEpochFiles(files []string) (primitives.Epoch, error) {
var (
eg errgroup.Group
mu sync.Mutex
)
highestEpoch := primitives.Epoch(0)
for _, filePath := range files {
eg.Go(func() error {
epoch, err := dcs.processFile(filePath)
if err != nil {
log.WithError(err).WithField("file", filePath).Error("Error processing file during warm cache")
return nil
}
mu.Lock()
defer mu.Unlock()
highestEpoch = max(highestEpoch, epoch)
return nil
})
}
if err := eg.Wait(); err != nil {
return highestEpoch, err
}
return highestEpoch, nil
}
// processFile processes a single .sszs file.
func (dcs *DataColumnStorage) processFile(filePath string) (primitives.Epoch, error) {
// Extract metadata from the file path
fileMetadata, err := extractFileMetadata(filePath)
if err != nil {
return 0, errors.Wrap(err, "extract file metadata")
}
// Open the file (each goroutine gets its own FD)
f, err := dcs.fs.Open(filePath)
if err != nil {
return 0, errors.Wrap(err, "open file")
}
defer func() {
if closeErr := f.Close(); closeErr != nil {
log.WithError(closeErr).WithField("file", filePath).Error("Error closing file during warm cache")
}
}()
// Read metadata
metadata, err := dcs.metadata(f)
if err != nil {
return 0, errors.Wrap(err, "read metadata")
}
// Extract indices
indices := metadata.indices.all()
if len(indices) == 0 {
return fileMetadata.epoch, nil // No indices, skip
}
// Build ident and set in cache (thread-safe)
dataColumnsIdent := DataColumnsIdent{
Root: fileMetadata.blockRoot,
Epoch: fileMetadata.epoch,
Indices: indices,
}
if err := dcs.cache.set(dataColumnsIdent); err != nil {
return 0, errors.Wrap(err, "cache set")
}
return fileMetadata.epoch, nil
}
// Summary returns the DataColumnStorageSummary.
@@ -515,6 +605,11 @@ func (dcs *DataColumnStorage) Clear() error {
// prune cleans the cache, the filesystem, and the mutexes.
func (dcs *DataColumnStorage) prune() {
startTime := time.Now()
defer func() {
dataColumnPruneLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
highestStoredEpoch := dcs.cache.HighestEpoch()
// Check if we need to prune.
@@ -622,6 +717,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
// Create the SSZ encoded data column sidecars.
var sszEncodedDataColumnSidecars []byte
// Initialize the count of saved SSZ encoded data column sidecars.
storedCount := uint8(0)
for {
dataColumnSidecars := pullChan(inputDataColumnSidecars)
if len(dataColumnSidecars) == 0 {
@@ -668,6 +766,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errors.Wrap(err, "set index")
}
// Increment the count of saved SSZ encoded data column sidecars.
storedCount++
// Append the SSZ encoded data column sidecar to the SSZ encoded data column sidecars.
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
@@ -692,9 +793,12 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}
@@ -808,10 +912,14 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package filesystem
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "filesystem")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/filesystem")
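For reference, a rough sketch of how the formatter side could turn this `package` field into the short prefix shown on each log line, including an override map along the lines of the prefix-replacement file mentioned in the generated comment. The names `prefixReplacement` and `displayPrefix` are illustrative assumptions, not the actual Prysm code.
```go
package formatter // illustrative package name, not the real one

import "strings"

// prefixReplacement is a hypothetical override map: package path -> preferred prefix
// (e.g. the kv package historically logged under "db").
var prefixReplacement = map[string]string{
	"beacon-chain/db/kv": "db",
}

// displayPrefix returns the override if one exists, otherwise the text
// after the last slash in the package path.
func displayPrefix(pkgPath string) string {
	if p, ok := prefixReplacement[pkgPath]; ok {
		return p
	}
	if i := strings.LastIndex(pkgPath, "/"); i >= 0 {
		return pkgPath[i+1:]
	}
	return pkgPath
}
```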

View File

@@ -36,16 +36,15 @@ var (
})
// Data columns
dataColumnBuckets = []float64{3, 5, 7, 9, 11, 13}
dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_save_latency",
Help: "Latency of DataColumnSidecar storage save operations in milliseconds",
Buckets: dataColumnBuckets,
Buckets: []float64{10, 20, 30, 50, 100, 200, 500},
})
dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_get_latency",
Help: "Latency of DataColumnSidecar storage get operations in milliseconds",
Buckets: dataColumnBuckets,
Buckets: []float64{3, 5, 7, 9, 11, 13},
})
dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_pruned",
@@ -59,4 +58,16 @@ var (
Name: "data_column_disk_count",
Help: "Approximate number of data columns in storage",
})
dataColumnFileSyncLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_file_sync_latency",
Help: "Latency of sync operations when saving data columns in milliseconds",
})
dataColumnBatchStoreCount = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_batch_store_count",
Help: "Number of data columns stored in a batch",
})
dataColumnPruneLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_prune_latency",
Help: "Latency of data column prune operations in milliseconds",
})
)

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package kv
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "db")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/kv")

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package db
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "db")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db")

View File

@@ -2,7 +2,10 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["pruner.go"],
srcs = [
"log.go",
"pruner.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/db/pruner",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package pruner
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/pruner")

View File

@@ -13,8 +13,6 @@ import (
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "db-pruner")
const (
// defaultPrunableBatchSize is the number of slots that can be pruned at once.
defaultPrunableBatchSize = 32

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package slasherkv
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "slasherdb")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/db/slasherkv")

View File

@@ -532,12 +532,19 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
defer span.End()
start := time.Now()
if !s.capabilityCache.has(GetBlobsV2) {
return nil, errors.Errorf("%s is not supported", GetBlobsV2)
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
if len(result) != 0 {
getBlobsV2Latency.Observe(float64(time.Since(start).Milliseconds()))
}
return result, handleRPCError(err)
}

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package execution
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "execution")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/execution")

View File

@@ -27,6 +27,13 @@ var (
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
getBlobsV2Latency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_blobs_v2_latency_milliseconds",
Help: "Captures RPC latency for getBlobsV2 in milliseconds",
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
errParseCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "execution_parse_error_count",
Help: "The number of errors that occurred while parsing execution payload",

View File

@@ -7,6 +7,7 @@ go_library(
"errors.go",
"forkchoice.go",
"last_root.go",
"log.go",
"metrics.go",
"node.go",
"on_tick.go",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package doublylinkedtree
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/forkchoice/doubly-linked-tree")

View File

@@ -3,12 +3,9 @@ package doublylinkedtree
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
var (
log = logrus.WithField("prefix", "forkchoice-doublylinkedtree")
headSlotNumber = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "doublylinkedtree_head_slot",

View File

@@ -0,0 +1,95 @@
# Graffiti Version Info Implementation
## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.
More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md
## Implementation
### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"
type GraffitiInfo struct {
mu sync.RWMutex
userGraffiti string // From --graffiti flag (set once at startup)
clCommit string // From version.GetCommitPrefix() helper function
elCode string // From engine_getClientVersionV1
elCommit string // From engine_getClientVersionV1
}
```
### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass struct to both execution service and RPC validator server
3. **Runtime**: Execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields (see the sketch after this list)
4. **Block Proposal**: RPC validator server calls `GenerateGraffiti()` to get formatted graffiti
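Step 3 relies on an `UpdateFromEngine` setter on `GraffitiInfo`; a minimal sketch is below, assuming the EL commit is truncated to the 4-char prefix used in graffiti (illustrative, not the final implementation).
```go
// UpdateFromEngine stores the latest EL identification fields.
// Sketch only: truncating to a 4-char prefix is an assumption.
func (g *GraffitiInfo) UpdateFromEngine(elCode, elCommit string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.elCode = elCode
	if len(elCommit) > 4 {
		elCommit = elCommit[:4]
	}
	g.elCommit = elCommit
}
```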
### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):
| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |
```go
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
	g.mu.RLock()
	defer g.mu.RUnlock()
	// Assumes elCommit/clCommit each hold the 4-char commit prefix.
	elCode, elCommit4, clCommit4 := g.elCode, g.elCommit, g.clCommit
	elCommit2, clCommit2 := elCommit4[:min(2, len(elCommit4))], clCommit4[:min(2, len(clCommit4))]
	if elCode == "" {
		elCommit2, elCommit4 = "", "" // No EL info yet; omit EL fields.
	}
	available := 32 - len(g.userGraffiti)
	var s string
	switch {
	case available >= 12:
		s = elCode + elCommit4 + clCode + clCommit4 + g.userGraffiti
	case available >= 8:
		s = elCode + elCommit2 + clCode + clCommit2 + g.userGraffiti
	case available >= 4:
		s = elCode + clCode + g.userGraffiti
	case available >= 2:
		s = elCode + g.userGraffiti
	default:
		s = g.userGraffiti
	}
	var graffiti [32]byte
	copy(graffiti[:], s)
	return graffiti
}
```
### Update Logic
Single testable function in execution service:
```go
func (s *Service) updateGraffitiInfo(ctx context.Context) {
	versions, err := s.GetClientVersion(ctx)
	if err != nil {
		return // Keep the last good value.
	}
	if len(versions) == 1 {
		s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
	}
}
```
A goroutine calls this when `slot % 8 == 4` (four times per epoch, away from slot boundaries), as sketched below.
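A rough illustration of that timing follows; the `runGraffitiUpdater` name and the `slots` channel are assumptions, since the real service would hook into its existing slot ticker.
```go
// Hypothetical wiring of the periodic update; not the actual service code.
func (s *Service) runGraffitiUpdater(ctx context.Context, slots <-chan primitives.Slot) {
	for {
		select {
		case <-ctx.Done():
			return
		case slot := <-slots:
			// Fires at slots 4, 12, 20 and 28 of a 32-slot epoch,
			// away from the slot boundaries where proposals happen.
			if slot%8 == 4 {
				s.updateGraffitiInfo(ctx)
			}
		}
	}
}
```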
### File Changes Required
**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts the first 4 hex chars from the git commit injected via Bazel ldflags at build time (a sketch follows these lists)
**Modified:**
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
- `beacon-chain/execution/engine_client.go` - Add GetClientVersion() method that does engine call
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
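One possible shape for the `GetCommitPrefix()` helper listed above (the `gitCommit` ldflags variable name is an assumption):
```go
// gitCommit is assumed to be injected via Bazel ldflags at build time.
var gitCommit string

// GetCommitPrefix returns the first 4 hex characters of the build commit,
// or whatever shorter string was injected.
func GetCommitPrefix() string {
	c := strings.TrimSpace(gitCommit)
	if len(c) > 4 {
		c = c[:4]
	}
	return c
}
```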
### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety); see the sketch after this list
- Unit test updateGraffitiInfo() with mocked engine client
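For the priority logic, a table-style test could compare against the examples in the format table above. A minimal sketch, assumed to live in the package's `_test.go` file, with the expected value taken from the ≥12-byte row:
```go
func TestGenerateGraffiti_FullClientInfo(t *testing.T) {
	g := &GraffitiInfo{
		userGraffiti: "Bob",
		clCommit:     "63af",
		elCode:       "GE",
		elCommit:     "168d",
	}
	got := g.GenerateGraffiti()
	want := "GE168dPR63afBob" // Example from the ≥12 bytes row.
	if string(got[:len(want)]) != want {
		t.Fatalf("graffiti = %q, want prefix %q", got[:len(want)], want)
	}
}
```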

View File

@@ -1,5 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package light_client
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "light-client")
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/light-client")

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"doc.go",
"log.go",
"metrics.go",
"process_attestation.go",
"process_block.go",

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package monitor
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/monitor")

Some files were not shown because too many files have changed in this diff.