Compare commits


145 Commits

Author SHA1 Message Date
Manu NALEPA
b57c35a468 manu 2025-12-22 11:23:18 +01:00
aarshkshah1992
08f117f04f better dial logic 2025-12-15 18:27:45 +04:00
aarshkshah1992
0b6365781f address review 2025-12-15 16:10:58 +04:00
aarshkshah1992
0c994445ea failed to ping node 2025-12-15 16:02:42 +04:00
aarshkshah1992
77c32203af removed seen check that is not needed 2025-12-15 15:59:07 +04:00
aarshkshah1992
743e6bab07 add a test for TestSubscriptionController_GetCurrentActiveTopicsWithMinPeerCount 2025-12-15 15:02:33 +04:00
aarshkshah1992
451d2a8bc5 per topic family dial policy 2025-12-15 14:46:39 +04:00
aarshkshah1992
6508bdfa9a fix bug in subscription 2025-12-15 12:25:12 +04:00
aarshkshah1992
d2bf512f36 remove pointer 2025-12-15 11:16:26 +04:00
aarshkshah1992
e62fe66b0a add metrics 2025-12-15 10:54:18 +04:00
aarshkshah1992
08fff3dec4 changes for fork watching 2025-12-15 09:53:03 +04:00
aarshkshah1992
75a3d45470 revert service dep 2025-12-12 18:38:59 +04:00
aarshkshah1992
bbd2d4da0f fix build 2025-12-12 16:28:36 +04:00
aarshkshah1992
378468c1ec fix flaky test 2025-12-12 16:06:57 +04:00
aarshkshah1992
25d375dbe0 change to control plane 2025-12-12 13:08:47 +04:00
aarshkshah1992
2d5f3112c8 merge develop 2025-12-12 13:03:29 +04:00
aarshkshah1992
1d49a2a88d change gossipsub to gossip 2025-12-12 12:57:29 +04:00
aarshkshah1992
b119290584 change gossipsub to gossip 2025-12-12 12:51:21 +04:00
aarshkshah1992
9b0cffcdea NSE changes 2025-12-12 12:39:04 +04:00
aarshkshah1992
979466a3d7 fix naming to subscription controller 2025-12-12 11:39:21 +04:00
aarshkshah1992
08ba8dd487 change interface names 2025-12-12 11:21:47 +04:00
aarshkshah1992
a235a581e1 remove service dep 2025-12-12 11:14:29 +04:00
aarshkshah1992
45b88de7f2 use beacon-chain config 2025-12-12 09:57:39 +04:00
Preston Van Loon
b360794c9c Update CHANGELOG.md for v7.1.0 release (#16127)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

Changelog

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-11 22:48:33 +00:00
aarshkshah1992
f69e017f6a add go-docs 2025-12-11 18:46:46 +04:00
aarshkshah1992
19e5684875 changes as per review 2025-12-11 18:33:43 +04:00
aarshkshah1992
24fc76b7fb changes as per review 2025-12-11 18:14:47 +04:00
aarshkshah1992
0a4aad543b address review 2025-12-11 17:11:15 +04:00
aarshkshah1992
ab8584d138 changes as per review 2025-12-11 16:28:16 +04:00
Aarsh Shah
0fc9ab925a feat: add support for detecting and logging per address reachability via libp2p AutoNAT v2 (#16100)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**

This PR adds support for detecting and logging per address reachability
via libp2p AutoNAT v2. See
https://github.com/libp2p/go-libp2p/releases/tag/v0.42.0 for details.
This PR also upgrades Prysm to libp2p v0.42.0

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16098

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-11 11:56:52 +00:00
aarshkshah1992
9b6cd96012 changes as per review 2025-12-11 15:56:08 +04:00
satushh
dda5ee3334 Graffiti proposal design doc (#15983)

**What type of PR is this?**

Design Doc

**What does this PR do? Why is it needed?**

This PR adds a design doc for adding graffiti. The idea is to have it
populated judiciously so that we can get proper information about the
EL, the CL, and their corresponding version info, while at the same time
remaining flexible enough with the user input.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-10 22:57:40 +00:00
Manu NALEPA
14c67376c3 Add test requirement to PULL_REQUEST_TEMPLATE.md (#16123)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
This pull request modifies `PULL_REQUEST_TEMPLATE.md` to ensure the
developer has checked that their PR works as expected.

Some contributors push changes without even running the modified client
once to see if their changes work as expected.

Running the modified client would prevent avoidable back-and-forth
between the contributor and the reviewers.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-12-10 17:40:29 +00:00
Preston Van Loon
9c8b68a66d Update CHANGELOG.md for v7.0.1 release (#16107)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

**Other notes for review**

Did not delete the fragments, as they are still needed to generate the
v7.1.0 release notes. This release is all cherry-picks, which will be
included in v7.1.0.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-10 17:07:38 +00:00
Potuz
a3210157e2 Fix TOCTOU race validating attestations (#16105)
A TOCTOU issue was reported by EF security in which two attestations
being validated at the same time may result in both of them being
forwarded. The spec says that we need to forward only the first one.
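
A minimal sketch of the general pattern for closing this kind of TOCTOU gap, with illustrative names rather than Prysm's actual fix: folding the check and the mark into one atomic step means only the first of two concurrent validations can forward.

```
package main

import (
	"fmt"
	"sync"
)

// seen records attestation IDs that have already been forwarded.
// LoadOrStore is atomic, so the "check" and the "mark" happen as one step
// and two concurrent validations cannot both win.
var seen sync.Map

// shouldForward returns true only for the first caller presenting attID.
func shouldForward(attID [32]byte) bool {
	_, loaded := seen.LoadOrStore(attID, struct{}{})
	return !loaded
}

func main() {
	id := [32]byte{1}
	results := make(chan bool, 2)
	var wg sync.WaitGroup
	for range 2 {
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- shouldForward(id)
		}()
	}
	wg.Wait()
	close(results)
	forwarded := 0
	for r := range results {
		if r {
			forwarded++
		}
	}
	fmt.Println("forwarded:", forwarded) // always 1, never 2
}
```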
2025-12-09 19:26:05 +00:00
satushh
1536d59e30 Remove unnecessary copy in Eth1DataHasEnoughSupport (#16118)

**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

- Remove unnecessary `Copy()` call in `Eth1DataHasEnoughSupport`
- `data.Copy()` was called on every iteration of the vote counting loop,
even though `AreEth1DataEqual` only reads the data and never mutates it.
- Additionally, `Eth1DataVotes()` already returns copies of all votes,
so state is protected regardless.
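
Since the comparison only reads its arguments, the loop can pass the same pointer on every iteration. A hedged sketch of the pattern with illustrative stand-in types (`eth1Data`, `areEth1DataEqual`, and `countVotes` here are not Prysm's real API):

```
package main

import (
	"bytes"
	"fmt"
)

// eth1Data is an illustrative stand-in for the real proto type.
type eth1Data struct {
	DepositRoot  []byte
	DepositCount uint64
	BlockHash    []byte
}

// areEth1DataEqual only reads its arguments and never mutates them,
// which is exactly why a per-iteration Copy() is unnecessary.
func areEth1DataEqual(a, b *eth1Data) bool {
	return a.DepositCount == b.DepositCount &&
		bytes.Equal(a.DepositRoot, b.DepositRoot) &&
		bytes.Equal(a.BlockHash, b.BlockHash)
}

func countVotes(votes []*eth1Data, data *eth1Data) int {
	count := 0
	for _, vote := range votes {
		// Before the fix: areEth1DataEqual(vote, data.Copy()) allocated a
		// fresh copy on every pass through the loop.
		if areEth1DataEqual(vote, data) {
			count++
		}
	}
	return count
}

func main() {
	d := &eth1Data{DepositCount: 7}
	votes := []*eth1Data{d, {DepositCount: 8}, d}
	fmt.Println(countVotes(votes, d)) // 2
}
```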

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-09 19:02:36 +00:00
satushh
11e46a4560 Optimise for loop of MigrateToCold (#16101)

**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

The for loop in the `MigrateToCold` function was brute force in nature. It
can be improved by jumping directly by `slotsPerArchivedPoint` rather
than iterating over every single slot.

```
for slot := oldFSlot; slot < fSlot; slot++ {
  ...
   if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
```
There is no need to do the modulo for every single slot; we can just find
the correct starting point and jump by `slotsPerArchivedPoint` at a time.
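
A small self-contained sketch of the jump-based iteration described above; `firstArchivedSlot` is an illustrative helper, not the PR's actual code.

```
package main

import "fmt"

// firstArchivedSlot returns the first non-zero multiple of interval that is
// >= start, i.e. the first slot the old per-slot loop would have matched.
func firstArchivedSlot(start, interval uint64) uint64 {
	if rem := start % interval; rem != 0 {
		start += interval - rem
	}
	if start == 0 {
		start = interval
	}
	return start
}

func main() {
	oldFSlot, fSlot, interval := uint64(100), uint64(500), uint64(64)
	// Jump directly between archived points instead of testing every slot.
	for slot := firstArchivedSlot(oldFSlot, interval); slot < fSlot; slot += interval {
		fmt.Println("archived point at slot", slot) // 128, 192, ..., 448
	}
}
```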

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-12-09 17:15:52 +00:00
Snezhkko
5a2e51b894 fix(rpc): incorrect constructor return type (#16084)
The constructor `NewStateRootNotFoundError` incorrectly returned
`StateNotFoundError`. This prevented handlers that rely on
`errors.As(err, *lookup.StateRootNotFoundError)` from matching and mapping
the error to HTTP 404. The function now returns
`StateRootNotFoundError` and constructs that type, restoring the intended
behavior for “state root not found” cases.
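
A compact sketch of why the concrete return type matters for `errors.As`, using simplified stand-ins for the `lookup` package types:

```
package main

import (
	"errors"
	"fmt"
)

// StateRootNotFoundError is a simplified stand-in for the lookup type.
type StateRootNotFoundError struct{ root string }

func (e *StateRootNotFoundError) Error() string {
	return "state root not found: " + e.root
}

// The fixed constructor builds and returns the type that handlers probe
// for; returning a different error type made errors.As miss silently.
func NewStateRootNotFoundError(root string) *StateRootNotFoundError {
	return &StateRootNotFoundError{root: root}
}

func main() {
	err := fmt.Errorf("lookup failed: %w", NewStateRootNotFoundError("0xabc"))
	var target *StateRootNotFoundError
	fmt.Println(errors.As(err, &target)) // true, so the handler can map to 404
}
```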

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-09 13:56:00 +00:00
Potuz
d20ec4c7a1 Track the dependent root of the latest finalized checkpoint (#16103)
This PR adds the dependent root of the latest finalized checkpoint to
forkchoice since this node will be typically pruned upon finalization.
2025-12-08 16:16:32 +00:00
terence
7a70abbd15 Add --ignore-unviable-attestations and deprecate --disable-last-epoch-targets (#16094)
This PR introduces flag `--ignore-unviable-attestations` (replaces and
deprecates `--disable-last-epoch-targets`) to drop attestations whose
target state is not viable; default remains to process them unless
explicitly enabled.
2025-12-05 15:03:04 +00:00
aarshkshah1992
bba1358637 change data structures 2025-12-05 08:32:39 +04:00
Potuz
a2b84c9320 Use head state in more cases (#16095)
In some cases, when the previous dependent root coincides with the target
checkpoint's, the head state is guaranteed to have the same shuffling and
active indices.
2025-12-05 03:44:03 +00:00
terence
edef17e41d Add arrival latency tracking for data column sidecars (#16099)
We have this for blob sidecars but not for data columns
2025-12-04 21:28:02 +00:00
Manu NALEPA
85c5d31b5b blobsDataFromStoredDataColumns: Ask the user to use the --supernode flag and shorten the error message. (#16097)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
`blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode`
flag and shorten the error message.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-04 15:54:13 +00:00
aarshkshah1992
e6af417c62 add a go doc for the PeersForTopic function 2025-12-04 17:35:20 +04:00
aarshkshah1992
3509622323 fix comment 2025-12-04 17:20:08 +04:00
aarshkshah1992
80d7bd6084 address review 2025-12-04 17:01:27 +04:00
aarshkshah1992
8d3c3fd40b remove context cancellation comment 2025-12-04 16:58:10 +04:00
aarshkshah1992
1caf7074a7 fix error wrapping 2025-12-04 16:56:53 +04:00
Aarsh Shah
92bd155e3f Apply suggestions from code review
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-12-04 16:50:24 +04:00
aarshkshah1992
8360b0f882 fix CI 2025-12-03 17:22:34 +04:00
aarshkshah1992
bc0d60138c run bazel gazelle 2025-12-03 17:12:48 +04:00
aarshkshah1992
32f377a665 fix crawl timing 2025-12-03 16:59:37 +04:00
aarshkshah1992
2ab792b0fe godocs 2025-12-03 16:50:13 +04:00
Aarsh Shah
2ae8de05dd Apply suggestions from code review
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-12-03 16:27:07 +04:00
aarshkshah1992
cff96129b5 fix unsafe cast 2025-12-03 16:22:41 +04:00
aarshkshah1992
8abd1db7c1 change maxConcurrentDials to uint 2025-12-03 16:08:55 +04:00
aarshkshah1992
a4d726184f rename to p2p service 2025-12-03 16:02:19 +04:00
aarshkshah1992
85acf242f6 remove circular dependency 2025-12-03 16:00:29 +04:00
Manu NALEPA
fa056c2d21 Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG (#16087)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Move the "Not enough connected peers" (for a given subnet) from WARN to
DEBUG

**Rationale:**
The "Not enough connected peers" log (screenshot omitted) is potentially printed every 5 minutes.
Every 5 minutes, the BN checks if, for a given subnet, the actual count
of peers is at least equal to a minimum one.
If not, this kind of log is printed.

When validators are connected and selected to be an aggregator in the
next epoch, the BN needs to subscribe and find new peers in the
corresponding attestation subnet.
If, right after the beacon is subscribed (but before it had time to find
peers), the "5 min ticker" ticks, then this warning log is displayed,
even if the slot for which the validator is selected as an aggregator is
still minutes away.

For this reason, this log is moved from WARN to DEBUG

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-03 11:07:24 +00:00
aarshkshah1992
ba6d1d0c6b update godoc 2025-12-03 13:34:41 +04:00
aarshkshah1992
69d3453f97 fix sync 2025-12-03 13:32:31 +04:00
aarshkshah1992
a69cf6f5d4 refactor update peers 2025-12-03 13:29:20 +04:00
aarshkshah1992
0a46c2d16d remove topic type 2025-12-03 13:12:01 +04:00
aarshkshah1992
3c1b8859bc rename functions 2025-12-03 12:40:25 +04:00
aarshkshah1992
9f645ae0a4 refactor update peer 2025-12-03 12:25:38 +04:00
aarshkshah1992
5eeeb9ed15 wrap error and remove nodeId 2025-12-03 11:06:24 +04:00
kasey
61de11e2c4 Backfill data columns (#15580)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Adds data column support to backfill.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-12-02 15:19:32 +00:00
Manu NALEPA
2773bdef89 Remove NUMBER_OF_COLUMNS and MAX_CELLS_IN_EXTENDED_MATRIX configuration. (#16073)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
This pull request removes `NUMBER_OF_COLUMNS` and
`MAX_CELLS_IN_EXTENDED_MATRIX` configuration.

**Other notes for review**
Please read commit by commit, with commit messages.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-29 09:30:54 +00:00
Manu NALEPA
2a23dc7f4a Improve logs (#16075)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
- Added log prefix to the `genesis` package.
- Added log prefix to the `params` package.
- `WithGenesisValidatorsRoot`: Use camelCase for log field param.
- Move `Origin checkpoint found in db` log from WARN to INFO, since it
is the expected behaviour.

**Other notes for review**
Please read commit by commit

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-28 14:34:02 +00:00
james-prysm
f97622b054 e2e support electra forkstart (#16048)

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

Allows starting e2e tests from Electra or a specific fork of interest
again. This doesn't fix the missing execution requests tests, which
Nishant reverted.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-11-26 16:33:24 +00:00
Fibonacci747
08d0f42725 beacon-chain/das: remove dead slot parameter (#16021)
The slot parameter in blobCacheEntry.filter was unused and redundant.
All slot/epoch-sensitive checks happen before filter
(commitmentsToCheck), and disk availability is handled via
BlobStorageSummary (epoch-aware).

Changes:
- Drop slot from blobCacheEntry.filter signature.
- Update call sites in availability_blobs.go and blob_cache_test.go.

Mirrors the data_column_cache.filter API (which does not take slot),
reduces API noise, and removes dead code without changing behavior.
2025-11-26 16:04:05 +00:00
james-prysm
74c8a25354 adding semi-supernode feature (#16029)

**What type of PR is this?**

Feature


**What does this PR do? Why is it needed?**

| Feature | Semi-Supernode | Supernode |
| --- | --- | --- |
| **Custody Groups** | 64 | 128 |
| **Data Columns** | 64 | 128 |
| **Storage** | ~50% | ~100% |
| **Blob Reconstruction** | Yes (via Reed-Solomon) | No reconstruction needed |
| **Flag** | `--semi-supernode` | `--supernode` |
| **Can serve all blobs** | Yes (with reconstruction) | Yes (directly) |

**Note:** if your validators' total effective balance results in more
custody than the semi-supernode default, it will override those
requirements.

cgc=64 from @nalepae 
Pro:
- We are useful to the network
- Less disconnection likelihood
- Straight forward to implement

Con:
- We cannot revert to a full node
- We have to serve incoming RPC requests corresponding to 64 columns

Tested the following using this kurtosis setup

```
participants:
  # Super-nodes
  - el_type: geth
    el_image: ethpandaops/geth:master
    cl_type: prysm
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    count: 2
    cl_extra_params:
      - --supernode
    vc_extra_params:
      - --verbosity=debug
  # Full-nodes
  - el_type: geth
    el_image: ethpandaops/geth:master
    cl_type: prysm
    vc_image: gcr.io/offchainlabs/prysm/validator:latest
    cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
    count: 2
    validator_count: 1
    cl_extra_params:
      - --semi-supernode
    vc_extra_params:
      - --verbosity=debug

additional_services:
  - dora
  - spamoor

spamoor_params:
  image: ethpandaops/spamoor:master
  max_mem: 4000
  spammers:
    - scenario: eoatx
      config:
        throughput: 200
    - scenario: blobs
      config:
        throughput: 20

network_params:
  fulu_fork_epoch: 0
  withdrawal_type: "0x02"
  preset: mainnet

global_log_level: debug
```

```
curl -H "Accept: application/json" http://127.0.0.1:32961/eth/v1/node/identity
{"data":{"peer_id":"16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw","enr":"enr:-Ni4QIH5u2NQz17_pTe9DcCfUyG8TidDJJjIeBpJRRm4ACQzGBpCJdyUP9eGZzwwZ2HS1TnB9ACxFMQ5LP5njnMDLm-GAZqZEXjih2F0dG5ldHOIAAAAAAAwAACDY2djQIRldGgykLZy_whwAAA4__________-CaWSCdjSCaXCErBAAE4NuZmSEAAAAAIRxdWljgjLIiXNlY3AyNTZrMaECulJrXpSOBmCsQWcGYzQsst7r3-Owlc9iZbEcJTDkB6qIc3luY25ldHMFg3RjcIIyyIN1ZHCCLuA","p2p_addresses":["/ip4/172.16.0.19/tcp/13000/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw","/ip4/172.16.0.19/udp/13000/quic-v1/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw"],"discovery_addresses":["/ip4/172.16.0.19/udp/12000/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw"],"metadata":{"seq_number":"3","attnets":"0x0000000000300000","syncnets":"0x05","custody_group_count":"64"}}}
```

```
curl -s http://127.0.0.1:32961/eth/v1/debug/beacon/data_column_sidecars/head | jq '.data | length'
64
```

```
curl -X 'GET' \
  'http://127.0.0.1:32961/eth/v1/beacon/blobs/head' \
  -H 'accept: application/json'
```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for reviewers to understand this PR.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: james-prysm <jhe@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-11-26 15:31:15 +00:00
Potuz
a466c6db9c Fix linter (#16058)
Fix linter, otherwise #16049 fails the linter.
2025-11-26 15:03:26 +00:00
Preston Van Loon
4da6c4291f Fix metrics logging of http_error_count (#16055)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

I am seeing massive metrics cardinality on error cases. 

Example:

```
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682952",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682953",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682954",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682955",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682956",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682957",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682958",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682959",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682960",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682961",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682962",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682966",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682967",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682968",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682969",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682970",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682971",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682972",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682973",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682974",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682975",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682976",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682977",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682978",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682980",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682983",method="GET"} 2
```

Now it looks like this:

```
# TYPE http_error_count counter
http_error_count{code="Not Found",endpoint="beacon.GetBlockV2",method="GET"} 606
http_error_count{code="Not Found",endpoint="blob.Blobs",method="GET"} 4304
```

**Which issues(s) does this PR fix?**

**Other notes for review**


Other uses of http metrics use the endpoint name rather than the request
URL.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-26 04:22:35 +00:00
Forostovec
2d242a8d09 execution: avoid redundant WithHttpEndpoint when JWT is provided (#16032)
This change ensures FlagOptions in cmd/beacon-chain/execution/options.go
appends only one endpoint option depending on whether a JWT secret is
present. Previously the code always appended WithHttpEndpoint and then
conditionally appended WithHttpEndpointAndJWTSecret which overwrote the
first option, adding unnecessary allocations and cognitive overhead.
Since WithHttpEndpointAndJWTSecret fully configures the endpoint,
including URL and Bearer auth needed by the Engine API, the initial
WithHttpEndpoint is redundant when a JWT is supplied. The refactor
preserves behavior while simplifying option composition and avoiding
redundant state churn.
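
A sketch of the resulting option composition, with simplified names standing in for Prysm's execution-service options:

```
package main

import "fmt"

type service struct{ endpoint, jwt string }

type option func(*service)

func withHttpEndpoint(url string) option {
	return func(s *service) { s.endpoint = url }
}

// Fully configures the endpoint, including the Bearer auth the Engine API needs.
func withHttpEndpointAndJWTSecret(url string, secret []byte) option {
	return func(s *service) { s.endpoint, s.jwt = url, string(secret) }
}

// flagOptions appends exactly one endpoint option, instead of appending a
// default and then overwriting it when a JWT secret is present.
func flagOptions(endpoint string, jwtSecret []byte) []option {
	var opts []option
	if len(jwtSecret) > 0 {
		opts = append(opts, withHttpEndpointAndJWTSecret(endpoint, jwtSecret))
	} else {
		opts = append(opts, withHttpEndpoint(endpoint))
	}
	return opts
}

func main() {
	s := &service{}
	for _, opt := range flagOptions("http://localhost:8551", []byte("secret")) {
		opt(s)
	}
	fmt.Println(s.endpoint, s.jwt != "") // http://localhost:8551 true
}
```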

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-11-25 17:43:38 +00:00
Radosław Kapka
6be1541e57 Initialize the ExecutionRequests field in gossip block map (#16047)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

When unmarshaling a block with fastssz, if the target block's
`ExecutionRequests` field is nil, it will not get populated
```
if b.ExecutionRequests == nil {
	b.ExecutionRequests = new(v1.ExecutionRequests)
}
if err = b.ExecutionRequests.UnmarshalSSZ(buf); err != nil {
	return err
}
```
This is true for other fields and that's why we initialize them in our
gossip data maps. There is no bug at the moment because even if
execution requests are nil, we initialize them in
`consensus-types/blocks/proto.go`
```
er := pb.ExecutionRequests
if er == nil {
	er = &enginev1.ExecutionRequests{}
}
```
However, since we initialize other fields in the data map, it's safer to
do it for execution requests too, to avoid a bug in case the code in
`consensus-types/blocks/proto.go` changes in the future.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-25 16:17:44 +00:00
Bastin
b845222ce7 Integrate state-diff into HasState() (#16045)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR integrates state-diff into `HasState()`.

One thing to note: we assume that if a given block root has either a
state summary or a block in the db, and also falls in the state diff
tree, then there must exist a state. This function could return true even
when no actual state was saved, due to some error. But this is fine,
because we rely on that assumption throughout the whole state diff
feature.
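
The assumption reads naturally as code. A hedged sketch with an illustrative interface (not Prysm's actual DB API):

```
package statediff

// stateDB is an illustrative interface, not Prysm's actual DB API.
type stateDB interface {
	InStateDiffTree(root [32]byte) bool // root falls in the diff tree's coverage
	HasStateSummary(root [32]byte) bool
	HasBlock(root [32]byte) bool
	HasSavedState(root [32]byte) bool // the pre-state-diff path
}

// hasState encodes the assumption above: inside the diff tree, a state
// summary or a block in the db is taken as proof that the state exists,
// even though a failed write could make this a false positive.
func hasState(db stateDB, root [32]byte) bool {
	if db.InStateDiffTree(root) {
		return db.HasStateSummary(root) || db.HasBlock(root)
	}
	return db.HasSavedState(root)
}
```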
2025-11-25 13:22:57 +00:00
aarshkshah1992
2061fc8a2f design doc 2025-11-25 15:49:37 +04:00
terence
5bbdebee22 Add Gloas consensus type block package (#15618)
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-11-25 09:21:19 +00:00
Bastin
26100e074d Refactor slot by blockroot (#16040)
**Review after #16033 is merged**

**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
This PR refactors the code to find the corresponding slot of a given
block root using state summary or the block itself, into its own
function `SlotByBlockRoot(ctx, blockroot)`.

Note that there exists a function `slotByBlockRoot(ctx context.Context,
tx *bolt.Tx, blockRoot []byte)` immediately below the new function. Also
note that this function has two drawbacks, which led to the creation of
the new function:
- the old function requires a boltdb tx, which is not necessarily
available to the caller.
- the old function does NOT make use of the state summary cache. 

edit: 
- the old function also uses the state bucket to retrieve the state and
its slot. This is not something we want in the state diff feature,
since there is no state bucket.
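
A hedged sketch of the lookup order the new function enables, with an illustrative interface rather than Prysm's API: consult the in-memory summary cache first, then the state summary bucket, then fall back to the block, with no boltdb tx required.

```
package lookup

import "fmt"

// slotSource is illustrative; the real code reads Prysm's caches and buckets.
type slotSource interface {
	CachedSummarySlot(root [32]byte) (uint64, bool) // in-memory summary cache
	SummarySlot(root [32]byte) (uint64, bool)       // state summary bucket
	BlockSlot(root [32]byte) (uint64, bool)         // fall back to the block
}

// slotByBlockRoot tries the cheapest source first.
func slotByBlockRoot(src slotSource, root [32]byte) (uint64, error) {
	if slot, ok := src.CachedSummarySlot(root); ok {
		return slot, nil
	}
	if slot, ok := src.SummarySlot(root); ok {
		return slot, nil
	}
	if slot, ok := src.BlockSlot(root); ok {
		return slot, nil
	}
	return 0, fmt.Errorf("no slot known for block root %#x", root)
}
```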
2025-11-24 19:59:13 +00:00
satushh
768fa0e5a1 Revert "Metrics for eas" (#16043)
Reverts OffchainLabs/prysm#16008
2025-11-24 16:12:30 +00:00
aarshkshah1992
eb93350583 Merge remote-tracking branch 'origin/feat/gossipsub-control-pane-peer-crawler-peer-controller' into feat/gossipsub-control-pane-peer-crawler-peer-controller 2025-11-24 17:50:19 +04:00
aarshkshah1992
affdab7776 fix changelog 2025-11-24 17:49:47 +04:00
Aarsh Shah
eb556adab5 Merge branch 'develop' into feat/gossipsub-control-pane-peer-crawler-peer-controller 2025-11-24 17:47:11 +04:00
Bastin
11bb8542a4 Integrate state-diff into State() (#16033)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR integrates the state diff path into the `State()` function from
`db/kv`, which allows reading of states using the state diff db, when
the `EnableStateDiff` flag is enabled.

**Notes for reviewers:**
Files `kv/state_diff_test.go` and `config/features/config.go` only
contain renamings:
- `kv/state_diff_test.go`: rename `setDefaultExponents()` to
`setDefaultStateDiffExponents()` to be less vague.
- `config/features/config.go`: rename `enableStateDiff` to
`EnableStateDiff` to make it public.
2025-11-24 12:42:43 +00:00
aarshkshah1992
09d886c676 fix test 2025-11-24 15:18:27 +04:00
aarshkshah1992
11c5c6fb8b fix dynamic families 2025-11-24 15:06:35 +04:00
aarshkshah1992
10804bbb56 fix conflicts 2025-11-24 14:58:09 +04:00
aarshkshah1992
e6f3b636ac fix lint 2025-11-24 14:54:23 +04:00
aarshkshah1992
e95676dd91 fix conflicts 2025-11-24 14:41:58 +04:00
Bharath Vedartham
b78c2c354b add a metric to measure the attestation gossip validator (#15785)
This PR adds a Summary metric to measure the p2p topic validation time
for attestations arriving from the network. It times the
`validateCommitteeIndexBeaconAttestation` method.

The metric is called `gossip_attestation_verification_milliseconds`
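
A sketch of how such a Summary is typically registered and observed with the Prometheus Go client; only the metric name comes from the PR, the wiring here is illustrative.

```
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var gossipAttestationVerification = promauto.NewSummary(prometheus.SummaryOpts{
	Name: "gossip_attestation_verification_milliseconds",
	Help: "Time taken to validate an attestation arriving over gossip.",
})

// timeValidation wraps a validation func and records its duration in ms.
func timeValidation(validate func() bool) bool {
	start := time.Now()
	defer func() {
		gossipAttestationVerification.Observe(float64(time.Since(start).Milliseconds()))
	}()
	return validate()
}
```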

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-11-21 21:57:19 +00:00
Muzry
55e2001a0b Check the JWT secret length (#15939)

**What type of PR is this?**
 Bug fix

**What does this PR do? Why is it needed?**

Previously, JWT secrets longer than 256 bits could cause client
compatibility issues. For example, Prysm would accept longer secrets
while Geth strictly requires exactly 32 bytes, causing Geth startup
failures when using the same secret file.

This change enforces the Engine API specification requirement that JWT
secrets must be exactly 256 bits (32 bytes), ensuring consistent
behavior across different client implementations.
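
A minimal sketch of the enforced rule (the helper is illustrative, not Prysm's actual parser): decode the hex secret and reject anything that is not exactly 32 bytes.

```
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// parseJWTSecret enforces the Engine API requirement of a 256-bit secret.
func parseJWTSecret(hexSecret string) ([]byte, error) {
	secret, err := hex.DecodeString(strings.TrimPrefix(strings.TrimSpace(hexSecret), "0x"))
	if err != nil {
		return nil, err
	}
	if len(secret) != 32 {
		return nil, fmt.Errorf("jwt secret must be exactly 32 bytes, got %d", len(secret))
	}
	return secret, nil
}

func main() {
	_, err := parseJWTSecret("0xdeadbeef") // 4 bytes: rejected
	fmt.Println(err)
}
```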

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-11-21 21:51:03 +00:00
sashaodessa
c093283b1b Replace fixed sleep delays with active polling in prometheus service test (#15828)
## **Description:**

**What type of PR is this?**

> Bug fix

**What does this PR do? Why is it needed?**

Replaces fixed `time.Sleep(time.Second)` delays in `TestLifecycle` with
active polling to wait for service readiness/shutdown. This improves
test reliability and reduces execution time by eliminating unnecessary
waits when services start/stop faster than expected.
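
A minimal sketch of the polling pattern, assuming an illustrative `waitFor` helper rather than the test's actual code (the PR notes a 50ms interval and a 3s timeout):

```
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls cond every interval until it returns true or timeout elapses.
func waitFor(cond func() bool, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("condition not met before deadline")
}

func main() {
	started := time.Now()
	err := waitFor(func() bool { return time.Since(started) > 120*time.Millisecond },
		3*time.Second, 50*time.Millisecond)
	fmt.Println(err) // <nil>: returns as soon as the condition holds
}
```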

**Which issues(s) does this PR fix?**

N/A - Minor test improvement

**Other notes for review**

- Uses 50ms polling interval with 3s timeout for both startup and
shutdown checks
- Maintains same test logic while making it more efficient and less
flaky
- No functional changes to the service itself

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-21 21:39:24 +00:00
terence
5449fd0352 p2p: wire stategen into service for last finalized state (#16034)
This PR removes the last production usage of `LastArchivedRoot` by:

- extending the P2P config to accept a `StateGen` dependency and wiring it
up from the beacon node
- updating gossip scoring to read the active validator count via stategen
using the last finalized block root

Note: I think the correct implementation should process the last
finalized state to the current slot, but that's a bigger change I don't
want to make in this PR; I just want to remove usages of `LastArchivedRoot`.
2025-11-21 16:13:00 +00:00
Bastin
3d7f7b588b Fix state diff repetitive anchor slot bug (#16037)
**What type of PR is this?**
 Bug fix

**What does this PR do? Why is it needed?**
This PR fixes a bug in the state diff `getBaseAndDiffChain()` method. In
the process of finding the diff chain indices, there could be a scenario
where multiple levels return the same diff slot number not equal to the
base (full snapshot) slot number.
This resulted in multiple diff items being added to the diff chain for
the same slot, but on different levels, which resulted in errors reading
the diff.

Fix: we keep a `lastSeenAnchorSlot`, initially equal to the
`BaseAnchorSlot`, and update it every time we see a new anchor slot. We
skip the current anchor slot if it is equal to the `lastSeenAnchorSlot`.

Scenario example:
- exponents: [20, 14, 10, 7, 5]
- offset: 0
- slots saved: slot 2^11 and slot (2^11 + 2^5)
- slot to read: slot (2^11 + 2^5)
- resulting list of anchor slots (diff chain indices): [0, 0, 2^11, 2^11, 2^11 + 2^5]
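
A hedged sketch of the dedup described above; the names follow the description rather than the real code, and the scenario's numbers are reused as a check.

```
package main

import "fmt"

// diffChainSlots keeps one entry per distinct anchor slot: a level whose
// anchor repeats the last seen one (starting from the base snapshot's slot)
// is skipped.
func diffChainSlots(anchorsPerLevel []uint64, baseAnchorSlot uint64) []uint64 {
	chain := make([]uint64, 0, len(anchorsPerLevel))
	lastSeenAnchorSlot := baseAnchorSlot
	for _, anchor := range anchorsPerLevel {
		if anchor == lastSeenAnchorSlot {
			continue // same slot already covered at a higher level
		}
		lastSeenAnchorSlot = anchor
		chain = append(chain, anchor)
	}
	return chain
}

func main() {
	// The scenario above: anchors [0, 0, 2^11, 2^11, 2^11+2^5], base slot 0.
	fmt.Println(diffChainSlots([]uint64{0, 0, 2048, 2048, 2080}, 0)) // [2048 2080]
}
```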
2025-11-21 15:42:27 +00:00
aarshkshah1992
86b65e0912 re-run test 2025-11-21 17:10:35 +04:00
aarshkshah1992
e9dac06037 finish tests 2025-11-21 16:38:00 +04:00
aarshkshah1992
893cf60921 fix tests 2025-11-21 15:37:15 +04:00
aarshkshah1992
41c9f160a2 fix bazel 2025-11-21 14:08:02 +04:00
aarshkshah1992
707abe6112 fix mock peer manager 2025-11-21 14:06:36 +04:00
aarshkshah1992
76975a134d fix build 2025-11-21 14:00:40 +04:00
aarshkshah1992
672de432a2 finish all changes 2025-11-21 12:08:17 +04:00
aarshkshah1992
4a6d88d9fb finish all changes 2025-11-21 12:05:57 +04:00
terence
2f067c4164 Add supported / unsupported version for fork enum (#16030)
* gate unreleased forks

* Preston + Bastin's feedback

* Rename back to all versions

* Clean up, mark PR ready for review

* Changelog
2025-11-20 14:58:41 +00:00
Radosław Kapka
81266f60af Move BlockGossipReceived event to the end of gossip validation. (#16031)
* Move `BlockGossipReceived` event to the end of gossip validation.

* changelog <3

* tests
2025-11-19 22:34:02 +00:00
Bastin
207f36065a state-diff configs & kv functions (#15903)
* state-diff configs

* state diff kv functions

* potuz's comments

* Update config.go

* fix merge conflicts

* apply bazel's suggestion and fix some bugs

* preston's feedback
2025-11-19 19:27:56 +00:00
terence
eb9feabd6f stop emitting payload attribute events during late block handling (#16026)
* stop emitting payload attribute events during late block handling when we are not proposing the next slot

* Change the behavior to not even enter FCU if we are not proposing next slot
2025-11-19 16:51:46 +00:00
terence
bc0868e232 Add Gloas beacon state package (#15611)
* Add Gloas protobuf definitions with spec tests

Add Gloas state fields to beacon state implementation

* Remove shared field for pending payment

* Radek's feedback

* Potuz feedback

* use slice concat

* Fix comment

* Fix concat

* Fix comment

* Fix correct index
2025-11-18 15:08:31 +00:00
Chris Berry
35c1ab5e88 Downgrade log level for all validator indices. (#15998)
* Update logging behaviour for updated fee recipient.

* Updated changelog.

* Display validator indices only on TRACE

* Fix tests

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-11-17 16:44:03 +00:00
aarshkshah1992
1ff836e549 peer controller first draft 2025-11-17 19:39:18 +04:00
Radosław Kapka
21bb6f5258 Remove validator cross client from e2e (#16025) 2025-11-17 15:02:06 +00:00
terence
4914882e97 Record gossip KZG batch verification durations (#16018)
* Record gossip KZG batch verification durations

* Add path
2025-11-14 23:18:56 +00:00
Preston Van Loon
2302ef918a Vendored github.com/tyler-smith/go-bip39 (#16015)
* Vendor go-bip39 dependency locally to third_party/

The github.com/tyler-smith/go-bip39 repository has been deleted from GitHub 
but is still needed for BIP-39 mnemonic functionality in the validator wallet 
system. This change vendors v1.1.0 of the library into third_party/go-bip39/ 
to ensure continued availability.

Changes:
- Copy go-bip39 v1.1.0 source from Go module cache to third_party/go-bip39/
- Create BUILD.bazel files for main package and wordlists subpackage
- Update 5 BUILD.bazel files to reference local vendored version instead of external dependency
- Remove go-bip39 from go.mod and deps.bzl
- All builds and tests pass successfully

The vendored package includes all 9 language wordlists (English, Chinese Simplified/Traditional, 
Czech, French, Italian, Japanese, Korean, Spanish) and maintains the original import paths for 
compatibility.

* Changelog fragment

* use go mod replace for vendored lib (see the sketch below)

* Run gazelle
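
The "use go mod replace for vendored lib" step presumably corresponds to a directive along these lines (a sketch, not the exact go.mod contents):

```
// go.mod (sketch): point the deleted upstream module at the vendored copy
// so existing import paths keep working.
replace github.com/tyler-smith/go-bip39 => ./third_party/go-bip39
```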

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-11-14 17:58:44 +00:00
Galoretka
76f3083090 fix(validator/db): proposals progress bar count (#16020)
* fix(validator/db): proposals progress bar count

* Create Galoretka_convert-progress.md
2025-11-14 13:24:21 +00:00
Preston Van Loon
2fd6bd8150 Add golang.org/x/tools modernize static analyzer and fix violations (#15946)
* Ran gopls modernize to fix everything

go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...

* Override rules_go provided dependency for golang.org/x/tools to v0.38.0.

To update this, checked out rules_go, then ran `bazel run //go/tools/releaser -- upgrade-dep -mirror=false org_golang_x_tools` and copied the patches.

* Fix buildtag violations and ignore buildtag violations in external

* Introduce modernize analyzer package.

* Add modernize "any" analyzer.

* Fix violations of any analyzer

* Add modernize "appendclipped" analyzer.

* Fix violations of appendclipped

* Add modernize "bloop" analyzer.

* Add modernize "fmtappendf" analyzer.

* Add modernize "forvar" analyzer.

* Add modernize "mapsloop" analyzer.

* Add modernize "minmax" analyzer.

* Fix violations of minmax analyzer

* Add modernize "omitzero" analyzer.

* Add modernize "rangeint" analyzer.

* Fix violations of rangeint.

* Add modernize "reflecttypefor" analyzer.

* Fix violations of reflecttypefor analyzer.

* Add modernize "slicescontains" analyzer.

* Add modernize "slicessort" analyzer.

* Add modernize "slicesdelete" analyzer. This is disabled by default for now. See https://go.dev/issue/73686.

* Add modernize "stringscutprefix" analyzer.

* Add modernize "stringsbuilder" analyzer.

* Fix violations of stringsbuilder analyzer.

* Add modernize "stringsseq" analyzer.

* Add modernize "testingcontext" analyzer.

* Add modernize "waitgroup" analyzer.

* Changelog fragment

* gofmt

* gazelle

* Add modernize "newexpr" analyzer.

* Disable newexpr until go1.26

* Add more details in WORKSPACE on how to update the override

* @nalepae feedback on min()

* gofmt

* Fix violations of forvar
2025-11-14 01:27:22 +00:00
terence
f77b78943a Use explicit slot component timing configs (#15999)
* Use new timing configs (due BPS)

* Bastin's feedback
2025-11-13 21:55:32 +00:00
james-prysm
7ba60d93f2 Changed subscribe-all-data-subnets to supernode (#16012)
* adding alias

* kasey's suggestion

* updating description

* Update changelog/james-prysm_supernode-alias.md

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>

---------

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
2025-11-13 18:51:00 +00:00
aarshkshah1992
08f038fe80 fix lint 2025-11-13 18:57:18 +04:00
Preston Van Loon
b94904b784 Update CHANGELOG.md for v7.0.0 release (#16004)
* Unclog for v7.0.0

* Changelog notes

* Changelog fragment
2025-11-13 14:25:29 +00:00
aarshkshah1992
63279bcadf revert new line change 2025-11-13 17:06:47 +04:00
aarshkshah1992
61628efd44 run CI 2025-11-13 17:04:41 +04:00
aarshkshah1992
9b07f13cd3 tests for the crawler 2025-11-13 16:58:49 +04:00
satushh
1af12d841d Metrics for eas (#16008)
* metrics for eas

* changelog
2025-11-13 10:59:09 +00:00
aarshkshah1992
1397a79b4c changelog 2025-11-12 17:09:33 +04:00
aarshkshah1992
929115639d draft 2025-11-12 16:38:07 +04:00
aarshkshah1992
14dca40786 draft 2025-11-12 16:36:53 +04:00
aarshkshah1992
fa7596bceb first draft 2025-11-12 12:54:41 +04:00
aarshkshah1992
5161f087fc fix compilation 2025-11-06 19:38:59 +04:00
aarshkshah1992
71050ab076 fix bazel 2025-11-06 19:10:19 +04:00
aarshkshah1992
614367ddcf fix lint 2025-11-05 22:26:55 +04:00
aarshkshah1992
3f7371445b fix test in sync 2025-11-05 22:23:26 +04:00
aarshkshah1992
a15a1ade17 fix test 2025-11-05 21:31:04 +04:00
aarshkshah1992
798376b1d7 fix bazel 2025-11-05 21:21:51 +04:00
aarshkshah1992
93271050bf more tests 2025-11-05 21:17:03 +04:00
aarshkshah1992
8dfbabc691 fork watcher test works 2025-11-05 14:45:35 +04:00
aarshkshah1992
af2522e5f0 fix schedule 2025-11-05 09:49:27 +04:00
aarshkshah1992
452d42bd10 fix test in sync 2025-11-05 09:21:34 +04:00
aarshkshah1992
3e985377ce fix test 2025-11-05 09:05:57 +04:00
aarshkshah1992
ab2e836d3f fix test 2025-11-04 21:11:45 +04:00
Aarsh Shah
14158bea9c Merge branch 'develop' into feat/use-topic-abstraction-for-gossipsub-and-refactor-fork-watcher 2025-11-04 12:02:25 -05:00
aarshkshah1992
e14590636f bazel gazelle 2025-11-04 20:08:41 +04:00
aarshkshah1992
ce3660d2e7 finish draft 2025-11-04 20:08:11 +04:00
aarshkshah1992
7853cb9db0 changelog fragment 2025-11-03 19:14:46 +04:00
aarshkshah1992
8cfeda1473 WIP gossipsub controller 2025-11-03 19:02:41 +04:00
903 changed files with 259638 additions and 5640 deletions


@@ -34,4 +34,5 @@ Fixes #
 - [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
 - [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
 - [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
 - [ ] I have added a description with sufficient context for reviewers to understand this PR.
+- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).

.gitignore (vendored): 7 additions

@@ -38,6 +38,13 @@ metaData
# execution API authentication
jwt.hex
execution/
# local execution client data
execution/
# local documentation
CLAUDE.md
# manual testing
tmp


@@ -197,6 +197,25 @@ nogo(
     "//tools/analyzers/logcapitalization:go_default_library",
     "//tools/analyzers/logruswitherror:go_default_library",
     "//tools/analyzers/maligned:go_default_library",
+    "//tools/analyzers/modernize/any:go_default_library",
+    "//tools/analyzers/modernize/appendclipped:go_default_library",
+    "//tools/analyzers/modernize/bloop:go_default_library",
+    "//tools/analyzers/modernize/fmtappendf:go_default_library",
+    "//tools/analyzers/modernize/forvar:go_default_library",
+    "//tools/analyzers/modernize/mapsloop:go_default_library",
+    "//tools/analyzers/modernize/minmax:go_default_library",
+    #"//tools/analyzers/modernize/newexpr:go_default_library", # Disabled until go 1.26.
+    "//tools/analyzers/modernize/omitzero:go_default_library",
+    "//tools/analyzers/modernize/rangeint:go_default_library",
+    "//tools/analyzers/modernize/reflecttypefor:go_default_library",
+    "//tools/analyzers/modernize/slicescontains:go_default_library",
+    #"//tools/analyzers/modernize/slicesdelete:go_default_library", # Disabled, see https://go.dev/issue/73686
+    "//tools/analyzers/modernize/slicessort:go_default_library",
+    "//tools/analyzers/modernize/stringsbuilder:go_default_library",
+    "//tools/analyzers/modernize/stringscutprefix:go_default_library",
+    "//tools/analyzers/modernize/stringsseq:go_default_library",
+    "//tools/analyzers/modernize/testingcontext:go_default_library",
+    "//tools/analyzers/modernize/waitgroup:go_default_library",
     "//tools/analyzers/nop:go_default_library",
     "//tools/analyzers/nopanic:go_default_library",
     "//tools/analyzers/properpermissions:go_default_library",


@@ -4,6 +4,172 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
Release highlights:
- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration to custody enough data columns to reconstruct blobs. Use flag `--semi-supernode` to custody at least 50% of the data columns.
- Critical fixes in attestation processing.
A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
### Added
- add fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)
### Changed
- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Change the logging behaviour of Updated fee recipients to only log count of validators at Debug level and all validator indices at Trace level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blobs_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Removed
- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
### Fixed
- Nil check for the block in `fetchOriginSidecars` when it doesn't exist in the DB. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check that the JWT secret length is exactly 256 bits (32 bytes), as required by the Engine API specification; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- `http_error_count` now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metric cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- Fix E2E tests so they can start from the Electra genesis fork or later forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
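A minimal sketch of the 256-bit JWT secret check described in the Fixed entry above; the helper name and hex handling here are illustrative stand-ins, not Prysm's actual implementation:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// validateJWTSecret is a hypothetical helper: the Engine API requires
// the JWT secret to be exactly 256 bits (32 bytes) once hex-decoded.
func validateJWTSecret(hexSecret string) ([]byte, error) {
	trimmed := strings.TrimPrefix(strings.TrimSpace(hexSecret), "0x")
	secret, err := hex.DecodeString(trimmed)
	if err != nil {
		return nil, fmt.Errorf("jwt secret is not valid hex: %w", err)
	}
	if len(secret) != 32 {
		return nil, fmt.Errorf("jwt secret must be exactly 32 bytes, got %d", len(secret))
	}
	return secret, nil
}

func main() {
	if _, err := validateJWTSecret("0x" + strings.Repeat("ab", 32)); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("secret accepted")
}
```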
## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08
This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.
A post-mortem doc with full details is expected to be published later this week.
### Changed
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Fixed
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10
This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.
Other than the mainnet Fulu fork schedule, there are a few callouts in this release:
- `by-epoch` blob storage format is the default for new installations. Users who haven't migrated will see a warning to migrate to the new format. Existing deployments may set `--blob-storage-layout=by-epoch` to perform the migration.
- Several deprecated flags have been deleted! Please review the "removed" section of this changelog carefully. If you are referencing a removed flag, Prysm will not start! All of these flags had no effect for at least one release.
- Several deprecated API endpoints have been deleted. Please review the "removed" section of this changelog carefully.
- Backfill is not supported in Fulu. This is expected to be fixed in the next release and should be delivered prior to the mainnet activation fork.
- The builder default gas limit is raised from `45000000` (45 MGas) to `60000000` (60 MGas).
- Several bug fixes and improvements.
### Added
- Allow custom headers in validator client HTTP requests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15884)
- Metric to track data columns recovered from execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15924)
- Metrics: Add count of peers per direction (inbound/outbound) and transport (TCP/QUIC). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15922)
- `p2p_subscribed_topic_peer_total`: Reset to avoid dangling values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15922)
- Add `p2p_minimum_peers_per_subnet` metric. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15922)
- Added GeneralizedIndicesFromPath function to calculate the GIs for a given sszInfo object and a PathElement. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15873)
- Add Gloas protobuf definitions with spec tests and SSZ serialization support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15601)
- Fulu fork epoch for mainnet configurations set for December 3, 2025, 09:49:11pm UTC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15975)
- Added BPO schedules for December 9, 2025, 02:21:11pm UTC and January 7, 2026, 01:01:11am UTC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15975)
### Changed
- Updated consensus spec tests to v1.6.0-beta.1 with new hashes and URL template. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15918)
- Use the `by-epoch` blob storage layout by default and log a warning to users who continue to use the flat layout, encouraging them to switch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15904)
- Update go-netroute to `v0.3.0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15934)
- Introduced Path type for SSZ-QL queries and updated PathElement (removed Length field, kept Index), enforcing that len queries are terminal (at most one per path). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15935)
- Changed length query syntax from `block.payload.len(transactions)` to `len(block.payload.transactions)`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15935)
- Update `go-netroute` to `v0.4.0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15949)
- Updated consensus spec tests to v1.6.0-beta.2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15960)
- Updated go bitfield from prysmaticlabs to offchainlabs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15968)
- Bump builder default gas limit from `45000000` (45 MGas) to `60000000` (60 MGas). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15979)
- Use head state for block pubsub validation when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15972)
- Updated consensus spec from 1.6.0-beta.2 to 1.6.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15975)
- Upgrade Prysm v6 to v7. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15989)
- Use head state readonly when possible to validate data column sidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15977)
### Removed
- Log mentioning removed flag `--show-deposit-data`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15926)
- Remove Beacon API endpoints that were deprecated in Electra: `GET /eth/v1/beacon/deposit_snapshot`, `GET /eth/v1/beacon/blocks/{block_id}/attestations`, `GET /eth/v1/beacon/pool/attestations`, `POST /eth/v1/beacon/pool/attestations`, `GET /eth/v1/beacon/pool/attester_slashings`, `POST /eth/v1/beacon/pool/attester_slashings`, `GET /eth/v1/validator/aggregate_attestation`, `POST /eth/v1/validator/aggregate_and_proofs`, `POST /eth/v1/beacon/blocks`, `POST /eth/v1/beacon/blinded_blocks`, `GET /eth/v1/builder/states/{state_id}/expected_withdrawals`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15962)
- Deprecated flag `--enable-optional-engine-methods` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-build-block-parallel` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-reorg-late-blocks` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-optional-engine-methods` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-aggregate-parallel` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-eip-4881` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-eip-4881` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-verbose-sig-verification` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-debug-rpc-endpoints` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--beacon-rpc-gateway-provider` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-grpc-gateway` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-experimental-state` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-committee-aware-packing` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--interop-genesis-time` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--interop-num-validators` has been removed (from beacon-chain only; still available in validator client). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-quic` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--attest-timely` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-experimental-state` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--p2p-metadata` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
### Fixed
- Remove `Reading static P2P private key from a file.` log if Fulu is enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15913)
- `blobSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15933)
- `dataColumnSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15933)
- Fix incorrect version used when sending attestation version in Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15950)
- Changed the behavior of topic subscriptions such that only topics that require the active validator count will compute that value. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15955)
- Added a Mutex to the computation of active validator count during topic subscription to avoid a race condition where multiple goroutines are computing the same work. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15955)
- `RODataColumnsVerifier.ValidProposerSignature`: Ensure the expensive signature verification is only performed once for concurrent requests for the same signature data. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15954)
- Use `filepath` for path operations (clean, join, etc.) to ensure correct behavior on Windows. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15953)
- Fix #15969: Handle addition overflow in `/eth/v1/beacon/rewards/attestations/{epoch}`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15970)
- `SidecarProposerExpected`: Add the slot in the single flight key. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15976)
- Ensure the rate limit is respected for by-root blob and data column sidecar requests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15981)
- Use head only if it's compatible with the target for attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15965)
- Backfill disabled if the checkpoint sync origin is after the Fulu fork, due to lack of DataColumnSidecar support in backfill. To track the availability of Fulu-compatible backfill, please watch https://github.com/OffchainLabs/prysm/issues/15982. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15987)
- `SidecarProposerExpected`: Use the correct value of proposer index in the singleflight group. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15993)
## [v6.1.4](https://github.com/prysmaticlabs/prysm/compare/v6.1.3...v6.1.4) - 2025-10-24
This release includes a bug fix affecting block proposals in rare cases, along with an important update for Windows users running post-Fusaka.
@@ -3820,4 +3986,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -205,6 +205,26 @@ prysm_image_deps()
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
# Override golang.org/x/tools to use v0.38.0 instead of v0.30.0
# This is necessary as this dependency is required by rules_go and they do not accept dependency
# update PRs. Instead, they ask downstream projects to override the dependency. To generate the
# patches or update this dependency again, check out the rules_go repo then run the releaser tool.
# bazel run //go/tools/releaser -- upgrade-dep -mirror=false org_golang_x_tools
# Copy the patches and http_archive updates from rules_go here.
http_archive(
name = "org_golang_x_tools",
patch_args = ["-p1"],
patches = [
"//third_party:org_golang_x_tools-deletegopls.patch",
"//third_party:org_golang_x_tools-gazelle.patch",
],
sha256 = "8509908cd7fc35aa09ff49d8494e4fd25bab9e6239fbf57e0d8344f6bec5802b",
strip_prefix = "tools-0.38.0",
urls = [
"https://github.com/golang/tools/archive/refs/tags/v0.38.0.zip",
],
)
go_rules_dependencies()
go_register_toolchains(

View File

@@ -56,7 +56,7 @@ func ParseAccept(header string) []mediaRange {
}
var out []mediaRange
for _, field := range strings.Split(header, ",") {
for field := range strings.SplitSeq(header, ",") {
if r, ok := parseMediaRange(field); ok {
out = append(out, r)
}

View File
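The change above swaps `strings.Split` for `strings.SplitSeq`, the Go 1.24 iterator variant that yields fields lazily instead of allocating an intermediate slice. A minimal sketch of the pattern:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	header := "text/html, application/json;q=0.9, */*;q=0.8"
	// SplitSeq returns an iter.Seq[string], so the comma-separated
	// fields are produced one at a time with no []string allocation.
	for field := range strings.SplitSeq(header, ",") {
		fmt.Println(strings.TrimSpace(field))
	}
}
```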

@@ -421,7 +421,7 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
func jsonValidatorRegisterRequest(svr []*ethpb.SignedValidatorRegistrationV1) ([]byte, error) {
vs := make([]*structs.SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
for i := range svr {
vs[i] = structs.SignedValidatorRegistrationFromConsensus(svr[i])
}
body, err := json.Marshal(vs)

View File

@@ -121,7 +121,7 @@ func (s *Uint64String) UnmarshalText(t []byte) error {
// MarshalText returns a byte representation of the text from Uint64String.
func (s Uint64String) MarshalText() ([]byte, error) {
return []byte(fmt.Sprintf("%d", s)), nil
return fmt.Appendf(nil, "%d", s), nil
}
// VersionResponse is a JSON representation of a field in the builder API header response.

View File
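`fmt.Appendf` (Go 1.19+) formats directly into a byte slice, so the rewrite above drops the intermediate string that `[]byte(fmt.Sprintf(...))` produced. A small sketch:

```go
package main

import "fmt"

func main() {
	var s uint64 = 12345
	// Appendf writes the formatted output into the provided buffer
	// (nil here), avoiding the string-to-[]byte conversion entirely.
	b := fmt.Appendf(nil, "%d", s)
	fmt.Println(string(b)) // 12345
}
```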

@@ -15,7 +15,7 @@ import (
func LogRequests(
ctx context.Context,
method string, req,
reply interface{},
reply any,
cc *grpc.ClientConn,
invoker grpc.UnaryInvoker,
opts ...grpc.CallOption,

View File

@@ -14,5 +14,5 @@ type GetForkScheduleResponse struct {
}
type GetSpecResponse struct {
Data interface{} `json:"data"`
Data any `json:"data"`
}

View File

@@ -93,9 +93,9 @@ func TestToggleMultipleTimes(t *testing.T) {
v := New()
pre := !v.IsSet()
for i := 0; i < 100; i++ {
for i := range 100 {
v.SetTo(false)
for j := 0; j < i; j++ {
for range i {
pre = v.Toggle()
}
@@ -149,7 +149,7 @@ func TestRace(t *testing.T) {
// Writer
go func() {
for i := 0; i < repeat; i++ {
for range repeat {
v.Set()
wg.Done()
}
@@ -157,7 +157,7 @@ func TestRace(t *testing.T) {
// Reader
go func() {
for i := 0; i < repeat; i++ {
for range repeat {
v.IsSet()
wg.Done()
}
@@ -165,7 +165,7 @@ func TestRace(t *testing.T) {
// Writer
go func() {
for i := 0; i < repeat; i++ {
for range repeat {
v.UnSet()
wg.Done()
}
@@ -173,7 +173,7 @@ func TestRace(t *testing.T) {
// Reader And Writer
go func() {
for i := 0; i < repeat; i++ {
for range repeat {
v.Toggle()
wg.Done()
}
@@ -198,8 +198,8 @@ func ExampleAtomicBool() {
func BenchmarkMutexRead(b *testing.B) {
var m sync.RWMutex
var v bool
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
m.RLock()
_ = v
m.RUnlock()
@@ -208,16 +208,16 @@ func BenchmarkMutexRead(b *testing.B) {
func BenchmarkAtomicValueRead(b *testing.B) {
var v atomic.Value
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = v.Load() != nil
}
}
func BenchmarkAtomicBoolRead(b *testing.B) {
v := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
_ = v.IsSet()
}
}
@@ -227,8 +227,8 @@ func BenchmarkAtomicBoolRead(b *testing.B) {
func BenchmarkMutexWrite(b *testing.B) {
var m sync.RWMutex
var v bool
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
m.RLock()
v = true
m.RUnlock()
@@ -239,16 +239,16 @@ func BenchmarkMutexWrite(b *testing.B) {
func BenchmarkAtomicValueWrite(b *testing.B) {
var v atomic.Value
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
v.Store(true)
}
}
func BenchmarkAtomicBoolWrite(b *testing.B) {
v := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
v.Set()
}
}
@@ -258,8 +258,8 @@ func BenchmarkAtomicBoolWrite(b *testing.B) {
func BenchmarkMutexCAS(b *testing.B) {
var m sync.RWMutex
var v bool
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
m.Lock()
if !v {
v = true
@@ -270,8 +270,8 @@ func BenchmarkMutexCAS(b *testing.B) {
func BenchmarkAtomicBoolCAS(b *testing.B) {
v := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
v.SetToIf(false, true)
}
}
@@ -281,8 +281,8 @@ func BenchmarkAtomicBoolCAS(b *testing.B) {
func BenchmarkMutexToggle(b *testing.B) {
var m sync.RWMutex
var v bool
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
m.Lock()
v = !v
m.Unlock()
@@ -291,8 +291,8 @@ func BenchmarkMutexToggle(b *testing.B) {
func BenchmarkAtomicBoolToggle(b *testing.B) {
v := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
v.Toggle()
}
}

View File
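These benchmarks adopt `testing.B.Loop` (Go 1.24), which replaces the classic `for i := 0; i < b.N; i++` counter and excludes setup before the loop from the measurement, which is why the explicit `b.ResetTimer()` calls are dropped. A minimal sketch of the idiom:

```go
package bench

import (
	"sync/atomic"
	"testing"
)

// BenchmarkAtomicLoad demonstrates the b.Loop idiom used above.
// Setup before the loop is excluded from timing automatically,
// so no b.ResetTimer() is needed.
func BenchmarkAtomicLoad(b *testing.B) {
	var v atomic.Bool
	v.Store(true)
	for b.Loop() {
		_ = v.Load()
	}
}
```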

@@ -21,7 +21,7 @@ const (
func init() {
input = make([][]byte, benchmarkElements)
for i := 0; i < benchmarkElements; i++ {
for i := range benchmarkElements {
input[i] = make([]byte, benchmarkElementSize)
_, err := rand.Read(input[i])
if err != nil {
@@ -35,7 +35,7 @@ func hash(input [][]byte) [][]byte {
output := make([][]byte, len(input))
for i := range input {
copy(output, input)
for j := 0; j < benchmarkHashRuns; j++ {
for range benchmarkHashRuns {
hash := sha256.Sum256(output[i])
output[i] = hash[:]
}
@@ -44,15 +44,15 @@ func hash(input [][]byte) [][]byte {
}
func BenchmarkHash(b *testing.B) {
for i := 0; i < b.N; i++ {
for b.Loop() {
hash(input)
}
}
func BenchmarkHashMP(b *testing.B) {
output := make([][]byte, len(input))
for i := 0; i < b.N; i++ {
workerResults, err := async.Scatter(len(input), func(offset int, entries int, _ *sync.RWMutex) (interface{}, error) {
for b.Loop() {
workerResults, err := async.Scatter(len(input), func(offset int, entries int, _ *sync.RWMutex) (any, error) {
return hash(input[offset : offset+entries]), nil
})
require.NoError(b, err)

View File

@@ -7,7 +7,7 @@ import (
// Debounce events fired over a channel by a specified duration, ensuring no events
// are handled until a certain interval of time has passed.
func Debounce(ctx context.Context, interval time.Duration, eventsChan <-chan interface{}, handler func(interface{})) {
func Debounce(ctx context.Context, interval time.Duration, eventsChan <-chan any, handler func(any)) {
var timer *time.Timer
defer func() {
if timer != nil {

View File
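A usage sketch for `Debounce`, mirroring the single-invocation test in the next file; it assumes, as those tests do, that `Debounce` returns once the context is cancelled:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/OffchainLabs/prysm/v7/async"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	events := make(chan any, 100)
	// Fire a rapid burst; Debounce coalesces it so the handler runs
	// once per quiet interval rather than once per event.
	for range 100 {
		events <- struct{}{}
	}
	async.Debounce(ctx, time.Second, events, func(event any) {
		fmt.Println("handled after the burst settled")
	})
}
```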

@@ -14,7 +14,7 @@ import (
)
func TestDebounce_NoEvents(t *testing.T) {
eventsChan := make(chan interface{}, 100)
eventsChan := make(chan any, 100)
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
@@ -26,7 +26,7 @@ func TestDebounce_NoEvents(t *testing.T) {
})
}()
go func() {
async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
async.Debounce(ctx, interval, eventsChan, func(event any) {
atomic.AddInt32(&timesHandled, 1)
})
wg.Done()
@@ -38,7 +38,7 @@ func TestDebounce_NoEvents(t *testing.T) {
}
func TestDebounce_CtxClosing(t *testing.T) {
eventsChan := make(chan interface{}, 100)
eventsChan := make(chan any, 100)
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
@@ -62,7 +62,7 @@ func TestDebounce_CtxClosing(t *testing.T) {
})
}()
go func() {
async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
async.Debounce(ctx, interval, eventsChan, func(event any) {
atomic.AddInt32(&timesHandled, 1)
})
wg.Done()
@@ -74,14 +74,14 @@ func TestDebounce_CtxClosing(t *testing.T) {
}
func TestDebounce_SingleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
eventsChan := make(chan any, 100)
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
go async.Debounce(ctx, interval, eventsChan, func(event any) {
atomic.AddInt32(&timesHandled, 1)
})
for i := 0; i < 100; i++ {
for range 100 {
eventsChan <- struct{}{}
}
// We should expect 100 rapid fire changes to only have caused
@@ -92,14 +92,14 @@ func TestDebounce_SingleHandlerInvocation(t *testing.T) {
}
func TestDebounce_MultipleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
eventsChan := make(chan any, 100)
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
go async.Debounce(ctx, interval, eventsChan, func(event any) {
atomic.AddInt32(&timesHandled, 1)
})
for i := 0; i < 100; i++ {
for range 100 {
eventsChan <- struct{}{}
}
require.Equal(t, int32(0), atomic.LoadInt32(&timesHandled), "Events must prevent from handler execution")

View File

@@ -93,9 +93,7 @@ func ExampleSubscriptionScope() {
// Run a subscriber in the background.
divsub := app.SubscribeResults('/', divs)
mulsub := app.SubscribeResults('*', muls)
wg.Add(1)
go func() {
defer wg.Done()
wg.Go(func() {
defer fmt.Println("subscriber exited")
defer divsub.Unsubscribe()
defer mulsub.Unsubscribe()
@@ -111,7 +109,7 @@ func ExampleSubscriptionScope() {
return
}
}
}()
})
// Interact with the app.
app.Calc('/', 22, 11)

View File
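This rewrite, and the similar ones in the files below, uses `sync.WaitGroup.Go` from Go 1.25, which bundles the `Add(1)` / `go func()` / `Done()` boilerplate into a single call:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	// WaitGroup.Go increments the counter, runs f in a new
	// goroutine, and calls Done when f returns.
	for i := range 3 {
		wg.Go(func() {
			fmt.Println("worker", i)
		})
	}
	wg.Wait()
}
```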

@@ -26,7 +26,7 @@ func ExampleNewSubscription() {
// Create a subscription that sends 10 integers on ch.
ch := make(chan int)
sub := event.NewSubscription(func(quit <-chan struct{}) error {
for i := 0; i < 10; i++ {
for i := range 10 {
select {
case ch <- i:
case <-quit:

View File

@@ -3,6 +3,6 @@ package event
// SubscriberSender is an abstract representation of an *event.Feed
// to use in describing types that accept or return an *event.Feed.
type SubscriberSender interface {
Subscribe(channel interface{}) Subscription
Send(value interface{}) (nsent int)
Subscribe(channel any) Subscription
Send(value any) (nsent int)
}

View File

@@ -30,7 +30,7 @@ var errInts = errors.New("error in subscribeInts")
func subscribeInts(max, fail int, c chan<- int) Subscription {
return NewSubscription(func(quit <-chan struct{}) error {
for i := 0; i < max; i++ {
for i := range max {
if i >= fail {
return errInts
}
@@ -50,7 +50,7 @@ func TestNewSubscriptionError(t *testing.T) {
channel := make(chan int)
sub := subscribeInts(10, 2, channel)
loop:
for want := 0; want < 10; want++ {
for want := range 10 {
select {
case got := <-channel:
require.Equal(t, want, got)

View File

@@ -107,15 +107,13 @@ func TestLockUnlock(_ *testing.T) {
func TestLockUnlock_CleansUnused(t *testing.T) {
var wg sync.WaitGroup
wg.Add(1)
go func() {
wg.Go(func() {
lock := NewMultilock("dog", "cat", "owl")
lock.Lock()
assert.Equal(t, 3, len(locks.list))
lock.Unlock()
wg.Done()
}()
})
wg.Wait()
// We expect that unlocking completely cleared the locks list
// given all 3 lock keys were unused at time of unlock.

View File

@@ -9,14 +9,14 @@ import (
// WorkerResults are the results of a scatter worker.
type WorkerResults struct {
Offset int
Extent interface{}
Extent any
}
// Scatter scatters a computation across multiple goroutines.
// This breaks the task in to a number of chunks and executes those chunks in parallel with the function provided.
// Results returned are collected and presented as a set of WorkerResults, which can be reassembled by the calling function.
// Any error that occurs in the workers will be passed back to the calling function.
func Scatter(inputLen int, sFunc func(int, int, *sync.RWMutex) (interface{}, error)) ([]*WorkerResults, error) {
func Scatter(inputLen int, sFunc func(int, int, *sync.RWMutex) (any, error)) ([]*WorkerResults, error) {
if inputLen <= 0 {
return nil, errors.New("input length must be greater than 0")
}

View File
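A self-contained usage sketch for `Scatter` with its new `any`-based signature, mirroring the `TestDouble` test in the next file (the import path follows the v7 module path used elsewhere in this diff):

```go
package main

import (
	"fmt"
	"sync"

	"github.com/OffchainLabs/prysm/v7/async"
)

func main() {
	in := make([]int, 1000)
	for i := range in {
		in[i] = i
	}
	out := make([]int, len(in))
	// Each worker doubles its chunk; WorkerResults carry the chunk
	// offset so the caller can reassemble the output in order.
	results, err := async.Scatter(len(in), func(offset, entries int, _ *sync.RWMutex) (any, error) {
		chunk := make([]int, entries)
		for i := range entries {
			chunk[i] = in[offset+i] * 2
		}
		return chunk, nil
	})
	if err != nil {
		panic(err)
	}
	for _, r := range results {
		copy(out[r.Offset:], r.Extent.([]int))
	}
	fmt.Println(out[:5]) // [0 2 4 6 8]
}
```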

@@ -46,9 +46,9 @@ func TestDouble(t *testing.T) {
inValues[i] = i
}
outValues := make([]int, test.inValues)
workerResults, err := async.Scatter(len(inValues), func(offset int, entries int, _ *sync.RWMutex) (interface{}, error) {
workerResults, err := async.Scatter(len(inValues), func(offset int, entries int, _ *sync.RWMutex) (any, error) {
extent := make([]int, entries)
for i := 0; i < entries; i++ {
for i := range entries {
extent[i] = inValues[offset+i] * 2
}
return extent, nil
@@ -72,8 +72,8 @@ func TestDouble(t *testing.T) {
func TestMutex(t *testing.T) {
totalRuns := 1048576
val := 0
_, err := async.Scatter(totalRuns, func(offset int, entries int, mu *sync.RWMutex) (interface{}, error) {
for i := 0; i < entries; i++ {
_, err := async.Scatter(totalRuns, func(offset int, entries int, mu *sync.RWMutex) (any, error) {
for range entries {
mu.Lock()
val++
mu.Unlock()
@@ -90,8 +90,8 @@ func TestMutex(t *testing.T) {
func TestError(t *testing.T) {
totalRuns := 1024
val := 0
_, err := async.Scatter(totalRuns, func(offset int, entries int, mu *sync.RWMutex) (interface{}, error) {
for i := 0; i < entries; i++ {
_, err := async.Scatter(totalRuns, func(offset int, entries int, mu *sync.RWMutex) (any, error) {
for range entries {
mu.Lock()
val++
if val == 1011 {

View File

@@ -70,7 +70,7 @@ func TestVerifyBlobKZGProofBatch(t *testing.T) {
commitments := make([][]byte, blobCount)
proofs := make([][]byte, blobCount)
for i := 0; i < blobCount; i++ {
for i := range blobCount {
blob := random.GetRandBlob(int64(i))
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
@@ -432,8 +432,8 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
commitments[1] = make([]byte, 32) // Wrong size
// Add cell proofs for both blobs
for i := 0; i < blobCount; i++ {
for j := uint64(0); j < numberOfColumns; j++ {
for range blobCount {
for range numberOfColumns {
allCellProofs = append(allCellProofs, make([]byte, 48))
}
}
@@ -450,7 +450,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
for i := 0; i < blobCount; i++ {
for i := range blobCount {
randBlob := random.GetRandBlob(int64(i))
var blob Blob
copy(blob[:], randBlob[:])
@@ -461,7 +461,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
commitments[i] = commitment[:]
// Add cell proofs - make some invalid in the second blob
for j := uint64(0); j < numberOfColumns; j++ {
for j := range numberOfColumns {
if i == 1 && j == 64 {
// Invalid proof size in middle of second blob's proofs
allCellProofs = append(allCellProofs, make([]byte, 20))

View File
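Several of these loops range directly over an integer count, the Go 1.22 form that replaces a manual counter; it works for unsigned types like the `uint64` column count too:

```go
package main

import "fmt"

func main() {
	const blobCount = 2
	var numberOfColumns uint64 = 4
	// range over an integer iterates 0..n-1; j takes the integer's
	// own type (uint64 here), so no conversion is needed.
	for i := range blobCount {
		for j := range numberOfColumns {
			fmt.Println(i, j)
		}
	}
}
```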

@@ -22,10 +22,7 @@ import (
// The caller of this function must have a lock on forkchoice.
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch {
return nil
}
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
if c.Epoch < headEpoch || c.Epoch == 0 {
return nil
}
// Only use head state if the head state is compatible with the target checkpoint.
@@ -33,11 +30,11 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
if err != nil {
return nil
}
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch)
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch-1)
if err != nil {
return nil
}
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch)
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch-1)
if err != nil {
return nil
}
@@ -53,7 +50,11 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
}
return st
}
// Otherwise we need to advance the head state to the start of the target epoch.
// At this point we can only have c.Epoch > headEpoch.
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
// Advance the head state to the start of the target epoch.
// This point can only be reached if c.Root == headRoot and c.Epoch > headEpoch.
slot, err := slots.EpochStart(c.Epoch)
if err != nil {

View File
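The core of the fix above is the head/target compatibility test: both roots must yield the same dependent root when queried at `c.Epoch-1`. A condensed, hypothetical restatement (the interface below is a narrowed stand-in for Prysm's fork-choice store, not its real API):

```go
package sketch

// Epoch stands in for primitives.Epoch in this self-contained sketch.
type Epoch uint64

// dependentRooter is a hypothetical narrow view of the fork-choice
// store, exposing only the call used by getRecentPreState above.
type dependentRooter interface {
	DependentRootForEpoch(root [32]byte, epoch Epoch) ([32]byte, error)
}

// headCompatibleWithTarget mirrors the check in the diff: the head
// state may serve as pre-state for an attestation with target
// checkpoint (targetRoot, epoch) only if head and target agree on
// the dependent root queried at epoch-1.
func headCompatibleWithTarget(fc dependentRooter, headRoot, targetRoot [32]byte, epoch Epoch) (bool, error) {
	headDep, err := fc.DependentRootForEpoch(headRoot, epoch-1)
	if err != nil {
		return false, err
	}
	targetDep, err := fc.DependentRootForEpoch(targetRoot, epoch-1)
	if err != nil {
		return false, err
	}
	return headDep == targetDep, nil
}
```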

@@ -181,6 +181,123 @@ func TestService_GetRecentPreState(t *testing.T) {
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
}
func TestService_GetRecentPreState_Epoch_0(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Old_Checkpoint(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'U'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 30 <-- 31 <-- 32 <--- 64
// \---------33
// With different dependent roots for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 30, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 31, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'U'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'V'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
@@ -209,16 +326,14 @@ func TestService_GetAttPreState_Concurrency(t *testing.T) {
var wg sync.WaitGroup
errChan := make(chan error, 1000)
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for range 1000 {
wg.Go(func() {
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}
_, err := service.getAttPreState(ctx, cp1)
if err != nil {
errChan <- err
}
}()
})
}
go func() {

View File

@@ -134,7 +134,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -306,7 +306,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityChecker, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
@@ -634,9 +634,7 @@ func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldpa
return nil, nil
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(expected)) > numberOfColumns {
if len(expected) > fieldparams.NumberOfColumns {
return nil, errMaxDataColumnsExceeded
}
@@ -817,11 +815,10 @@ func (s *Service) areDataColumnsAvailable(
}
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missing))
var missingIndices any = "all"
missingIndicesCount := len(missing)
if missingIndicesCount < numberOfColumns {
if missingIndicesCount < fieldparams.NumberOfColumns {
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}
@@ -948,13 +945,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).WithField("head_root", headRoot).Error("Unable to retrieve head block to fire payload attributes event")
}
// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
// call notifyForkchoiceUpdate, so the event is fired here.
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), headBlock, headRoot, s.CurrentSlot()+1)
return
}

View File

@@ -147,7 +147,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
bState := st.Copy()
var blks []consensusblocks.ROBlock
for i := 0; i < 97; i++ {
for i := range 97 {
b, err := util.GenerateFullBlock(bState, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
@@ -1323,7 +1323,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
require.NoError(t, err)
logHook := logTest.NewGlobal()
for i := 0; i < 10; i++ {
for range 10 {
fc := &ethpb.Checkpoint{}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, fc, fc)
require.NoError(t, err)
@@ -1949,7 +1949,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.Equal(t, true, optimistic)
// Check that the invalid blocks are not in database
for i := 0; i < 19-13; i++ {
for i := range 19 - 13 {
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, invalidRoots[i]))
}
@@ -2495,7 +2495,8 @@ func TestMissingBlobIndices(t *testing.T) {
}
func TestMissingDataColumnIndices(t *testing.T) {
countPlusOne := params.BeaconConfig().NumberOfColumns + 1
const countPlusOne = fieldparams.NumberOfColumns + 1
tooManyColumns := make(map[uint64]bool, countPlusOne)
for i := range countPlusOne {
tooManyColumns[uint64(i)] = true
@@ -2805,6 +2806,10 @@ func TestProcessLightClientUpdate(t *testing.T) {
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
@@ -2879,7 +2884,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
// set a better sync aggregate
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
for i := range 5 {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{

View File

@@ -39,8 +39,8 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -69,7 +69,7 @@ type SlashingReceiver interface {
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error {
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
// Return early if the block is blacklisted
@@ -242,7 +242,7 @@ func (s *Service) validateExecutionAndConsensus(
return postState, isValidPayload, nil
}
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block blocks.ROBlock) (time.Duration, error) {
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityChecker, block blocks.ROBlock) (time.Duration, error) {
var err error
start := time.Now()
if avs != nil {
@@ -332,7 +332,7 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear ,transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()

View File

@@ -216,13 +216,11 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
wg.Go(func() {
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
wg.Done()
}()
})
wg.Wait()
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
@@ -470,30 +471,35 @@ func (s *Service) removeStartupState() {
// UpdateCustodyInfoInDB updates the custody information in the database.
// It returns the (potentially updated) custody group count and the earliest available slot.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
isSupernode := flags.Get().Supernode
isSemiSupernode := flags.Get().SemiSupernode
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
wasSubscribedToAllDataSubnets, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSubscribedToAllDataSubnets)
wasSupernode, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
if err != nil {
log.WithError(err).Error("Could not update subscription status to all data subnets")
return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
}
// Warn the user if the node was previously subscribed to all data subnets and is not any more.
if wasSubscribedToAllDataSubnets && !isSubscribedToAllDataSubnets {
log.Warnf(
"Because the flag `--%s` was previously used, the node will still subscribe to all data subnets.",
flags.SubscribeAllDataSubnets.Name,
)
// Compute the target custody group count based on current flag configuration.
targetCustodyGroupCount := custodyRequirement
// Supernode: custody all groups (either currently set or previously enabled)
if isSupernode {
targetCustodyGroupCount = cfg.NumberOfCustodyGroups
}
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = cfg.NumberOfCustodyGroups
// Semi-supernode: custody minimum needed for reconstruction, or custody requirement if higher
if isSemiSupernode {
semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
if err != nil {
return 0, 0, errors.Wrap(err, "minimum custody group count")
}
targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
}
// Safely compute the fulu fork slot.
@@ -510,12 +516,23 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
}
}
earliestAvailableSlot, custodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, custodyGroupCount)
earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}
return earliestAvailableSlot, custodyGroupCount, nil
if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,
"target": cfg.NumberOfCustodyGroups,
}).Info("Supernode mode enabled. Will custody all data columns going forward.")
}
if wasSupernode && !isSupernode {
log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
}
return earliestAvailableSlot, actualCustodyGroupCount, nil
}
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {

View File
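A condensed restatement of the target custody computation in `updateCustodyInfoInDB` above; the constants are illustrative stand-ins for `cfg.CustodyRequirement` and `cfg.NumberOfCustodyGroups`, and `minToReconstruct` stands in for `peerdas.MinimumCustodyGroupCountToReconstruct()`:

```go
package sketch

const (
	custodyRequirement    uint64 = 4   // stand-in for cfg.CustodyRequirement
	numberOfCustodyGroups uint64 = 128 // stand-in for cfg.NumberOfCustodyGroups
)

// targetCustodyGroups mirrors the flag handling above: supernodes
// custody every group; semi-supernodes custody at least the minimum
// needed to reconstruct (>= 50% of columns), never less than the
// base requirement; everyone else custodies the base requirement.
func targetCustodyGroups(supernode, semiSupernode bool, minToReconstruct uint64) uint64 {
	if supernode {
		return numberOfCustodyGroups
	}
	if semiSupernode {
		return max(custodyRequirement, minToReconstruct)
	}
	return custodyRequirement
}
```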

@@ -412,8 +412,7 @@ func BenchmarkHasBlockDB(b *testing.B) {
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
require.Equal(b, true, s.cfg.BeaconDB.HasBlock(ctx, r), "Block is not in DB")
}
}
@@ -432,8 +431,7 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
require.NoError(b, err)
require.NoError(b, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, roblock))
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
require.Equal(b, true, s.cfg.ForkChoiceStore.HasNode(r), "Block is not in fork choice store")
}
}
@@ -605,7 +603,6 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
@@ -613,7 +610,6 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
@@ -644,7 +640,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
@@ -682,7 +678,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
@@ -697,4 +693,121 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("Supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// Try to downgrade by removing flag
gFlags.Supernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should still be supernode
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
})
t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Try to downgrade by removing flag
gFlags.SemiSupernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// UpdateCustodyInfo should prevent downgrade - custody count should remain at 64
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, semiSupernodeCustody, actualCgc) // Still 64 due to downgrade prevention by UpdateCustodyInfo
})
t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Start with semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Upgrade to full supernode
gFlags.SemiSupernode = false
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should upgrade to full supernode
upgradeSlot := slot + 2
actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
require.NoError(t, err)
require.Equal(t, upgradeSlot, actualEas) // Earliest slot updates when upgrading
require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to 128
})
t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Mock a high custody requirement (simulating many validators)
// We need to override the custody requirement calculation
// For this test, we'll verify the logic by checking if custodyRequirement > 64
// Since custodyRequirement in minimalTestService is 4, we can't test the high case here
// This would require a different test setup with actual validators
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
// With low validator requirements (4), should use semi-supernode minimum (64)
require.Equal(t, semiSupernodeCustody, actualCgc)
})
}

View File

@@ -106,7 +106,7 @@ type EventFeedWrapper struct {
subscribed chan struct{} // this channel is closed once a subscription is made
}
func (w *EventFeedWrapper) Subscribe(channel interface{}) event.Subscription {
func (w *EventFeedWrapper) Subscribe(channel any) event.Subscription {
select {
case <-w.subscribed:
break // already closed
@@ -116,7 +116,7 @@ func (w *EventFeedWrapper) Subscribe(channel interface{}) event.Subscription {
return w.feed.Subscribe(channel)
}
func (w *EventFeedWrapper) Send(value interface{}) int {
func (w *EventFeedWrapper) Send(value any) int {
return w.feed.Send(value)
}
@@ -275,7 +275,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityStore) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityChecker) error {
if s.State == nil {
return ErrNilState
}
@@ -305,7 +305,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityStore) error {
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityChecker) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}

View File

@@ -166,7 +166,7 @@ func (s *Service) RegisterValidator(ctx context.Context, reg []*ethpb.SignedVali
indexToRegistration := make(map[primitives.ValidatorIndex]*ethpb.ValidatorRegistrationV1)
valid := make([]*ethpb.SignedValidatorRegistrationV1, 0)
for i := 0; i < len(reg); i++ {
for i := range reg {
r := reg[i]
nx, exists := s.cfg.headFetcher.HeadPublicKeyToValidatorIndex(bytesutil.ToBytes48(r.Message.Pubkey))
if !exists {

View File

@@ -17,7 +17,7 @@ import (
func TestBalanceCache_AddGetBalance(t *testing.T) {
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot)
for i := 0; i < len(blockRoots); i++ {
for i := range blockRoots {
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, uint64(i))
blockRoots[i] = b
@@ -61,7 +61,7 @@ func TestBalanceCache_AddGetBalance(t *testing.T) {
func TestBalanceCache_BalanceKey(t *testing.T) {
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot)
for i := 0; i < len(blockRoots); i++ {
for i := range blockRoots {
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, uint64(i))
blockRoots[i] = b

View File

@@ -51,7 +51,7 @@ type CommitteeCache struct {
}
// committeeKeyFn takes the seed as the key to retrieve shuffled indices of a committee in a given epoch.
func committeeKeyFn(obj interface{}) (string, error) {
func committeeKeyFn(obj any) (string, error) {
info, ok := obj.(*Committees)
if !ok {
return "", ErrNotCommittee

View File

@@ -14,7 +14,7 @@ func TestCommitteeKeyFuzz_OK(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
c := &Committees{}
for i := 0; i < 100000; i++ {
for range 100000 {
fuzzer.Fuzz(c)
k, err := committeeKeyFn(c)
require.NoError(t, err)
@@ -27,7 +27,7 @@ func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
c := &Committees{}
for i := 0; i < 100000; i++ {
for range 100000 {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))
_, err := cache.Committee(t.Context(), 0, c.Seed, 0)
@@ -42,7 +42,7 @@ func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
c := &Committees{}
for i := 0; i < 100000; i++ {
for range 100000 {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))

View File

@@ -17,6 +17,6 @@ func trim(queue *cache.FIFO, maxSize uint64) {
}
// popProcessNoopFunc is a no-op function that never returns an error.
func popProcessNoopFunc(_ interface{}, _ bool) error {
func popProcessNoopFunc(_ any, _ bool) error {
return nil
}

View File

@@ -769,7 +769,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
}
var ctrs []*ethpb.DepositContainer
for i := 0; i < 2000; i++ {
for i := range 2000 {
ctrs = append(ctrs, generateCtr(uint64(10+(i/2)), int64(i)))
}
@@ -948,9 +948,9 @@ func rootCreator(rn byte) []byte {
func BenchmarkDepositTree_InsertNewImplementation(b *testing.B) {
totalDeposits := 10000
input := bytesutil.ToBytes32([]byte("foo"))
for i := 0; i < b.N; i++ {
for b.Loop() {
dt := NewDepositTree()
for j := 0; j < totalDeposits; j++ {
for range totalDeposits {
err := dt.Insert(input[:], 0)
require.NoError(b, err)
}
@@ -959,10 +959,10 @@ func BenchmarkDepositTree_InsertNewImplementation(b *testing.B) {
func BenchmarkDepositTree_InsertOldImplementation(b *testing.B) {
totalDeposits := 10000
input := bytesutil.ToBytes32([]byte("foo"))
for i := 0; i < b.N; i++ {
for b.Loop() {
dt, err := trie.NewTrie(33)
require.NoError(b, err)
for j := 0; j < totalDeposits; j++ {
for range totalDeposits {
err := dt.Insert(input[:], 0)
require.NoError(b, err)
}
@@ -980,8 +980,8 @@ func BenchmarkDepositTree_HashTreeRootNewImplementation(b *testing.B) {
}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
_, err = tr.HashTreeRoot()
require.NoError(b, err)
}
@@ -999,8 +999,8 @@ func BenchmarkDepositTree_HashTreeRootOldImplementation(b *testing.B) {
}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
_, err = dt.HashTreeRoot()
require.NoError(b, err)
}
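
The benchmark rewrites above move from the classic b.N counting loop to Go 1.24's b.Loop, which manages the iteration count itself and excludes setup before the loop from the measurement, making the surrounding b.ResetTimer calls largely redundant (though harmless). A minimal example:

package bench

import "testing"

func BenchmarkSum(b *testing.B) {
	data := make([]int, 1024) // setup; not timed once b.Loop is used
	for b.Loop() { // replaces: for i := 0; i < b.N; i++
		total := 0
		for _, v := range data {
			total += v
		}
		_ = total
	}
}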

View File

@@ -20,7 +20,7 @@ func (ds *DepositTreeSnapshot) CalculateRoot() ([32]byte, error) {
size := ds.depositCount
index := len(ds.finalized)
root := trie.ZeroHashes[0]
for i := 0; i < DepositContractDepth; i++ {
for i := range DepositContractDepth {
if (size & 1) == 1 {
if index == 0 {
break

View File

@@ -47,15 +47,13 @@ func TestSkipSlotCache_DisabledAndEnabled(t *testing.T) {
c.Enable()
wg := new(sync.WaitGroup)
wg.Add(1)
go func() {
wg.Go(func() {
// The Get call will only terminate when
// it is no longer in progress.
obj, err := c.Get(ctx, r)
require.NoError(t, err)
require.IsNil(t, obj)
wg.Done()
}()
})
c.MarkNotInProgress(r)
wg.Wait()
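
The test above adopts sync.WaitGroup.Go, added in Go 1.25, which bundles Add(1), the goroutine launch, and the deferred Done into one call and so removes the classic forgotten-Done deadlock. Illustration:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	results := make([]int, 4)
	for i := range 4 {
		wg.Go(func() { // equivalent to wg.Add(1); go func() { defer wg.Done(); ... }()
			results[i] = i * i // since Go 1.22 each iteration gets its own i
		})
	}
	wg.Wait()
	fmt.Println(results) // [0 1 4 9]
}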

View File

@@ -236,7 +236,7 @@ func (s *SyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoo
// Given the `syncCommitteeIndexPosition` object, this returns the key of the object.
// The key is the `currentSyncCommitteeRoot` within the field.
// Error gets returned if input does not comply with `currentSyncCommitteeRoot` object.
func keyFn(obj interface{}) (string, error) {
func keyFn(obj any) (string, error) {
info, ok := obj.(*syncCommitteeIndexPosition)
if !ok {
return "", errNotSyncCommitteeIndexPosition

View File

@@ -12,12 +12,12 @@ import (
func TestSyncSubnetIDsCache_Roundtrip(t *testing.T) {
c := newSyncSubnetIDs()
for i := 0; i < 20; i++ {
for i := range 20 {
pubkey := [fieldparams.BLSPubkeyLength]byte{byte(i)}
c.AddSyncCommitteeSubnets(pubkey[:], 100, []uint64{uint64(i)}, 0)
}
for i := uint64(0); i < 20; i++ {
for i := range uint64(20) {
pubkey := [fieldparams.BLSPubkeyLength]byte{byte(i)}
idxs, _, ok, _ := c.GetSyncCommitteeSubnets(pubkey[:], 100)
@@ -34,7 +34,7 @@ func TestSyncSubnetIDsCache_Roundtrip(t *testing.T) {
func TestSyncSubnetIDsCache_ValidateCurrentEpoch(t *testing.T) {
c := newSyncSubnetIDs()
for i := 0; i < 20; i++ {
for i := range 20 {
pubkey := [fieldparams.BLSPubkeyLength]byte{byte(i)}
c.AddSyncCommitteeSubnets(pubkey[:], 100, []uint64{uint64(i)}, 0)
}
@@ -42,7 +42,7 @@ func TestSyncSubnetIDsCache_ValidateCurrentEpoch(t *testing.T) {
coms := c.GetAllSubnets(50)
assert.Equal(t, 0, len(coms))
for i := uint64(0); i < 20; i++ {
for i := range uint64(20) {
pubkey := [fieldparams.BLSPubkeyLength]byte{byte(i)}
_, jEpoch, ok, _ := c.GetSyncCommitteeSubnets(pubkey[:], 100)

View File

@@ -461,7 +461,7 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
st := &ethpb.BeaconStateAltair{}
b := &ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{}}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(st)
fuzzer.Fuzz(b)
if b.Block == nil {

View File

@@ -240,7 +240,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
for i := 0; i < len(syncBits); i++ {
for i := range syncBits {
if syncBits.BitAt(uint64(i)) {
pk := bytesutil.ToBytes48(committeeKeys[i])
require.DeepEqual(t, true, votedMap[pk])

View File

@@ -195,10 +195,7 @@ func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdr
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// )
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
effectiveBalance := min(params.BeaconConfig().MaxEffectiveBalance, amount-(amount%params.BeaconConfig().EffectiveBalanceIncrement))
return &ethpb.Validator{
PublicKey: pubKey,

View File

@@ -16,7 +16,7 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
state := &ethpb.BeaconStateAltair{}
deposits := make([]*ethpb.Deposit, 100)
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
for i := range deposits {
fuzzer.Fuzz(deposits[i])
@@ -37,7 +37,7 @@ func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
deposit := &ethpb.Deposit{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeAltair(state)
@@ -56,7 +56,7 @@ func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
deposit := &ethpb.Deposit{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -74,7 +74,7 @@ func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -92,7 +92,7 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeAltair(state)

View File

@@ -122,11 +122,8 @@ func ProcessInactivityScores(
}
if !helpers.IsInInactivityLeak(prevEpoch, finalizedEpoch) {
score := recoveryRate
// Prevents underflow below 0.
if score > v.InactivityScore {
score = v.InactivityScore
}
score := min(recoveryRate, v.InactivityScore)
v.InactivityScore -= score
}
inactivityScores[i] = v.InactivityScore
@@ -242,7 +239,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
}
balances := beaconState.Balances()
for i := 0; i < numOfVals; i++ {
for i := range numOfVals {
vals[i].BeforeEpochTransitionBalance = balances[i]
// Compute the post balance of the validator after accounting for the

View File

@@ -21,7 +21,7 @@ import (
func TestSyncCommitteeIndices_CanGet(t *testing.T) {
getState := func(t *testing.T, count uint64, vers int) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MinDepositAmount,
@@ -113,7 +113,7 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
helpers.ClearCache()
getState := func(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MinDepositAmount,
@@ -147,7 +147,7 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
func TestSyncCommittee_CanGet(t *testing.T) {
getState := func(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
for i := range validators {
blsKey, err := bls.RandKey()
require.NoError(t, err)
validators[i] = &ethpb.Validator{
@@ -394,7 +394,7 @@ func Test_ValidateSyncMessageTime(t *testing.T) {
func getState(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
for i := range validators {
blsKey, err := bls.RandKey()
require.NoError(t, err)
validators[i] = &ethpb.Validator{

View File

@@ -33,7 +33,7 @@ func TestTranslateParticipation(t *testing.T) {
r, err := helpers.BlockRootAtSlot(s, 0)
require.NoError(t, err)
var pendingAtts []*ethpb.PendingAttestation
for i := 0; i < 3; i++ {
for i := range 3 {
pendingAtts = append(pendingAtts, &ethpb.PendingAttestation{
Data: &ethpb.AttestationData{
CommitteeIndex: primitives.CommitteeIndex(i),

View File

@@ -257,7 +257,7 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBea
}
indices := indexedAtt.GetAttestingIndices()
var pubkeys []bls.PublicKey
for i := 0; i < len(indices); i++ {
for i := range indices {
pubkeyAtIdx := beaconState.PubkeyAtIndex(primitives.ValidatorIndex(indices[i]))
pk, err := bls.PublicKeyFromBytes(pubkeyAtIdx[:])
if err != nil {

View File

@@ -317,7 +317,7 @@ func TestVerifyAttestationNoVerifySignature_Electra(t *testing.T) {
func TestConvertToIndexed_OK(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, 2*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -373,7 +373,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
require.NoError(t, err)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: keys[i].PublicKey().Marshal(),
@@ -481,7 +481,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
sig := keys[0].Sign([]byte{'t', 'e', 's', 't'})
list := bitfield.Bitlist{0b11111}
var atts []ethpb.Att
for i := uint64(0); i < 1000; i++ {
for range uint64(1000) {
atts = append(atts, &ethpb.Attestation{
Data: &ethpb.AttestationData{
CommitteeIndex: 1,
@@ -498,7 +498,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
atts = []ethpb.Att{}
list = bitfield.Bitlist{0b10000}
for i := uint64(0); i < 1000; i++ {
for range uint64(1000) {
atts = append(atts, &ethpb.Attestation{
Data: &ethpb.AttestationData{
CommitteeIndex: 1,
@@ -524,7 +524,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
require.NoError(t, err)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: keys[i].PublicKey().Marshal(),
@@ -588,7 +588,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
require.NoError(t, err)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: keys[i].PublicKey().Marshal(),
@@ -707,7 +707,7 @@ func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
require.NoError(t, err)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: keys[i].PublicKey().Marshal(),

View File

@@ -21,7 +21,7 @@ func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
state := &ethpb.BeaconState{}
att := &ethpb.Attestation{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(att)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -37,7 +37,7 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
state := &ethpb.BeaconState{}
block := &ethpb.SignedBeaconBlock{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(block)
@@ -63,7 +63,7 @@ func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
var p []byte
var s []byte
var d []byte
for i := 0; i < 10000; i++ {
for range 10000 {
fuzzer.Fuzz(&ba)
fuzzer.Fuzz(&pubkey)
fuzzer.Fuzz(&sig)
@@ -83,7 +83,7 @@ func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
e := &ethpb.Eth1Data{}
state, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
for i := 0; i < 10000; i++ {
for range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
s, err := ProcessEth1DataInBlock(t.Context(), state, e)
@@ -98,7 +98,7 @@ func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
eth1data := &ethpb.Eth1Data{}
eth1data2 := &ethpb.Eth1Data{}
for i := 0; i < 10000; i++ {
for range 10000 {
fuzzer.Fuzz(eth1data)
fuzzer.Fuzz(eth1data2)
AreEth1DataEqual(eth1data, eth1data2)
@@ -110,7 +110,7 @@ func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
eth1data := &ethpb.Eth1Data{}
var stateVotes []*ethpb.Eth1Data
for i := 0; i < 100000; i++ {
for i := range 100000 {
fuzzer.Fuzz(eth1data)
fuzzer.Fuzz(&stateVotes)
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
@@ -129,7 +129,7 @@ func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
state := &ethpb.BeaconState{}
block := &ethpb.BeaconBlock{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(block)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -145,7 +145,7 @@ func TestFuzzProcessRandao_10000(t *testing.T) {
state := &ethpb.BeaconState{}
b := &ethpb.SignedBeaconBlock{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(b)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -168,7 +168,7 @@ func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
state := &ethpb.BeaconState{}
blockBody := &ethpb.BeaconBlockBody{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(blockBody)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -186,7 +186,7 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
state := &ethpb.BeaconState{}
p := &ethpb.ProposerSlashing{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(p)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -203,7 +203,7 @@ func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
proposerSlashing := &ethpb.ProposerSlashing{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(proposerSlashing)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -219,7 +219,7 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
state := &ethpb.BeaconState{}
a := &ethpb.AttesterSlashing{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(a)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -237,7 +237,7 @@ func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
state := &ethpb.BeaconState{}
attesterSlashing := &ethpb.AttesterSlashing{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attesterSlashing)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -253,7 +253,7 @@ func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
attestationData := &ethpb.AttestationData{}
attestationData2 := &ethpb.AttestationData{}
for i := 0; i < 10000; i++ {
for range 10000 {
fuzzer.Fuzz(attestationData)
fuzzer.Fuzz(attestationData2)
IsSlashableAttestationData(attestationData, attestationData2)
@@ -264,7 +264,7 @@ func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
attesterSlashing := &ethpb.AttesterSlashing{}
for i := 0; i < 10000; i++ {
for range 10000 {
fuzzer.Fuzz(attesterSlashing)
SlashableAttesterIndices(attesterSlashing)
}
@@ -275,7 +275,7 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
state := &ethpb.BeaconState{}
b := &ethpb.SignedBeaconBlock{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(b)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -298,7 +298,7 @@ func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
state := &ethpb.BeaconState{}
idxAttestation := &ethpb.IndexedAttestation{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(idxAttestation)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -313,7 +313,7 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -329,7 +329,7 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
state := &ethpb.BeaconState{}
e := &ethpb.SignedVoluntaryExit{}
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -346,7 +346,7 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
e := &ethpb.SignedVoluntaryExit{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
@@ -366,7 +366,7 @@ func TestFuzzVerifyExit_10000(t *testing.T) {
fork := &ethpb.Fork{}
var slot primitives.Slot
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(ve)
fuzzer.Fuzz(rawVal)
fuzzer.Fuzz(fork)

View File

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
voteCount := uint64(0)
for _, vote := range beaconState.Eth1DataVotes() {
if AreEth1DataEqual(vote, data.Copy()) {
if AreEth1DataEqual(vote, data) {
voteCount++
}
}
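
The removed data.Copy() allocated a full clone of the candidate on every loop iteration even though equality only reads its operands. A toy version of the fix:

package main

import "fmt"

type eth1Data struct {
	depositRoot  string
	depositCount uint64
}

// equal only reads a and b, so no defensive copy is needed at call sites.
func equal(a, b *eth1Data) bool { return *a == *b }

func main() {
	data := &eth1Data{"root", 1}
	votes := []*eth1Data{{"root", 1}, {"other", 2}, {"root", 1}}
	voteCount := 0
	for _, vote := range votes {
		if equal(vote, data) { // previously: equal(vote, data.Copy())
			voteCount++
		}
	}
	fmt.Println(voteCount) // 2
}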

View File

@@ -19,7 +19,7 @@ import (
func FakeDeposits(n uint64) []*ethpb.Eth1Data {
deposits := make([]*ethpb.Eth1Data, n)
for i := uint64(0); i < n; i++ {
for i := range n {
deposits[i] = &ethpb.Eth1Data{
DepositCount: 1,
DepositRoot: bytesutil.PadTo([]byte("root"), 32),
@@ -175,7 +175,7 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
}
period := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().EpochsPerEth1VotingPeriod)))
for i := uint64(0); i < period; i++ {
for range period {
processedState, err := blocks.ProcessEth1DataInBlock(t.Context(), beaconState, b.Block.Body.Eth1Data)
require.NoError(t, err)
require.Equal(t, true, processedState.Version() == version.Phase0)

View File

@@ -27,7 +27,7 @@ func init() {
func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),
@@ -104,7 +104,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),
@@ -148,7 +148,7 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 48),
WithdrawalCredentials: make([]byte, 32),
@@ -189,7 +189,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 48),
WithdrawalCredentials: make([]byte, 32),
@@ -233,7 +233,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
func TestProcessBlockHeader_OK(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),
@@ -293,7 +293,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
func TestBlockSignatureSet_OK(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, 32),
WithdrawalCredentials: make([]byte, 32),

View File

@@ -90,6 +90,9 @@ func IsExecutionEnabled(st state.ReadOnlyBeaconState, body interfaces.ReadOnlyBe
if st == nil || body == nil {
return false, errors.New("nil state or block body")
}
if st.Version() >= version.Capella {
return true, nil
}
if IsPreBellatrixVersion(st.Version()) {
return false, nil
}
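
The new early return encodes that Capella and later states always carry an execution payload, so the function can answer before the comparatively expensive comparison of the latest header against the block body. A simplified sketch of the control flow (version ordinals and the boolean inputs are illustrative, not Prysm's real signature):

package sketch

const (
	bellatrix = 3 // illustrative ordinals; Prysm uses the version package
	capella   = 4
)

func isExecutionEnabled(version int, headerIsEmpty, bodyHasPayload bool) bool {
	if version >= capella {
		return true // post-Capella: execution is unconditionally enabled
	}
	if version < bellatrix {
		return false // pre-Bellatrix forks have no execution payload at all
	}
	// Bellatrix only: enabled once the merge transition produced a payload.
	return !headerIsEmpty || bodyHasPayload
}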

View File

@@ -260,11 +260,12 @@ func Test_IsExecutionBlockCapella(t *testing.T) {
func Test_IsExecutionEnabled(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header interfaces.ExecutionData
useAltairSt bool
want bool
name string
payload *enginev1.ExecutionPayload
header interfaces.ExecutionData
useAltairSt bool
useCapellaSt bool
want bool
}{
{
name: "use older than bellatrix state",
@@ -331,6 +332,17 @@ func Test_IsExecutionEnabled(t *testing.T) {
}(),
want: true,
},
{
name: "capella state always enabled",
payload: emptyPayload(),
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
useCapellaSt: true,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -342,6 +354,8 @@ func Test_IsExecutionEnabled(t *testing.T) {
require.NoError(t, err)
if tt.useAltairSt {
st, _ = util.DeterministicGenesisStateAltair(t, 1)
} else if tt.useCapellaSt {
st, _ = util.DeterministicGenesisStateCapella(t, 1)
}
got, err := blocks.IsExecutionEnabled(st, body)
require.NoError(t, err)
@@ -851,8 +865,7 @@ func BenchmarkBellatrixComplete(b *testing.B) {
require.NoError(b, err)
require.NoError(b, st.SetLatestExecutionPayloadHeader(h))
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
_, err := blocks.IsMergeTransitionComplete(st)
require.NoError(b, err)
}

View File

@@ -28,7 +28,7 @@ func createValidatorsWithTotalActiveBalance(totalBal primitives.Gwei) []*eth.Val
ActivationEpoch: primitives.Epoch(0),
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: []byte(fmt.Sprintf("val_%d", i)),
PublicKey: fmt.Appendf(nil, "val_%d", i),
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawalCredentials: wd,
}
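
fmt.Appendf (Go 1.19) formats directly into a byte slice, avoiding the intermediate string that []byte(fmt.Sprintf(...)) allocates and then copies. Passing nil starts a fresh slice:

package main

import "fmt"

func main() {
	before := []byte(fmt.Sprintf("val_%d", 7)) // string alloc + copy
	after := fmt.Appendf(nil, "val_%d", 7)     // formats straight into a []byte
	fmt.Println(string(before) == string(after)) // true
}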

View File

@@ -16,7 +16,7 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
state := &ethpb.BeaconStateElectra{}
deposits := make([]*ethpb.Deposit, 100)
ctx := t.Context()
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
for i := range deposits {
fuzzer.Fuzz(deposits[i])
@@ -36,7 +36,7 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
state := &ethpb.BeaconStateElectra{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeElectra(state)

View File

@@ -95,7 +95,7 @@ func TestProcessPendingDeposits(t *testing.T) {
require.NoError(t, err)
require.Equal(t, primitives.Gwei(100), res)
// Validators 0..9 should have their balance increased
for i := primitives.ValidatorIndex(0); i < 10; i++ {
for i := range primitives.ValidatorIndex(10) {
b, err := st.BalanceAtIndex(i)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance+uint64(amountAvailForProcessing)/10, b)
@@ -122,7 +122,7 @@ func TestProcessPendingDeposits(t *testing.T) {
check: func(t *testing.T, st state.BeaconState) {
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
// Validators 0 and 1 should have their balance increased
for i := primitives.ValidatorIndex(0); i < 2; i++ {
for i := range primitives.ValidatorIndex(2) {
b, err := st.BalanceAtIndex(i)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance+uint64(amountAvailForProcessing), b)
@@ -149,7 +149,7 @@ func TestProcessPendingDeposits(t *testing.T) {
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), res)
// Validators 0..3 should have their balance increased
for i := primitives.ValidatorIndex(0); i < 4; i++ {
for i := range primitives.ValidatorIndex(4) {
b, err := st.BalanceAtIndex(i)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance+uint64(amountAvailForProcessing)/5, b)
@@ -528,7 +528,7 @@ func stateWithActiveBalanceETH(t *testing.T, balETH uint64) state.BeaconState {
vals := make([]*eth.Validator, numVals)
bals := make([]uint64, numVals)
for i := uint64(0); i < numVals; i++ {
for i := range numVals {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)

View File

@@ -56,7 +56,7 @@ func TestProcessRegistryUpdates(t *testing.T) {
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &eth.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
for i := uint64(0); i < 10; i++ {
for range uint64(10) {
base.Validators = append(base.Validators, &eth.Validator{
ActivationEligibilityEpoch: finalizedEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
@@ -82,7 +82,7 @@ func TestProcessRegistryUpdates(t *testing.T) {
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &eth.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
for i := uint64(0); i < 10; i++ {
for range uint64(10) {
base.Validators = append(base.Validators, &eth.Validator{
EffectiveBalance: params.BeaconConfig().EjectionBalance - 1,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
@@ -108,7 +108,7 @@ func TestProcessRegistryUpdates(t *testing.T) {
Slot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedCheckpoint: &eth.Checkpoint{Epoch: finalizedEpoch, Root: make([]byte, fieldparams.RootLength)},
}
for i := uint64(0); i < 10; i++ {
for range uint64(10) {
base.Validators = append(base.Validators, &eth.Validator{
EffectiveBalance: params.BeaconConfig().EjectionBalance - 1,
ExitEpoch: 10,
@@ -157,7 +157,7 @@ func Benchmark_ProcessRegistryUpdates_MassEjection(b *testing.B) {
st, err := util.NewBeaconStateElectra()
require.NoError(b, err)
for i := 0; i < b.N; i++ {
for b.Loop() {
b.StopTimer()
if err := st.SetValidators(genValidators(100000)); err != nil {
panic(err)

View File

@@ -329,10 +329,7 @@ func ProcessEffectiveBalanceUpdates(st state.BeaconState) (state.BeaconState, er
balance := bals[idx]
if balance+downwardThreshold < val.EffectiveBalance() || val.EffectiveBalance()+upwardThreshold < balance {
effectiveBal := maxEffBalance
if effectiveBal > balance-balance%effBalanceInc {
effectiveBal = balance - balance%effBalanceInc
}
effectiveBal := min(maxEffBalance, balance-balance%effBalanceInc)
if effectiveBal != val.EffectiveBalance() {
newVal = val.Copy()
newVal.EffectiveBalance = effectiveBal
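
This hunk and the inactivity-score change earlier use the Go 1.21 min builtin to replace the four-line clamp idiom. A worked example with the effective-balance numbers:

package main

import "fmt"

func main() {
	const maxEffBalance, effBalanceInc = uint64(32e9), uint64(1e9) // 32 ETH cap, 1 ETH increment, in Gwei
	balance := uint64(33_500_000_000)
	// Round the balance down to the increment, then cap it at the maximum.
	effectiveBal := min(maxEffBalance, balance-balance%effBalanceInc)
	fmt.Println(effectiveBal) // 32000000000
}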

View File

@@ -14,7 +14,7 @@ func TestFuzzFinalUpdates_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
base := &ethpb.BeaconState{}
for i := 0; i < 10000; i++ {
for i := range 10000 {
fuzzer.Fuzz(base)
s, err := state_native.InitializeFromProtoUnsafePhase0(base)
require.NoError(t, err)

View File

@@ -218,7 +218,7 @@ func TestProcessRegistryUpdates_EligibleToActivate_Cancun(t *testing.T) {
cfg.ChurnLimitQuotient = 1
params.OverrideBeaconConfig(cfg)
for i := uint64(0); i < 10; i++ {
for range uint64(10) {
base.Validators = append(base.Validators, &ethpb.Validator{
ActivationEligibilityEpoch: finalizedEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
@@ -314,28 +314,28 @@ func TestProcessRegistryUpdates_CanExits(t *testing.T) {
func buildState(t testing.TB, slot primitives.Slot, validatorCount uint64) state.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
}
validatorBalances := make([]uint64, len(validators))
for i := 0; i < len(validatorBalances); i++ {
for i := range validatorBalances {
validatorBalances[i] = params.BeaconConfig().MaxEffectiveBalance
}
latestActiveIndexRoots := make(
[][]byte,
params.BeaconConfig().EpochsPerHistoricalVector,
)
for i := 0; i < len(latestActiveIndexRoots); i++ {
for i := range latestActiveIndexRoots {
latestActiveIndexRoots[i] = params.BeaconConfig().ZeroHash[:]
}
latestRandaoMixes := make(
[][]byte,
params.BeaconConfig().EpochsPerHistoricalVector,
)
for i := 0; i < len(latestRandaoMixes); i++ {
for i := range latestRandaoMixes {
latestRandaoMixes[i] = params.BeaconConfig().ZeroHash[:]
}
s, err := util.NewBeaconState()

View File

@@ -19,7 +19,7 @@ func TestProcessJustificationAndFinalizationPreCompute_ConsecutiveEpochs(t *test
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
for i := range blockRoots {
blockRoots[i] = []byte{byte(i)}
}
base := &ethpb.BeaconState{
@@ -56,7 +56,7 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyCurrentEpoch(t *te
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
for i := range blockRoots {
blockRoots[i] = []byte{byte(i)}
}
base := &ethpb.BeaconState{
@@ -93,7 +93,7 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyPrevEpoch(t *testi
e := params.BeaconConfig().FarFutureEpoch
a := params.BeaconConfig().MaxEffectiveBalance
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerEpoch*2+1)
for i := 0; i < len(blockRoots); i++ {
for i := range blockRoots {
blockRoots[i] = []byte{byte(i)}
}
base := &ethpb.BeaconState{
@@ -128,7 +128,7 @@ func TestProcessJustificationAndFinalizationPreCompute_JustifyPrevEpoch(t *testi
func TestUnrealizedCheckpoints(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
balances := make([]uint64, len(validators))
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,

View File

@@ -42,7 +42,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
return nil, errors.Wrap(err, "could not get proposer attestation delta")
}
validatorBals := state.Balances()
for i := 0; i < numOfVals; i++ {
for i := range numOfVals {
vp[i].BeforeEpochTransitionBalance = validatorBals[i]
// Compute the post balance of the validator after accounting for the

View File

@@ -24,7 +24,7 @@ func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
validatorCount := uint64(2048)
base := buildState(e+3, validatorCount)
atts := make([]*ethpb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
for i := range atts {
atts[i] = &ethpb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)},
@@ -63,7 +63,7 @@ func TestAttestationDeltas_ZeroEpoch(t *testing.T) {
base := buildState(e+2, validatorCount)
atts := make([]*ethpb.PendingAttestation, 3)
var emptyRoot [32]byte
for i := 0; i < len(atts); i++ {
for i := range atts {
atts[i] = &ethpb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{
@@ -99,7 +99,7 @@ func TestAttestationDeltas_ZeroInclusionDelay(t *testing.T) {
base := buildState(e+2, validatorCount)
atts := make([]*ethpb.PendingAttestation, 3)
var emptyRoot [32]byte
for i := 0; i < len(atts); i++ {
for i := range atts {
atts[i] = &ethpb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{
@@ -131,7 +131,7 @@ func TestProcessRewardsAndPenaltiesPrecompute_SlashedInactivePenalty(t *testing.
validatorCount := uint64(2048)
base := buildState(e+3, validatorCount)
atts := make([]*ethpb.PendingAttestation, 3)
for i := 0; i < len(atts); i++ {
for i := range atts {
atts[i] = &ethpb.PendingAttestation{
Data: &ethpb.AttestationData{
Target: &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)},
@@ -176,28 +176,28 @@ func TestProcessRewardsAndPenaltiesPrecompute_SlashedInactivePenalty(t *testing.
func buildState(slot primitives.Slot, validatorCount uint64) *ethpb.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
}
validatorBalances := make([]uint64, len(validators))
for i := 0; i < len(validatorBalances); i++ {
for i := range validatorBalances {
validatorBalances[i] = params.BeaconConfig().MaxEffectiveBalance
}
latestActiveIndexRoots := make(
[][]byte,
params.BeaconConfig().EpochsPerHistoricalVector,
)
for i := 0; i < len(latestActiveIndexRoots); i++ {
for i := range latestActiveIndexRoots {
latestActiveIndexRoots[i] = params.BeaconConfig().ZeroHash[:]
}
latestRandaoMixes := make(
[][]byte,
params.BeaconConfig().EpochsPerHistoricalVector,
)
for i := 0; i < len(latestRandaoMixes); i++ {
for i := range latestRandaoMixes {
latestRandaoMixes[i] = params.BeaconConfig().ZeroHash[:]
}
return &ethpb.BeaconState{

View File

@@ -17,5 +17,5 @@ type Event struct {
// Type is the type of event.
Type EventType
// Data is event-specific data.
Data interface{}
Data any
}

View File

@@ -54,7 +54,7 @@ func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{

View File

@@ -5,7 +5,7 @@ package helpers
import (
"context"
"fmt"
"sort"
"slices"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
@@ -515,9 +515,7 @@ func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState,
// used for failing verify signature fallback.
sortedIndices := make([]primitives.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
slices.Sort(sortedIndices)
if err := committeeCache.AddCommitteeShuffledList(ctx, &cache.Committees{
ShuffledIndices: shuffledIndices,
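
slices.Sort (Go 1.21) replaces sort.Slice plus a less-closure for ordered element types; it is shorter and avoids the per-comparison closure call and interface boxing, which matters for large validator index slices:

package main

import (
	"fmt"
	"slices"
)

func main() {
	sortedIndices := []uint64{42, 7, 19}
	slices.Sort(sortedIndices) // replaces sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	fmt.Println(sortedIndices) // [7 19 42]
}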

View File

@@ -29,7 +29,7 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -122,7 +122,7 @@ func TestCommitteeAssignments_NoProposerForSlot0(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
for i := range validators {
var activationEpoch primitives.Epoch
if i >= len(validators)/2 {
activationEpoch = 3
@@ -151,7 +151,7 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
// Initialize the test with 256 validators; each slot and each committee index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
validatorIndices := make([]primitives.ValidatorIndex, len(validators))
for i := 0; i < len(validators); i++ {
for i := range validators {
// For the first 2 epochs, only half of the validators are activated.
var activationEpoch primitives.Epoch
if i >= len(validators)/2 {
@@ -234,7 +234,7 @@ func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
// Initialize the test with 256 validators; each slot and each committee index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
for i := range validators {
// For the first 2 epochs, only half of the validators are activated.
var activationEpoch primitives.Epoch
if i >= len(validators)/2 {
@@ -266,7 +266,7 @@ func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *t
// Initialize the test with 256 validators; each slot and each committee index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -287,7 +287,7 @@ func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
// Initialize the test with 256 validators; each slot and each committee index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
@@ -323,7 +323,7 @@ func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
validators := make([]*ethpb.Validator, 2*params.BeaconConfig().SlotsPerEpoch)
activeRoots := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -489,7 +489,7 @@ func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
validators := make([]*ethpb.Validator, 300000)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -512,8 +512,7 @@ func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
panic(err)
}
b.ResetTimer()
for n := 0; n < b.N; n++ {
for b.Loop() {
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
@@ -523,7 +522,7 @@ func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
func BenchmarkComputeCommittee3000000_WithPreCache(b *testing.B) {
validators := make([]*ethpb.Validator, 3000000)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -546,8 +545,7 @@ func BenchmarkComputeCommittee3000000_WithPreCache(b *testing.B) {
panic(err)
}
b.ResetTimer()
for n := 0; n < b.N; n++ {
for b.Loop() {
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
@@ -557,7 +555,7 @@ func BenchmarkComputeCommittee3000000_WithPreCache(b *testing.B) {
func BenchmarkComputeCommittee128000_WithOutPreCache(b *testing.B) {
validators := make([]*ethpb.Validator, 128000)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -576,8 +574,8 @@ func BenchmarkComputeCommittee128000_WithOutPreCache(b *testing.B) {
i := uint64(0)
index := uint64(0)
b.ResetTimer()
for n := 0; n < b.N; n++ {
for b.Loop() {
i++
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
@@ -592,7 +590,7 @@ func BenchmarkComputeCommittee128000_WithOutPreCache(b *testing.B) {
func BenchmarkComputeCommittee1000000_WithOutCache(b *testing.B) {
validators := make([]*ethpb.Validator, 1000000)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -611,8 +609,8 @@ func BenchmarkComputeCommittee1000000_WithOutCache(b *testing.B) {
i := uint64(0)
index := uint64(0)
b.ResetTimer()
for n := 0; n < b.N; n++ {
for b.Loop() {
i++
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
@@ -627,7 +625,7 @@ func BenchmarkComputeCommittee1000000_WithOutCache(b *testing.B) {
func BenchmarkComputeCommittee4000000_WithOutCache(b *testing.B) {
validators := make([]*ethpb.Validator, 4000000)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -646,8 +644,8 @@ func BenchmarkComputeCommittee4000000_WithOutCache(b *testing.B) {
i := uint64(0)
index := uint64(0)
b.ResetTimer()
for n := 0; n < b.N; n++ {
for b.Loop() {
i++
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
@@ -663,7 +661,7 @@ func BenchmarkComputeCommittee4000000_WithOutCache(b *testing.B) {
func TestBeaconCommitteeFromState_UpdateCacheForPreviousEpoch(t *testing.T) {
committeeSize := uint64(16)
validators := make([]*ethpb.Validator, params.BeaconConfig().SlotsPerEpoch.Mul(committeeSize))
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -688,7 +686,7 @@ func TestBeaconCommitteeFromState_UpdateCacheForPreviousEpoch(t *testing.T) {
func TestPrecomputeProposerIndices_Ok(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -732,7 +730,7 @@ func TestAttestationCommitteesFromState(t *testing.T) {
ctx := t.Context()
validators := make([]*ethpb.Validator, params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().TargetCommitteeSize))
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -768,7 +766,7 @@ func TestAttestationCommitteesFromCache(t *testing.T) {
ctx := t.Context()
validators := make([]*ethpb.Validator, params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().TargetCommitteeSize))
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -934,7 +932,7 @@ func TestInitializeProposerLookahead_RegressionTest(t *testing.T) {
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, state, epoch)
require.NoError(t, err)
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
for epochOffset := primitives.Epoch(0); epochOffset < 2; epochOffset++ {
for epochOffset := range primitives.Epoch(2) {
targetEpoch := epoch + epochOffset
activeIndices, err := helpers.ActiveValidatorIndices(ctx, state, targetEpoch)

View File

@@ -16,7 +16,7 @@ import (
func TestRandaoMix_OK(t *testing.T) {
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
for i := range randaoMixes {
intInBytes := make([]byte, 32)
binary.LittleEndian.PutUint64(intInBytes, uint64(i))
randaoMixes[i] = intInBytes
@@ -52,7 +52,7 @@ func TestRandaoMix_OK(t *testing.T) {
func TestRandaoMix_CopyOK(t *testing.T) {
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
for i := range randaoMixes {
intInBytes := make([]byte, 32)
binary.LittleEndian.PutUint64(intInBytes, uint64(i))
randaoMixes[i] = intInBytes
@@ -96,7 +96,7 @@ func TestGenerateSeed_OK(t *testing.T) {
helpers.ClearCache()
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
for i := range randaoMixes {
intInBytes := make([]byte, 32)
binary.LittleEndian.PutUint64(intInBytes, uint64(i))
randaoMixes[i] = intInBytes

View File

@@ -239,28 +239,28 @@ func TestIsInInactivityLeak(t *testing.T) {
func buildState(slot primitives.Slot, validatorCount uint64) *ethpb.BeaconState {
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
}
}
validatorBalances := make([]uint64, len(validators))
for i := 0; i < len(validatorBalances); i++ {
for i := range validatorBalances {
validatorBalances[i] = params.BeaconConfig().MaxEffectiveBalance
}
latestActiveIndexRoots := make(
[][]byte,
params.BeaconConfig().EpochsPerHistoricalVector,
)
for i := 0; i < len(latestActiveIndexRoots); i++ {
for i := range latestActiveIndexRoots {
latestActiveIndexRoots[i] = params.BeaconConfig().ZeroHash[:]
}
latestRandaoMixes := make(
[][]byte,
params.BeaconConfig().EpochsPerHistoricalVector,
)
for i := 0; i < len(latestRandaoMixes); i++ {
for i := range latestRandaoMixes {
latestRandaoMixes[i] = params.BeaconConfig().ZeroHash[:]
}
return &ethpb.BeaconState{

View File

@@ -23,7 +23,7 @@ var maxShuffleListSize uint64 = 1 << 40
func SplitIndices(l []uint64, n uint64) [][]uint64 {
var divided [][]uint64
var lSize = uint64(len(l))
for i := uint64(0); i < n; i++ {
for i := range n {
start := slice.SplitOffset(lSize, n, i)
end := slice.SplitOffset(lSize, n, i+1)
divided = append(divided, l[start:end])
@@ -103,10 +103,7 @@ func ComputeShuffledIndex(index primitives.ValidatorIndex, indexCount uint64, se
pivot := hash8Int % indexCount
flip := (pivot + indexCount - uint64(index)) % indexCount
// Consider every pair only once by picking the highest pair index to retrieve randomness.
position := uint64(index)
if flip > position {
position = flip
}
position := max(flip, uint64(index))
// Add position except its last byte to []buf for randomness,
// it will be used later to select a bit from the resulting hash.
binary.LittleEndian.PutUint64(posBuffer[:8], position>>8)

View File

@@ -30,7 +30,7 @@ func TestShuffleList_OK(t *testing.T) {
var list1 []primitives.ValidatorIndex
seed1 := [32]byte{1, 128, 12}
seed2 := [32]byte{2, 128, 12}
for i := 0; i < 10; i++ {
for i := range 10 {
list1 = append(list1, primitives.ValidatorIndex(i))
}
@@ -55,7 +55,7 @@ func TestSplitIndices_OK(t *testing.T) {
var l []uint64
numValidators := uint64(64000)
for i := uint64(0); i < numValidators; i++ {
for i := range numValidators {
l = append(l, i)
}
split := SplitIndices(l, uint64(params.BeaconConfig().SlotsPerEpoch))
@@ -104,7 +104,7 @@ func BenchmarkIndexComparison(b *testing.B) {
seed := [32]byte{123, 42}
for _, listSize := range listSizes {
b.Run(fmt.Sprintf("Indexwise_ShuffleList_%d", listSize), func(ib *testing.B) {
for i := 0; i < ib.N; i++ {
for ib.Loop() {
// Simulate a list-shuffle by running shuffle-index listSize times.
for j := primitives.ValidatorIndex(0); uint64(j) < listSize; j++ {
_, err := ShuffledIndex(j, listSize, seed)
@@ -120,11 +120,11 @@ func BenchmarkShuffleList(b *testing.B) {
seed := [32]byte{123, 42}
for _, listSize := range listSizes {
testIndices := make([]primitives.ValidatorIndex, listSize)
for i := uint64(0); i < listSize; i++ {
for i := range listSize {
testIndices[i] = primitives.ValidatorIndex(i)
}
b.Run(fmt.Sprintf("ShuffleList_%d", listSize), func(ib *testing.B) {
for i := 0; i < ib.N; i++ {
for ib.Loop() {
_, err := ShuffleList(testIndices, seed)
assert.NoError(b, err)
}
@@ -161,12 +161,12 @@ func TestSplitIndicesAndOffset_OK(t *testing.T) {
var l []uint64
validators := uint64(64000)
for i := uint64(0); i < validators; i++ {
for i := range validators {
l = append(l, i)
}
chunks := uint64(6)
split := SplitIndices(l, chunks)
for i := uint64(0); i < chunks; i++ {
for i := range chunks {
if !reflect.DeepEqual(split[i], l[slice.SplitOffset(uint64(len(l)), chunks, i):slice.SplitOffset(uint64(len(l)), chunks, i+1)]) {
t.Errorf("Want: %v got: %v", l[slice.SplitOffset(uint64(len(l)), chunks, i):slice.SplitOffset(uint64(len(l)), chunks, i+1)], split[i])
break

View File

@@ -24,7 +24,7 @@ func TestCurrentPeriodPositions(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -56,7 +56,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -87,7 +87,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -116,7 +116,7 @@ func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -144,7 +144,7 @@ func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -175,7 +175,7 @@ func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -203,7 +203,7 @@ func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -231,7 +231,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -262,7 +262,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -304,7 +304,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -332,7 +332,7 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -363,7 +363,7 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -391,7 +391,7 @@ func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
@@ -449,7 +449,7 @@ func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
}
for i := 0; i < len(validators); i++ {
for i := range validators {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{

View File

@@ -184,7 +184,7 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount/8)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -241,7 +241,7 @@ func TestBeaconProposerIndex_BadState(t *testing.T) {
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount/8)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -270,7 +270,7 @@ func TestComputeProposerIndex_Compatibility(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -322,7 +322,7 @@ func TestActiveValidatorCount_Genesis(t *testing.T) {
c := 1000
validators := make([]*ethpb.Validator, c)
- for i := 0; i < len(validators); i++ {
+ for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -357,7 +357,7 @@ func TestChurnLimit_OK(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, test.validatorCount)
- for i := 0; i < len(validators); i++ {
+ for i := range validators {
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -861,7 +861,7 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
validators := make([]*ethpb.Validator, 4)
balances := make([]uint64, len(validators))
- for i := uint64(0); i < 4; i++ {
+ for i := range uint64(4) {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, params.BeaconConfig().BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),

View File

@@ -270,7 +270,7 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
validators := make([]*ethpb.Validator, valCount)
balances := make([]uint64, len(validators))
- for i := uint64(0); i < valCount; i++ {
+ for i := range valCount {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, params.BeaconConfig().BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),

View File

@@ -45,12 +45,13 @@ go_test(
"p2p_interface_test.go",
"reconstruction_helpers_test.go",
"reconstruction_test.go",
"semi_supernode_test.go",
"utils_test.go",
"validator_test.go",
"verification_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -5,6 +5,7 @@ import (
"math"
"slices"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -96,8 +97,7 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
return nil, ErrCustodyGroupTooLarge
}
- numberOfColumns := cfg.NumberOfColumns
+ numberOfColumns := uint64(fieldparams.NumberOfColumns)
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
columns := make([]uint64, 0, columnsPerGroup)
@@ -112,8 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
+ const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
- numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {
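For readers following the migration in these hunks: `NumberOfColumns` moves from the runtime `params.BeaconConfig()` to the compile-time `fieldparams.NumberOfColumns` constant, while `NumberOfCustodyGroups` stays runtime-configurable. The reciprocal relationship between the two functions above can be sketched as follows, assuming the stride layout from the linked consensus-specs page (group g owns columns g, g+numGroups, g+2·numGroups, …); this is an illustration, not Prysm's implementation, and the constant values are merely representative:

```go
package main

import "fmt"

const (
	numberOfColumns       = uint64(128) // fieldparams.NumberOfColumns (compile-time)
	numberOfCustodyGroups = uint64(128) // runtime-configurable in Prysm
)

// columnsForGroup mirrors ComputeColumnsForCustodyGroup under the assumed stride layout.
func columnsForGroup(group uint64) []uint64 {
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	columns := make([]uint64, 0, columnsPerGroup)
	for i := uint64(0); i < columnsPerGroup; i++ {
		columns = append(columns, numberOfCustodyGroups*i+group)
	}
	return columns
}

// groupForColumn inverts the stride layout, as ComputeCustodyGroupForColumn does.
func groupForColumn(column uint64) uint64 {
	return column % numberOfCustodyGroups
}

func main() {
	for _, c := range columnsForGroup(5) {
		fmt.Println(c, "->", groupForColumn(c)) // each owned column maps back to group 5
	}
}
```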

View File

@@ -30,7 +30,6 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
- config.NumberOfColumns = 128
config.NumberOfCustodyGroups = 64
params.OverrideBeaconConfig(config)

View File

@@ -2,6 +2,7 @@ package peerdas
import (
"encoding/binary"
"maps"
"sync"
"github.com/ethereum/go-ethereum/p2p/enode"
@@ -107,3 +108,102 @@ func computeInfoCacheKey(nodeID enode.ID, custodyGroupCount uint64) [nodeInfoCac
return key
}
// ColumnIndices represents a set of column indices. This could be the set of indices that a node is required to custody,
// the set that a peer custodies, missing indices for a given block, indices that are present on disk, etc.
type ColumnIndices map[uint64]struct{}
// Has returns true if the index is present in the ColumnIndices.
func (ci ColumnIndices) Has(index uint64) bool {
_, ok := ci[index]
return ok
}
// Count returns the number of indices present in the ColumnIndices.
func (ci ColumnIndices) Count() int {
return len(ci)
}
// Set sets the index in the ColumnIndices.
func (ci ColumnIndices) Set(index uint64) {
ci[index] = struct{}{}
}
// Unset removes the index from the ColumnIndices.
func (ci ColumnIndices) Unset(index uint64) {
delete(ci, index)
}
// Copy creates a copy of the ColumnIndices.
func (ci ColumnIndices) Copy() ColumnIndices {
newCi := make(ColumnIndices, len(ci))
maps.Copy(newCi, ci)
return newCi
}
// Intersection returns a new ColumnIndices that contains only the indices that are present in both ColumnIndices.
func (ci ColumnIndices) Intersection(other ColumnIndices) ColumnIndices {
result := make(ColumnIndices)
for index := range ci {
if other.Has(index) {
result.Set(index)
}
}
return result
}
// Merge mutates the receiver so that any index that is set in either of
// the two ColumnIndices is set in the receiver after the function finishes.
// It does not mutate the other ColumnIndices given as a function argument.
func (ci ColumnIndices) Merge(other ColumnIndices) {
for index := range other {
ci.Set(index)
}
}
// ToMap converts a ColumnIndices into a map[uint64]struct{}.
// In the future ColumnIndices may be changed to a bit map, so using
// ToMap will ensure forwards-compatibility.
func (ci ColumnIndices) ToMap() map[uint64]struct{} {
return ci.Copy()
}
// ToSlice converts a ColumnIndices into a slice of uint64 indices.
func (ci ColumnIndices) ToSlice() []uint64 {
indices := make([]uint64, 0, len(ci))
for index := range ci {
indices = append(indices, index)
}
return indices
}
// NewColumnIndicesFromSlice creates a ColumnIndices from a slice of uint64.
func NewColumnIndicesFromSlice(indices []uint64) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for _, index := range indices {
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndicesFromMap creates a ColumnIndices from a map[uint64]bool. This kind of map
// is used in several places in peerdas code. Converting from this map type to ColumnIndices
// will allow us to move ColumnIndices underlying type to a bitmap in the future and avoid
// lots of loops for things like intersections/unions or copies.
func NewColumnIndicesFromMap(indices map[uint64]bool) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for index, set := range indices {
if !set {
continue
}
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndices creates an empty ColumnIndices.
// In the future ColumnIndices may change from a reference type to a value type,
// so using this constructor will ensure forwards-compatibility.
func NewColumnIndices() ColumnIndices {
return make(ColumnIndices)
}
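A quick usage sketch of the new set type added above (the import path is taken from elsewhere in this diff; the index values are illustrative):

```go
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
)

func main() {
	// Indices a block still needs vs. indices already on disk.
	missing := peerdas.NewColumnIndicesFromSlice([]uint64{0, 1, 2, 3})
	onDisk := peerdas.NewColumnIndicesFromSlice([]uint64{2, 3, 4})

	// Intersection returns a fresh set; neither input is mutated.
	stored := missing.Intersection(onDisk)
	fmt.Println("already stored:", stored.ToSlice(), "count:", stored.Count())

	// Merge mutates the receiver, so union onto a copy to keep `missing` intact.
	union := missing.Copy()
	union.Merge(onDisk)
	fmt.Println("union has column 4:", union.Has(4), "size:", union.Count())
}
```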

View File

@@ -25,3 +25,10 @@ func TestInfo(t *testing.T) {
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestNewColumnIndicesFromMap(t *testing.T) {
t.Run("nil map", func(t *testing.T) {
ci := peerdas.NewColumnIndicesFromMap(nil)
require.Equal(t, 0, ci.Count())
})
}

View File

@@ -33,8 +33,7 @@ func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCou
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
- numberOfColumns := params.BeaconConfig().NumberOfColumns
- if sidecar.Index >= numberOfColumns {
+ if sidecar.Index >= fieldparams.NumberOfColumns {
return ErrIndexTooLarge
}

View File

@@ -100,7 +100,7 @@ func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
// Generate random KZG commitments for `blobCount` blobs.
kzgCommitments := make([][]byte, blobCount)
- for i := 0; i < blobCount; i++ {
+ for i := range blobCount {
kzgCommitments[i] = make([]byte, 48)
_, err := rand.Read(kzgCommitments[i])
require.NoError(t, err)
@@ -281,8 +281,11 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.B) {
- const blobCount = 12
- numberOfColumns := int64(params.BeaconConfig().NumberOfColumns)
+ const (
+ blobCount = 12
+ numberOfColumns = fieldparams.NumberOfColumns
+ )
err := kzg.Start()
require.NoError(b, err)

View File

@@ -26,7 +26,40 @@ var (
func MinimumColumnCountToReconstruct() uint64 {
// If the number of columns is odd, then we need total / 2 + 1 columns to reconstruct.
// If the number of columns is even, then we need total / 2 columns to reconstruct.
- return (params.BeaconConfig().NumberOfColumns + 1) / 2
+ return (fieldparams.NumberOfColumns + 1) / 2
}
// MinimumCustodyGroupCountToReconstruct returns the minimum number of custody groups needed to
// custody enough data columns for reconstruction. This accounts for the relationship between
// custody groups and columns, making it future-proof if these values change.
// Returns an error if the configuration values are invalid (zero or would cause division by zero).
func MinimumCustodyGroupCountToReconstruct() (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
// Validate configuration values
if numberOfColumns == 0 {
return 0, errors.New("NumberOfColumns cannot be zero")
}
if cfg.NumberOfCustodyGroups == 0 {
return 0, errors.New("NumberOfCustodyGroups cannot be zero")
}
minimumColumnCount := MinimumColumnCountToReconstruct()
// Calculate how many columns each custody group represents
columnsPerGroup := numberOfColumns / cfg.NumberOfCustodyGroups
// If there are more groups than columns (columnsPerGroup = 0), this is an invalid configuration
// for reconstruction purposes as we cannot determine a meaningful custody group count
if columnsPerGroup == 0 {
return 0, errors.Errorf("invalid configuration: NumberOfCustodyGroups (%d) exceeds NumberOfColumns (%d)",
cfg.NumberOfCustodyGroups, numberOfColumns)
}
// Use ceiling division to ensure we have enough groups to cover the minimum columns
// ceiling(a/b) = (a + b - 1) / b
return (minimumColumnCount + columnsPerGroup - 1) / columnsPerGroup, nil
}
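The ceiling division in the new MinimumCustodyGroupCountToReconstruct is easiest to see with concrete numbers. A standalone sketch, with values mirroring the TestMinimumCustodyGroupCountToReconstruct cases added later in this diff:

```go
package main

import "fmt"

func main() {
	// 128 columns is fixed at compile time (fieldparams.NumberOfColumns).
	const numberOfColumns = uint64(128)
	minimumColumns := (numberOfColumns + 1) / 2 // 64

	for _, groups := range []uint64{128, 64, 32} {
		columnsPerGroup := numberOfColumns / groups
		// Ceiling division: ceil(a/b) = (a + b - 1) / b.
		need := (minimumColumns + columnsPerGroup - 1) / columnsPerGroup
		fmt.Printf("%d groups -> need %d groups\n", groups, need) // 64, 32, 16
	}
}
```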
// recoverCellsForBlobs reconstructs cells for specified blobs from the given data column sidecars.
@@ -253,7 +286,8 @@ func ReconstructBlobSidecars(block blocks.ROBlock, verifiedDataColumnSidecars []
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
- numberOfColumns := params.BeaconConfig().NumberOfColumns
+ const numberOfColumns = fieldparams.NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
@@ -295,8 +329,6 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
- numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
@@ -315,7 +347,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
return nil, nil, errors.Wrap(err, "compute cells")
}
- kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
+ kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
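Both helpers above expect `blobCount * numberOfColumns` cell proofs. For the flat variant, the test later in this diff flattens per-blob proof slices in order, which implies a row-major layout. A hedged sketch of that indexing (`proofAt` is a hypothetical helper, not a Prysm API):

```go
package main

import "fmt"

const numberOfColumns = 128 // fieldparams.NumberOfColumns

// proofAt assumes proofs are flattened blob by blob, so the proof for
// (blob b, column c) sits at index b*numberOfColumns + c.
func proofAt(cellProofs [][]byte, blob, column int) []byte {
	return cellProofs[blob*numberOfColumns+column]
}

func main() {
	const blobCount = 2
	flat := make([][]byte, blobCount*numberOfColumns)
	for i := range flat {
		flat[i] = []byte{byte(i / numberOfColumns), byte(i % numberOfColumns)}
	}
	fmt.Println(proofAt(flat, 1, 3)) // [1 3]: blob 1, column 3
}
```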

View File

@@ -16,11 +16,11 @@ import (
// testBlobSetup holds common test data for blob reconstruction tests.
type testBlobSetup struct {
- blobCount int
- blobs []kzg.Blob
- roBlock blocks.ROBlock
- roDataColumnSidecars []blocks.RODataColumn
- verifiedRoDataColumnSidecars []blocks.VerifiedRODataColumn
+ blobCount                    int
+ blobs                        []kzg.Blob
+ roBlock                      blocks.ROBlock
+ roDataColumnSidecars         []blocks.RODataColumn
+ verifiedRoDataColumnSidecars []blocks.VerifiedRODataColumn
}
// setupTestBlobs creates a complete test setup with blobs, cells, proofs, and data column sidecars.

View File

@@ -17,41 +17,9 @@ import (
)
func TestMinimumColumnsCountToReconstruct(t *testing.T) {
- testCases := []struct {
- name string
- numberOfColumns uint64
- expected uint64
- }{
- {
- name: "numberOfColumns=128",
- numberOfColumns: 128,
- expected: 64,
- },
- {
- name: "numberOfColumns=129",
- numberOfColumns: 129,
- expected: 65,
- },
- {
- name: "numberOfColumns=130",
- numberOfColumns: 130,
- expected: 65,
- },
- }
- for _, tc := range testCases {
- t.Run(tc.name, func(t *testing.T) {
- // Set the total number of columns.
- params.SetupTestConfigCleanup(t)
- cfg := params.BeaconConfig().Copy()
- cfg.NumberOfColumns = tc.numberOfColumns
- params.OverrideBeaconConfig(cfg)
- // Compute the minimum number of columns needed to reconstruct.
- actual := peerdas.MinimumColumnCountToReconstruct()
- require.Equal(t, tc.expected, actual)
- })
- }
+ const expected = uint64(64)
+ actual := peerdas.MinimumColumnCountToReconstruct()
+ require.Equal(t, expected, actual)
}
func TestReconstructDataColumnSidecars(t *testing.T) {
@@ -200,7 +168,6 @@ func TestReconstructBlobSidecars(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 3
- numberOfColumns := params.BeaconConfig().NumberOfColumns
roBlock, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
@@ -236,7 +203,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
require.NoError(t, err)
// Flatten proofs.
- cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
+ cellProofs := make([][]byte, 0, blobCount*fieldparams.NumberOfColumns)
for _, proofs := range inputProofsPerBlob {
for _, proof := range proofs {
cellProofs = append(cellProofs, proof[:])
@@ -428,13 +395,12 @@ func TestReconstructBlobs(t *testing.T) {
}
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
+ const numberOfColumns = fieldparams.NumberOfColumns
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
t.Run("mismatched blob and proof counts", func(t *testing.T) {
- numberOfColumns := params.BeaconConfig().NumberOfColumns
// Create one blob but proofs for two blobs
blobs := [][]byte{{}}
@@ -447,7 +413,6 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 2
- numberOfColumns := params.BeaconConfig().NumberOfColumns
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)

View File

@@ -0,0 +1,137 @@
package peerdas
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
func TestSemiSupernodeCustody(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
params.OverrideBeaconConfig(cfg)
// Create a test node ID
nodeID := enode.ID([32]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32})
t.Run("semi-supernode custodies exactly 64 columns", func(t *testing.T) {
// Semi-supernode uses 64 custody groups (half of 128)
const semiSupernodeCustodyGroupCount = 64
// Get custody groups for semi-supernode
custodyGroups, err := CustodyGroups(nodeID, semiSupernodeCustodyGroupCount)
require.NoError(t, err)
require.Equal(t, semiSupernodeCustodyGroupCount, len(custodyGroups))
// Verify we get exactly 64 custody columns
custodyColumns, err := CustodyColumns(custodyGroups)
require.NoError(t, err)
require.Equal(t, semiSupernodeCustodyGroupCount, len(custodyColumns))
// Verify the columns are valid (within 0-127 range)
for columnIndex := range custodyColumns {
if columnIndex >= numberOfColumns {
t.Fatalf("Invalid column index %d, should be less than %d", columnIndex, numberOfColumns)
}
}
})
t.Run("64 columns is exactly the minimum for reconstruction", func(t *testing.T) {
minimumCount := MinimumColumnCountToReconstruct()
require.Equal(t, uint64(64), minimumCount)
})
t.Run("semi-supernode vs supernode custody", func(t *testing.T) {
// Semi-supernode (64 custody groups)
semiSupernodeGroups, err := CustodyGroups(nodeID, 64)
require.NoError(t, err)
semiSupernodeColumns, err := CustodyColumns(semiSupernodeGroups)
require.NoError(t, err)
// Supernode (128 custody groups = all groups)
supernodeGroups, err := CustodyGroups(nodeID, 128)
require.NoError(t, err)
supernodeColumns, err := CustodyColumns(supernodeGroups)
require.NoError(t, err)
// Verify semi-supernode has exactly half the columns of supernode
require.Equal(t, 64, len(semiSupernodeColumns))
require.Equal(t, 128, len(supernodeColumns))
require.Equal(t, len(supernodeColumns)/2, len(semiSupernodeColumns))
// Verify all semi-supernode columns are a subset of supernode columns
for columnIndex := range semiSupernodeColumns {
if !supernodeColumns[columnIndex] {
t.Fatalf("Semi-supernode column %d not found in supernode columns", columnIndex)
}
}
})
}
func TestMinimumCustodyGroupCountToReconstruct(t *testing.T) {
tests := []struct {
name string
numberOfGroups uint64
expectedResult uint64
}{
{
name: "Standard 1:1 ratio (128 columns, 128 groups)",
numberOfGroups: 128,
expectedResult: 64, // Need half of 128 groups
},
{
name: "2 columns per group (128 columns, 64 groups)",
numberOfGroups: 64,
expectedResult: 32, // Need 64 columns, which is 32 groups (64/2)
},
{
name: "4 columns per group (128 columns, 32 groups)",
numberOfGroups: 32,
expectedResult: 16, // Need 64 columns, which is 16 groups (64/4)
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = tt.numberOfGroups
params.OverrideBeaconConfig(cfg)
result, err := MinimumCustodyGroupCountToReconstruct()
require.NoError(t, err)
require.Equal(t, tt.expectedResult, result)
})
}
}
func TestMinimumCustodyGroupCountToReconstruct_ErrorCases(t *testing.T) {
t.Run("Returns error when NumberOfCustodyGroups is zero", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 0
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
require.Equal(t, true, err.Error() == "NumberOfCustodyGroups cannot be zero")
})
t.Run("Returns error when NumberOfCustodyGroups exceeds NumberOfColumns", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 256
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
// Just check that we got an error about the configuration
require.Equal(t, true, len(err.Error()) > 0)
})
}
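The "semi-supernode" threshold exercised above falls out of the reconstruction math: with 128 columns, reconstruction needs (128+1)/2 = 64 columns, and at 128 custody groups each group maps to exactly one column, so custodying half of all groups is precisely enough to reconstruct every column locally. A one-line check under those assumptions (constants are illustrative, not pulled from config):

```go
package main

import "fmt"

func main() {
	const (
		numberOfColumns       = uint64(128) // fieldparams.NumberOfColumns
		numberOfCustodyGroups = uint64(128) // 1 column per group at this ratio
		semiSupernodeGroups   = numberOfCustodyGroups / 2
	)
	columnsHeld := semiSupernodeGroups * (numberOfColumns / numberOfCustodyGroups)
	fmt.Println(columnsHeld >= (numberOfColumns+1)/2) // true: 64 >= 64
}
```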

View File

@@ -102,11 +102,13 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
+ const numberOfColumns = uint64(fieldparams.NumberOfColumns)
if len(cellsPerBlob) == 0 {
return nil, nil
}
start := time.Now()
- cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, params.BeaconConfig().NumberOfColumns)
+ cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
@@ -115,9 +117,8 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
return nil, errors.Wrap(err, "extract block info")
}
- maxIdx := params.BeaconConfig().NumberOfColumns
- roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
- for idx := range maxIdx {
+ roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
+ for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],
@@ -216,7 +217,7 @@ func rotateRowsToCols(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, nu
if len(cells) != len(proofs) {
return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough proofs")
}
- for j := uint64(0); j < numCols; j++ {
+ for j := range numCols {
if i == 0 {
cellCols[j] = make([][]byte, len(cellsPerBlob))
proofCols[j] = make([][]byte, len(cellsPerBlob))
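rotateRowsToCols transposes per-blob rows of cells and proofs into per-column slices; the hunk above only modernizes its inner loop. A minimal standalone sketch of the shape change (ints stand in for cells/proofs; this is not the Prysm implementation):

```go
package main

import "fmt"

// rotate transposes rows[blob][col] into cols[col][blob].
func rotate(rows [][]int, numCols int) [][]int {
	cols := make([][]int, numCols)
	for j := range numCols {
		cols[j] = make([]int, len(rows))
	}
	for i, row := range rows {
		for j := range numCols {
			cols[j][i] = row[j]
		}
	}
	return cols
}

func main() {
	rows := [][]int{{1, 2, 3}, {4, 5, 6}} // 2 blobs x 3 columns
	fmt.Println(rotate(rows, 3))          // [[1 4] [2 5] [3 6]]
}
```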

View File

@@ -6,7 +6,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -59,6 +59,8 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
}
func TestDataColumnSidecars(t *testing.T) {
+ const numberOfColumns = fieldparams.NumberOfColumns
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
@@ -69,10 +71,10 @@ func TestDataColumnSidecars(t *testing.T) {
// Create cells and proofs.
cellsPerBlob := [][]kzg.Cell{
- make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
+ make([]kzg.Cell, numberOfColumns),
}
proofsPerBlob := [][]kzg.Proof{
- make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
+ make([]kzg.Proof, numberOfColumns),
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
@@ -117,7 +119,6 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
- numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
}
@@ -149,7 +150,6 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
- numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),
@@ -197,6 +197,7 @@ func TestDataColumnSidecars(t *testing.T) {
}
func TestReconstructionSource(t *testing.T) {
+ const numberOfColumns = fieldparams.NumberOfColumns
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
@@ -212,7 +213,6 @@ func TestReconstructionSource(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
- numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),

View File

@@ -119,7 +119,7 @@ func TestFuzzverifySigningRoot_10000(_ *testing.T) {
var p []byte
var s []byte
var d []byte
- for i := 0; i < 10000; i++ {
+ for range 10000 {
fuzzer.Fuzz(st)
fuzzer.Fuzz(&pubkey)
fuzzer.Fuzz(&sig)

View File

@@ -28,8 +28,7 @@ func BenchmarkExecuteStateTransition_FullBlock(b *testing.B) {
block, err := benchmark.PreGenFullBlock()
require.NoError(b, err)
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for i := 0; b.Loop(); i++ {
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(b, err)
_, err = coreState.ExecuteStateTransition(b.Context(), cleanStates[i], wsb)
@@ -60,8 +59,7 @@ func BenchmarkExecuteStateTransition_WithCache(b *testing.B) {
_, err = coreState.ExecuteStateTransition(b.Context(), beaconState, wsb)
require.NoError(b, err, "Failed to process block, benchmarks will fail")
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for i := 0; b.Loop(); i++ {
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(b, err)
_, err = coreState.ExecuteStateTransition(b.Context(), cleanStates[i], wsb)
@@ -83,8 +81,7 @@ func BenchmarkProcessEpoch_2FullEpochs(b *testing.B) {
require.NoError(b, helpers.UpdateCommitteeCache(b.Context(), beaconState, time.CurrentEpoch(beaconState)))
require.NoError(b, beaconState.SetSlot(currentSlot))
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
// ProcessEpochPrecompute is the optimized version of process epoch. It's enabled by default
// at run time.
_, err := coreState.ProcessEpochPrecompute(b.Context(), beaconState.Copy())
@@ -96,8 +93,7 @@ func BenchmarkHashTreeRoot_FullState(b *testing.B) {
beaconState, err := benchmark.PreGenstateFullEpochs()
require.NoError(b, err)
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
_, err := beaconState.HashTreeRoot(b.Context())
require.NoError(b, err)
}
@@ -113,8 +109,7 @@ func BenchmarkHashTreeRootState_FullState(b *testing.B) {
_, err = beaconState.HashTreeRoot(ctx)
require.NoError(b, err)
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
_, err := beaconState.HashTreeRoot(ctx)
require.NoError(b, err)
}
@@ -128,7 +123,7 @@ func BenchmarkMarshalState_FullState(b *testing.B) {
b.Run("Proto_Marshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
_, err := proto.Marshal(natState)
require.NoError(b, err)
}
@@ -137,7 +132,7 @@ func BenchmarkMarshalState_FullState(b *testing.B) {
b.Run("Fast_SSZ_Marshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
_, err := natState.MarshalSSZ()
require.NoError(b, err)
}
@@ -157,7 +152,7 @@ func BenchmarkUnmarshalState_FullState(b *testing.B) {
b.Run("Proto_Unmarshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
require.NoError(b, proto.Unmarshal(protoObject, &ethpb.BeaconState{}))
}
})
@@ -165,7 +160,7 @@ func BenchmarkUnmarshalState_FullState(b *testing.B) {
b.Run("Fast_SSZ_Unmarshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
sszState := &ethpb.BeaconState{}
require.NoError(b, sszState.UnmarshalSSZ(sszObject))
}
@@ -174,7 +169,7 @@ func BenchmarkUnmarshalState_FullState(b *testing.B) {
func clonedStates(beaconState state.BeaconState) []state.BeaconState {
clonedStates := make([]state.BeaconState, runAmount)
- for i := 0; i < runAmount; i++ {
+ for i := range runAmount {
clonedStates[i] = beaconState.Copy()
}
return clonedStates
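These benchmark hunks adopt Go 1.24's testing.B.Loop, which times exactly one iteration per call and removes the need to index over b.N; where the body still needs a per-iteration index (to pick a pre-cloned state), the diff keeps a counter alongside the loop condition. A minimal sketch of both forms (`doWork`, `makeFixtures`, and `consume` are hypothetical stand-ins):

```go
package bench

import "testing"

func doWork() int { return 42 }

func makeFixtures(n int) []int {
	out := make([]int, n)
	for i := range out {
		out[i] = i
	}
	return out
}

func consume(v int) int { return v + 1 }

// Go 1.24+ style: b.Loop() reports one timed iteration per call, so the
// manual `for i := 0; i < b.N; i++` loop is no longer needed.
func BenchmarkLoop(b *testing.B) {
	for b.Loop() {
		_ = doWork()
	}
}

// Where the body needs a distinct fixture per iteration, keep a counter
// beside b.Loop(), as the diff does with its pre-cloned states.
func BenchmarkLoopWithCounter(b *testing.B) {
	fixtures := makeFixtures(1024)
	for i := 0; b.Loop(); i++ {
		_ = consume(fixtures[i%len(fixtures)])
	}
}
```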

View File

@@ -108,7 +108,7 @@ func TestSkipSlotCache_ConcurrentMixup(t *testing.T) {
// prepare copies for both states
var setups []state.BeaconState
- for i := uint64(0); i < 300; i++ {
+ for i := range uint64(300) {
var st state.BeaconState
if i%2 == 0 {
st = s1

View File

@@ -95,7 +95,7 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
}
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
- for i := 0; i < len(randaoMixes); i++ {
+ for i := range randaoMixes {
h := make([]byte, 32)
copy(h, eth1Data.BlockHash)
randaoMixes[i] = h
@@ -104,17 +104,17 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
zeroHash := params.BeaconConfig().ZeroHash[:]
activeIndexRoots := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
- for i := 0; i < len(activeIndexRoots); i++ {
+ for i := range activeIndexRoots {
activeIndexRoots[i] = zeroHash
}
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot)
- for i := 0; i < len(blockRoots); i++ {
+ for i := range blockRoots {
blockRoots[i] = zeroHash
}
stateRoots := make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot)
- for i := 0; i < len(stateRoots); i++ {
+ for i := range stateRoots {
stateRoots[i] = zeroHash
}
@@ -131,7 +131,7 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
}
scoresMissing := len(preState.Validators()) - len(scores)
if scoresMissing > 0 {
- for i := 0; i < scoresMissing; i++ {
+ for range scoresMissing {
scores = append(scores, 0)
}
}

View File

@@ -122,7 +122,7 @@ func OptimizedGenesisBeaconState(genesisTime uint64, preState state.BeaconState,
}
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
- for i := 0; i < len(randaoMixes); i++ {
+ for i := range randaoMixes {
h := make([]byte, 32)
copy(h, eth1Data.BlockHash)
randaoMixes[i] = h
@@ -131,17 +131,17 @@ func OptimizedGenesisBeaconState(genesisTime uint64, preState state.BeaconState,
zeroHash := params.BeaconConfig().ZeroHash[:]
activeIndexRoots := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
- for i := 0; i < len(activeIndexRoots); i++ {
+ for i := range activeIndexRoots {
activeIndexRoots[i] = zeroHash
}
blockRoots := make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot)
- for i := 0; i < len(blockRoots); i++ {
+ for i := range blockRoots {
blockRoots[i] = zeroHash
}
stateRoots := make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot)
- for i := 0; i < len(stateRoots); i++ {
+ for i := range stateRoots {
stateRoots[i] = zeroHash
}

View File

@@ -17,7 +17,7 @@ func TestGenesisBeaconState_1000(t *testing.T) {
deposits := make([]*ethpb.Deposit, 300000)
var genesisTime uint64
eth1Data := &ethpb.Eth1Data{}
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(&deposits)
fuzzer.Fuzz(&genesisTime)
fuzzer.Fuzz(eth1Data)
@@ -40,7 +40,7 @@ func TestOptimizedGenesisBeaconState_1000(t *testing.T) {
preState, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{}
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(&genesisTime)
fuzzer.Fuzz(eth1Data)
fuzzer.Fuzz(preState)
@@ -60,7 +60,7 @@ func TestIsValidGenesisState_100000(_ *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
var chainStartDepositCount, currentTime uint64
- for i := 0; i < 100000; i++ {
+ for range 100000 {
fuzzer.Fuzz(&chainStartDepositCount)
fuzzer.Fuzz(&currentTime)
IsValidGenesisState(chainStartDepositCount, currentTime)

View File

@@ -21,7 +21,7 @@ func TestFuzzExecuteStateTransition_1000(t *testing.T) {
sb := &ethpb.SignedBeaconBlock{}
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(sb)
if sb.Block == nil || sb.Block.Body == nil {
@@ -45,7 +45,7 @@ func TestFuzzCalculateStateRoot_1000(t *testing.T) {
sb := &ethpb.SignedBeaconBlock{}
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(sb)
if sb.Block == nil || sb.Block.Body == nil {
@@ -68,7 +68,7 @@ func TestFuzzProcessSlot_1000(t *testing.T) {
require.NoError(t, err)
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
s, err := ProcessSlot(ctx, state)
if err != nil && s != nil {
@@ -86,7 +86,7 @@ func TestFuzzProcessSlots_1000(t *testing.T) {
slot := primitives.Slot(0)
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(&slot)
s, err := ProcessSlots(ctx, state, slot)
@@ -105,7 +105,7 @@ func TestFuzzprocessOperationsNoVerify_1000(t *testing.T) {
bb := &ethpb.BeaconBlock{}
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(bb)
if bb.Body == nil {
@@ -128,7 +128,7 @@ func TestFuzzverifyOperationLengths_10000(t *testing.T) {
bb := &ethpb.BeaconBlock{}
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 10000; i++ {
+ for range 10000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(bb)
if bb.Body == nil {
@@ -148,7 +148,7 @@ func TestFuzzCanProcessEpoch_10000(t *testing.T) {
require.NoError(t, err)
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 10000; i++ {
+ for range 10000 {
fuzzer.Fuzz(state)
time.CanProcessEpoch(state)
}
@@ -162,7 +162,7 @@ func TestFuzzProcessEpochPrecompute_1000(t *testing.T) {
require.NoError(t, err)
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
s, err := ProcessEpochPrecompute(ctx, state)
if err != nil && s != nil {
@@ -180,7 +180,7 @@ func TestFuzzProcessBlockForStateRoot_1000(t *testing.T) {
sb := &ethpb.SignedBeaconBlock{}
fuzzer := fuzz.NewWithSeed(0)
fuzzer.NilChance(0.1)
- for i := 0; i < 1000; i++ {
+ for range 1000 {
fuzzer.Fuzz(state)
fuzzer.Fuzz(sb)
if sb.Block == nil || sb.Block.Body == nil || sb.Block.Body.Eth1Data == nil {

View File

@@ -754,8 +754,7 @@ func BenchmarkProcessSlots_Capella(b *testing.B) {
var err error
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
st, err = transition.ProcessSlots(b.Context(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
@@ -768,8 +767,7 @@ func BenchmarkProcessSlots_Deneb(b *testing.B) {
var err error
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
st, err = transition.ProcessSlots(b.Context(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
@@ -782,8 +780,7 @@ func BenchmarkProcessSlots_Electra(b *testing.B) {
var err error
b.ResetTimer()
- for i := 0; i < b.N; i++ {
+ for b.Loop() {
st, err = transition.ProcessSlots(b.Context(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)

View File

@@ -307,7 +307,7 @@ func SlashValidator(
// ActivatedValidatorIndices determines the indices activated during the given epoch.
func ActivatedValidatorIndices(epoch primitives.Epoch, validators []*ethpb.Validator) []primitives.ValidatorIndex {
activations := make([]primitives.ValidatorIndex, 0)
- for i := 0; i < len(validators); i++ {
+ for i := range validators {
val := validators[i]
if val.ActivationEpoch <= epoch && epoch < val.ExitEpoch {
activations = append(activations, primitives.ValidatorIndex(i))
@@ -319,7 +319,7 @@ func ActivatedValidatorIndices(epoch primitives.Epoch, validators []*ethpb.Valid
// SlashedValidatorIndices determines the indices slashed during the given epoch.
func SlashedValidatorIndices(epoch primitives.Epoch, validators []*ethpb.Validator) []primitives.ValidatorIndex {
slashed := make([]primitives.ValidatorIndex, 0)
- for i := 0; i < len(validators); i++ {
+ for i := range validators {
val := validators[i]
maxWithdrawableEpoch := primitives.MaxEpoch(val.WithdrawableEpoch, epoch+params.BeaconConfig().EpochsPerSlashingsVector)
if val.WithdrawableEpoch == maxWithdrawableEpoch && val.Slashed {

View File

@@ -172,7 +172,7 @@ func TestSlashValidator_OK(t *testing.T) {
validatorCount := 100
registry := make([]*ethpb.Validator, 0, validatorCount)
balances := make([]uint64, 0, validatorCount)
- for i := 0; i < validatorCount; i++ {
+ for range validatorCount {
registry = append(registry, &ethpb.Validator{
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
@@ -226,7 +226,7 @@ func TestSlashValidator_Electra(t *testing.T) {
validatorCount := 100
registry := make([]*ethpb.Validator, 0, validatorCount)
balances := make([]uint64, 0, validatorCount)
- for i := 0; i < validatorCount; i++ {
+ for range validatorCount {
registry = append(registry, &ethpb.Validator{
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,

Some files were not shown because too many files have changed in this diff.