Compare commits

...

112 Commits

Author SHA1 Message Date
james-prysm
4a6f470dd2 Fix earliestAvailableSlot going backwards on custody increase 2025-12-16 11:09:57 -06:00
james-prysm
4e0bfada17 Add e2e test for earliestAvailableSlot regression 2025-12-16 11:09:44 -06:00
james-prysm
8a37575407 gaz 2025-12-15 13:03:25 -06:00
james-prysm
0b1d4c485d Merge branch 'develop' into fulu-e2e 2025-12-15 10:57:28 -08:00
james-prysm
0d04013443 more reversals removing e2e build 2025-12-15 12:56:28 -06:00
james-prysm
2fbcbea550 reverting changes adding e2e field params and custom build 2025-12-15 12:37:25 -06:00
terence
9fcc1a7a77 Guard KZG send with context cancellation (#16144)
Avoid sending KZG verification reqs when the caller context is already
canceled to prevent blocking on the channel
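A minimal sketch of the pattern, with illustrative names only (the real request and channel types live in Prysm's KZG batch verifier):
```go
package kzgdemo

import "context"

// kzgRequest stands in for the real verification request type (illustrative).
type kzgRequest struct{}

// trySendRequest enqueues a request only while the caller's context is still
// live, so a caller that has already timed out or been canceled never blocks
// the sender on the channel.
func trySendRequest(ctx context.Context, reqChan chan<- *kzgRequest, req *kzgRequest) error {
	select {
	case <-ctx.Done():
		return ctx.Err() // caller already gave up; skip the send entirely
	case reqChan <- req:
		return nil
	}
}
```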
2025-12-15 16:58:51 +00:00
james-prysm
5a49dcabb6 fixing test 2025-12-15 10:12:35 -06:00
james-prysm
d0580debf6 Merge branch 'develop' into fulu-e2e 2025-12-15 06:36:18 -08:00
Potuz
75dea214ac Do not error when indices have been computed (#16142)
If there is a context deadline updating the committee cache, but the
indices have been computed correctly, do not error out but rather return
the indices and log the error.
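Roughly the shape of the change, as a hedged sketch with invented names (the actual edit to `ActiveValidatorIndices` is visible in the helpers diff near the bottom of this page):
```go
package helpers

import "github.com/sirupsen/logrus"

var log = logrus.WithField("prefix", "helpers")

// activeIndicesSketch mirrors the idea described above: if refreshing the
// committee cache fails, e.g. because of a context deadline, but the indices
// were already computed, log the error and return the indices instead of
// failing the caller.
func activeIndicesSketch(indices []uint64, updateCache func() error) []uint64 {
	if err := updateCache(); err != nil {
		log.WithError(err).Error("Could not update committee cache")
	}
	return indices
}
```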
2025-12-13 17:36:06 +00:00
james-prysm
4374e709cb fixing state replay caused by REST api duties attester and sync committee endpoints (#16136)

**What type of PR is this?**

 Bug fix


**What does this PR do? Why is it needed?**

`s.Stater.StateBySlot` may trigger a state replay even when the requested
epoch is the current one, since it is meant for values already in the DB. If
we are in the current epoch we should instead get the head slot and use the
cache. The proposer duties endpoint already did this, but the other two
duties endpoints (attester and sync committee) did not. This PR aligns all
three and introduces a new `statebyepoch` helper that simply wraps this
approach.
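A minimal sketch of that approach, with all types and names invented for illustration (the real code uses Prysm's Stater and head-state accessors; the new helper is the `statebyepoch` wrapper mentioned above):

```go
package duties

import "context"

// Illustrative stand-ins; the real code uses Prysm's state and Stater types.
type (
	Slot        uint64
	Epoch       uint64
	BeaconState interface{}
)

type chain interface {
	HeadState(ctx context.Context) (BeaconState, error)
	CurrentEpoch() Epoch
}

type stater interface {
	StateBySlot(ctx context.Context, s Slot) (BeaconState, error)
}

const slotsPerEpoch = 32 // mainnet value, for illustration

// stateByEpoch sketches the helper described above: serve current-epoch
// requests from the (cached) head state and only replay from the database
// for past epochs, avoiding an expensive state replay on every duties call.
func stateByEpoch(ctx context.Context, c chain, s stater, epoch Epoch) (BeaconState, error) {
	if epoch >= c.CurrentEpoch() {
		return c.HeadState(ctx)
	}
	// Past epoch: fall back to replaying the state at the epoch's start slot.
	return s.StateBySlot(ctx, Slot(uint64(epoch)*slotsPerEpoch))
}
```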

I tested by running the Kurtosis config below, with and without the fix, to
confirm that the replays stop, the chain progresses, and the "upgraded to
fulu" log is not printed multiple times:

```
participants:
 # Super-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   count: 2
   supernode: true
   cl_extra_params:
     - --subscribe-all-subnets
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 # Full-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   validator_count: 63
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug
   validator_count: 13

additional_services:
 - dora
 - spamoor

spamoor_params:
 image: ethpandaops/spamoor:master
 max_mem: 4000
 spammers:
   - scenario: eoatx
     config:
       throughput: 200
   - scenario: blobs
     config:
       throughput: 20

network_params:
  fulu_fork_epoch: 2
  bpo_1_epoch: 8
  bpo_1_max_blobs: 21
  withdrawal_type: "0x02"
  preset: mainnet
  seconds_per_slot: 6

global_log_level: debug
```

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16135

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-12 23:18:22 +00:00
james-prysm
869a6586ff removing e2e field params based on feedback 2025-12-12 16:35:28 -06:00
Radosław Kapka
be300f80bd Static analyzer for httputil.HandleError calls (#16134)
**What type of PR is this?**

Tooling

**What does the PR do?**

Every call to `httputil.HandleError` must be followed by a `return`
statement. It's easy to miss this during reviews, so having a static
analyzer that enforces this will make our life easier.
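A rough sketch of such a check, assuming the usual `golang.org/x/tools/go/analysis` framework; the real analyzer registered as `//tools/analyzers/httperror` in the nogo config further down may be structured differently:
```go
package httperror

import (
	"go/ast"

	"golang.org/x/tools/go/analysis"
)

// Analyzer flags any call to httputil.HandleError whose next statement in the
// same block is not a return. Sketch only; details may differ from the real tool.
var Analyzer = &analysis.Analyzer{
	Name: "httperror",
	Doc:  "checks that httputil.HandleError calls are followed by a return",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		ast.Inspect(file, func(n ast.Node) bool {
			block, ok := n.(*ast.BlockStmt)
			if !ok {
				return true
			}
			for i, stmt := range block.List {
				expr, ok := stmt.(*ast.ExprStmt)
				if !ok {
					continue
				}
				call, ok := expr.X.(*ast.CallExpr)
				if !ok || !isHandleError(call) {
					continue
				}
				// Report when the call is the last statement or not followed by a return.
				if i == len(block.List)-1 || !isReturn(block.List[i+1]) {
					pass.Reportf(call.Pos(), "httputil.HandleError must be followed by a return statement")
				}
			}
			return true
		})
	}
	return nil, nil
}

func isHandleError(call *ast.CallExpr) bool {
	sel, ok := call.Fun.(*ast.SelectorExpr)
	if !ok || sel.Sel.Name != "HandleError" {
		return false
	}
	pkg, ok := sel.X.(*ast.Ident)
	return ok && pkg.Name == "httputil"
}

func isReturn(stmt ast.Stmt) bool {
	_, ok := stmt.(*ast.ReturnStmt)
	return ok
}
```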

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-12-12 21:38:09 +00:00
terence
096cba5b2d sync: fix KZG batch verifier deadlock on timeout (#16141)
`validateWithKzgBatchVerifier` could time out (12s). Once it times out,
because `resChan` is unbuffered, the verifier gets stuck at the following
line in `verifyKzgBatch`, waiting for someone to read the result from
`resChan`:
```
	for _, verifier := range kzgBatch {
		verifier.resChan <- verificationErr
	}
```
The fix is to make KZG batch verification non-blocking on timeouts by giving
each request's `resChan` a buffer of size 1.
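A minimal sketch of the fix, with illustrative types (the real per-request struct and error plumbing live in Prysm's sync package):
```go
package kzg

// request stands in for the per-request state held by the batch verifier;
// the important part is that resChan now has a buffer of one.
type request struct {
	resChan chan error
}

// newRequest allocates the result channel with capacity 1, so the verifier's
// "verifier.resChan <- verificationErr" send (quoted above) completes even if
// the requester already timed out and will never read the result.
func newRequest() *request {
	return &request{resChan: make(chan error, 1)}
}

// deliver mirrors the loop quoted above; with the one-slot buffer each send
// returns immediately instead of deadlocking the shared batch verifier.
func deliver(batch []*request, verificationErr error) {
	for _, r := range batch {
		r.resChan <- verificationErr
	}
}
```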
2025-12-12 17:17:40 +00:00
SashaMalysehko
d5127233e4 fix: missing return after version header check (#16126)
Ensure SubmitAttesterSlashingsV2 returns immediately when the
Eth-Consensus-Version header is missing. Without this early return the
handler calls version.FromString with an empty value and writes a second
JSON error to the response, producing invalid JSON and duplicating error
output. This change aligns the handler with the error-handling pattern
used in other endpoints that validate the version header.
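The corrected pattern, as a hedged sketch; `handleError` here is a local stand-in for Prysm's `httputil.HandleError`, and the handler body is illustrative:
```go
package handlers

import "net/http"

// handleError stands in for Prysm's httputil.HandleError (illustrative).
func handleError(w http.ResponseWriter, msg string, code int) {
	http.Error(w, msg, code)
}

// submitSlashings sketches the corrected flow: write exactly one JSON error
// and return immediately when the Eth-Consensus-Version header is missing,
// instead of falling through and emitting a second error body.
func submitSlashings(w http.ResponseWriter, r *http.Request) {
	versionHeader := r.Header.Get("Eth-Consensus-Version")
	if versionHeader == "" {
		handleError(w, "Eth-Consensus-Version header is required", http.StatusBadRequest)
		return // the missing return was the bug
	}
	// ... parse versionHeader and process the slashing as before ...
}
```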
2025-12-12 17:09:35 +00:00
Radosław Kapka
3d35cc20ec Use WriteStateFetchError in API handlers whenever possible (#16140)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

Calls to `Stater.StateBySlot` and `Stater.State` should be followed by
`shared.WriteStateFetchError` to provide the most robust error handling.
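A small sketch of that convention; `writeStateFetchError` is a stand-in for `shared.WriteStateFetchError`, whose exact signature is assumed here for illustration:
```go
package handlers

import "net/http"

// writeStateFetchError stands in for shared.WriteStateFetchError, which maps
// state-lookup failures (not found, unavailable, etc.) to the right HTTP
// status code.
func writeStateFetchError(w http.ResponseWriter, err error) {
	http.Error(w, err.Error(), http.StatusInternalServerError)
}

// getStateThing sketches the convention from the PR: every Stater.State /
// Stater.StateBySlot call is immediately followed by the shared error writer
// so all handlers report state-fetch failures consistently.
func getStateThing(w http.ResponseWriter, stateBySlot func(uint64) (interface{}, error)) {
	st, err := stateBySlot(0)
	if err != nil {
		writeStateFetchError(w, err)
		return
	}
	_ = st // ... use the state to build the response ...
}
```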

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-12 16:26:27 +00:00
Aarsh Shah
1e658530a7 revert https://github.com/OffchainLabs/prysm/pull/16100 (#16139)
This PR reverts https://github.com/OffchainLabs/prysm/pull/16100.

**What type of PR is this?**
Bug fix


**What does this PR do? Why is it needed?**
This PR reverts https://github.com/OffchainLabs/prysm/pull/16100 because that
PR deprecates mplex, but other client implementations only support mplex for
now.


**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-12 14:59:32 +00:00
Preston Van Loon
b360794c9c Update CHANGELOG.md for v7.1.0 release (#16127)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

Changelog

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-11 22:48:33 +00:00
Aarsh Shah
0fc9ab925a feat: add support for detecting and logging per address reachability via libp2p AutoNAT v2 (#16100)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**

This PR adds support for detecting and logging per address reachability
via libp2p AutoNAT v2. See
https://github.com/libp2p/go-libp2p/releases/tag/v0.42.0 for details.
This PR also upgrades Prysm to libp2p v0.42.0

**Which issues(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16098

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-11 11:56:52 +00:00
satushh
dda5ee3334 Graffiti proposal design doc (#15983)

**What type of PR is this?**

Design Doc

**What does this PR do? Why is it needed?**

This PR adds a design doc for graffiti. The idea is to populate it
judiciously so that we get proper information about the EL, the CL, and
their corresponding versions, while remaining flexible enough to
accommodate user input.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-10 22:57:40 +00:00
Manu NALEPA
14c67376c3 Add test requirement to PULL_REQUEST_TEMPLATE.md (#16123)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
This pull request modifies `PULL_REQUEST_TEMPLATE.md` to ensure the
developer has checked that their PR works as expected.

Some contributors push changes without ever running the modified client to
see whether the changes work as expected.

Running the modified client would prevent avoidable back-and-forth between
the contributor and the reviewers.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-12-10 17:40:29 +00:00
Preston Van Loon
9c8b68a66d Update CHANGELOG.md for v7.0.1 release (#16107)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

**Other notes for review**

Did not delete the fragments as they are still needed to generate the v7.1.0
release notes. This release consists entirely of cherry-picks that will also
be included in v7.1.0.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-10 17:07:38 +00:00
Potuz
a3210157e2 Fix TOCTOU race validating attestations (#16105)
A TOCTOU issue was reported by EF security in which two attestations
being validated at the same time may result in both of them being
forwarded. The spec says that we need to forward only the first one.
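One way to close such a window, as an illustrative sketch (not necessarily how the PR implements it): make the seen-check and the seen-mark a single atomic operation.
```go
package gossip

import "sync"

// seenTracker sketches the idea: the "have we seen this attestation?" check
// and the "mark it as seen" write happen in a single atomic LoadOrStore, so
// when two copies of the same attestation are validated concurrently exactly
// one caller observes it as new.
type seenTracker struct {
	seen sync.Map // key: attestation identity (e.g. a digest), value: struct{}
}

// markFirst reports true only for the first caller presenting a given key;
// only that caller should forward the attestation on gossip.
func (t *seenTracker) markFirst(key [32]byte) bool {
	_, loaded := t.seen.LoadOrStore(key, struct{}{})
	return !loaded
}
```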
2025-12-09 19:26:05 +00:00
satushh
1536d59e30 Remove unnecessary copy in Eth1DataHasEnoughSupport (#16118)

**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

- Remove unnecessary `Copy()` call in `Eth1DataHasEnoughSupport`
- `data.Copy()` was called on every iteration of the vote counting loop,
even though `AreEth1DataEqual` only reads the data and never mutates it.
- Additionally, `Eth1DataVotes()` already returns copies of all votes,
so state is protected regardless.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-09 19:02:36 +00:00
satushh
11e46a4560 Optimise for loop of MigrateToCold (#16101)

**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

The for loop in the `MigrateToCold` function was brute force: it visited
every single slot, even though it only needs the slots that are multiples of
`slotsPerArchivedPoint`.

```
for slot := oldFSlot; slot < fSlot; slot++ {
  ...
   if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
```
There is no need to do the modulo check for every single slot.
We can find the correct starting point and then jump by
`slotsPerArchivedPoint` at a time.
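Roughly the arithmetic of the optimization, as a sketch with slots simplified to `uint64` (the real loop operates on `primitives.Slot` inside `MigrateToCold`):
```go
package stategen

// archivedPoints sketches the optimization described above: instead of
// iterating every slot and testing slot%slotsPerArchivedPoint == 0, start at
// the first archived point at or after oldFSlot and step by
// slotsPerArchivedPoint directly.
func archivedPoints(oldFSlot, fSlot, slotsPerArchivedPoint uint64) []uint64 {
	if slotsPerArchivedPoint == 0 {
		return nil
	}
	// Round oldFSlot up to the next multiple of slotsPerArchivedPoint,
	// skipping slot 0, which the original loop also excluded.
	start := ((oldFSlot + slotsPerArchivedPoint - 1) / slotsPerArchivedPoint) * slotsPerArchivedPoint
	if start == 0 {
		start = slotsPerArchivedPoint
	}
	var points []uint64
	for slot := start; slot < fSlot; slot += slotsPerArchivedPoint {
		points = append(points, slot)
	}
	return points
}
```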

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-12-09 17:15:52 +00:00
Snezhkko
5a2e51b894 fix(rpc): incorrect constructor return type (#16084)
The constructor `NewStateRootNotFoundError` incorrectly returned
`StateNotFoundError`. This prevented handlers that rely on
`errors.As(err, *lookup.StateRootNotFoundError)` from matching and mapping
the error to HTTP 404. The function now returns `StateRootNotFoundError`
and constructs that type, restoring the intended behavior for
“state root not found” cases.
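A self-contained sketch of why the wrong return type broke the handlers; the types are simplified stand-ins for the real `lookup` errors:
```go
package lookup

import "errors"

// Simplified stand-ins for the two error types involved.
type StateNotFoundError struct{ msg string }

func (e *StateNotFoundError) Error() string { return e.msg }

type StateRootNotFoundError struct{ msg string }

func (e *StateRootNotFoundError) Error() string { return e.msg }

// NewStateRootNotFoundError now constructs and returns the matching type.
// Before the fix it returned a StateNotFoundError, so the errors.As check in
// the HTTP handlers never matched and the 404 mapping was skipped.
func NewStateRootNotFoundError(msg string) *StateRootNotFoundError {
	return &StateRootNotFoundError{msg: msg}
}

// isStateRootNotFound shows the handler-side check that the fix restores.
func isStateRootNotFound(err error) bool {
	var target *StateRootNotFoundError
	return errors.As(err, &target)
}
```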

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-09 13:56:00 +00:00
Potuz
d20ec4c7a1 Track the dependent root of the latest finalized checkpoint (#16103)
This PR adds the dependent root of the latest finalized checkpoint to
forkchoice, since this node will typically be pruned upon finalization.
2025-12-08 16:16:32 +00:00
terence
7a70abbd15 Add --ignore-unviable-attestations and deprecate --disable-last-epoch-targets (#16094)
This PR introduces the flag `--ignore-unviable-attestations` (replacing and
deprecating `--disable-last-epoch-targets`) to drop attestations whose
target state is not viable; by default such attestations are still processed
unless the flag is explicitly enabled.
2025-12-05 15:03:04 +00:00
Potuz
a2b84c9320 Use head state in more cases (#16095)
The head state is guaranteed to have the same shuffling and active
indices if the previous dependent root coincides with the target
checkpoint's in some cases.
2025-12-05 03:44:03 +00:00
terence
edef17e41d Add arrival latency tracking for data column sidecars (#16099)
We have this for blob sidecars but not for data columns
2025-12-04 21:28:02 +00:00
Manu NALEPA
85c5d31b5b blobsDataFromStoredDataColumns: Ask the user to use the --supernode flag and shorten the error message. (#16097)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
`blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode`
flag and shorten the error message.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-04 15:54:13 +00:00
james-prysm
0b47ac51f7 fixing starting post deneb 2025-12-03 12:09:56 -06:00
james-prysm
75916243f2 adding support to start from fulu fork 2025-12-03 09:40:47 -06:00
james-prysm
b499186483 fixing evaluator to match others 2025-12-03 08:50:37 -06:00
Manu NALEPA
fa056c2d21 Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG (#16087)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Move the "Not enough connected peers" (for a given subnet) from WARN to
DEBUG

**Rationale:**
The "Not enough connected peers" log is (potentially) printed every 5 minutes.
Every 5 minutes, the BN checks whether, for a given subnet, the actual peer
count is at least the required minimum.
If not, this log is printed.

When connected validators are selected as aggregators for the next epoch,
the BN needs to subscribe to the corresponding attestation subnets and find
new peers.
If the "5 min ticker" ticks right after the beacon node has subscribed (but
before it has had time to find peers), this warning is displayed even though
the slot for which the validator is selected as an aggregator is still
minutes away.

For this reason, this log is moved from WARN to DEBUG.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-03 11:07:24 +00:00
james-prysm
cb28503b5f fixing e2e 2025-12-02 12:25:14 -06:00
kasey
61de11e2c4 Backfill data columns (#15580)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Adds data column support to backfill.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-12-02 15:19:32 +00:00
james-prysm
48fd9509ef fixing import issue after merge 2025-12-02 08:47:32 -06:00
james-prysm
316f0932ff Merge branch 'develop' into fulu-e2e 2025-12-02 06:36:55 -08:00
james-prysm
7f321a5835 Update changelog/james-prysm_fulu-e2e.md
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-02 08:22:24 -06:00
james-prysm
1b302219a8 Update runtime/interop/genesis.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-02 08:18:53 -06:00
james-prysm
e36c7ec26e Update testing/endtoend/components/eth1/transactions.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-02 08:18:45 -06:00
james-prysm
8df746cead Update testing/endtoend/components/eth1/transactions.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-02 08:18:34 -06:00
james-prysm
4294025eed Update ssz_proto_library.bzl
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-12-01 21:59:53 -08:00
Manu NALEPA
2773bdef89 Remove NUMBER_OF_COLUMNS and MAX_CELLS_IN_EXTENDED_MATRIX configuration. (#16073)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
This pull request removes `NUMBER_OF_COLUMNS` and
`MAX_CELLS_IN_EXTENDED_MATRIX` configuration.

**Other notes for review**
Please read commit by commit, with commit messages.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-29 09:30:54 +00:00
Manu NALEPA
2a23dc7f4a Improve logs (#16075)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
- Added log prefix to the `genesis` package.
- Added log prefix to the `params` package.
- `WithGenesisValidatorsRoot`: Use camelCase for log field param.
- Move `Origin checkpoint found in db` log from WARN to INFO, since it
is the expected behaviour.

**Other notes for review**
Please read commit by commit

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-11-28 14:34:02 +00:00
james-prysm
41e336d373 Merge branch 'develop' into fulu-e2e 2025-11-25 12:43:24 -08:00
james-prysm
86cb003b8e removing unneeded change 2025-11-20 09:16:28 -06:00
james-prysm
853d64f8f9 fix gosec 2025-11-19 22:46:33 -06:00
james-prysm
6dd11aaa2d found better solution just removing the library, updated changelog and removed transition code 2025-11-19 22:26:46 -06:00
james-prysm
f1d4f6d136 updating third party gosec ignore 2025-11-19 19:40:15 -06:00
james-prysm
c5690a7eb6 Merge branch 'develop' into fulu-e2e 2025-11-19 16:17:12 -08:00
james-prysm
edf60e6e33 gaz 2025-11-19 18:16:53 -06:00
james-prysm
ad022eead9 resolving more build issues 2025-11-19 18:15:59 -06:00
james-prysm
409b48940d gaz 2025-11-19 16:53:06 -06:00
james-prysm
64324211b8 attempting to fix ci 2025-11-19 16:50:51 -06:00
james-prysm
5531014a75 gaz 2025-11-19 16:33:54 -06:00
james-prysm
80dc3d953d trying to fix gobuild by moving fuzzyvm to third party build 2025-11-19 16:30:13 -06:00
james-prysm
d175bd93ba Merge branch 'develop' into fulu-e2e 2025-11-19 14:05:22 -08:00
james-prysm
4419a36cc2 fixing and updating go ethereum to 1.16.7 2025-11-19 16:04:55 -06:00
james-prysm
0176706fad Merge branch 'develop' into fulu-e2e 2025-11-19 08:42:42 -08:00
james-prysm
df307544c9 Merge branch 'develop' into fulu-e2e 2025-11-17 13:31:24 -08:00
james-prysm
10821dc451 Merge branch 'develop' into fulu-e2e 2025-10-07 14:54:07 -05:00
james-prysm
4706b758dd Merge branch 'develop' into fulu-e2e 2025-10-07 14:48:06 -05:00
james-prysm
9f59b78cec updating dependency 2025-10-07 11:43:46 -05:00
james-prysm
75d8a2bdb6 Merge branch 'develop' into fulu-e2e 2025-10-07 11:35:33 -05:00
james-prysm
ca4ae35f6e fixing comment 2025-09-26 10:50:42 -05:00
james-prysm
3245b884cf Merge branch 'develop' into fulu-e2e 2025-09-26 10:41:31 -05:00
james-prysm
00e66ccab6 more review feedback 2025-09-26 10:40:39 -05:00
james-prysm
ed1885be8a fixing log 2025-09-26 10:35:34 -05:00
james-prysm
fad101c4a0 Update testing/endtoend/components/eth1/transactions.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-26 10:32:45 -05:00
james-prysm
1c0107812b some review 2025-09-26 09:55:03 -05:00
james-prysm
fd710886c8 Merge branch 'develop' into fulu-e2e 2025-09-26 08:34:36 -05:00
james-prysm
20d6bacfe9 Update testing/endtoend/components/eth1/transactions.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-26 08:30:11 -05:00
james-prysm
1140141aaa Merge branch 'develop' into fulu-e2e 2025-09-25 13:26:53 -05:00
james-prysm
7402efad2d fixing even more linting 2025-09-25 11:56:26 -05:00
james-prysm
f6cd1d9f7f more linting 2025-09-25 11:43:59 -05:00
james-prysm
a4751b057d even more linting issues 2025-09-25 11:29:21 -05:00
james-prysm
eb5e2a5094 Merge branch 'develop' into fulu-e2e 2025-09-25 10:59:50 -05:00
james-prysm
626830ff9c more linting fixes 2025-09-25 10:59:30 -05:00
james-prysm
6f20ac57d4 attempting to fix linting 2025-09-25 10:07:54 -05:00
james-prysm
cf4095cf3b fixing some linting 2025-09-25 09:19:35 -05:00
james-prysm
39b3dba946 Merge branch 'develop' into fulu-e2e 2025-09-25 08:50:34 -05:00
james-prysm
fa7e0b6e50 Merge branch 'develop' into fulu-e2e 2025-09-24 14:43:28 -05:00
james-prysm
4aed113dea Merge branch 'develop' into fulu-e2e 2025-09-24 13:51:01 -05:00
james-prysm
20ffd4c523 fixing linting 2025-09-24 13:50:51 -05:00
james-prysm
1cd3c3cc2d upgrading size to enormous 2025-09-24 12:55:42 -05:00
james-prysm
15e2e6b85e trying to fix remaining e2e issues 2025-09-24 12:54:36 -05:00
james-prysm
203a098076 adding go ethereum update to changelog 2025-09-23 16:45:35 -05:00
james-prysm
9e5b7b00f3 Merge branch 'develop' into fulu-e2e 2025-09-23 16:30:32 -05:00
james-prysm
1fcf3c7f30 adding new e2e type for build so that we can control the end to end tests 2025-09-23 16:30:11 -05:00
james-prysm
ccf81ed33c fixing test 2025-09-23 14:45:11 -05:00
james-prysm
382934fd30 Merge branch 'develop' into fulu-e2e 2025-09-23 12:56:58 -05:00
james-prysm
c31f0435c6 adjusting transactions to avoid execution client issue 2025-09-23 11:06:10 -05:00
james-prysm
59dc0263f2 Merge branch 'develop' into fulu-e2e 2025-09-22 10:20:53 -05:00
james-prysm
de7ea1f72c Merge branch 'develop' into fulu-e2e 2025-09-19 16:37:29 -05:00
james-prysm
2fc7a5af82 Merge branch 'develop' into fulu-e2e 2025-09-19 10:55:29 -05:00
james-prysm
76d137198b Merge branch 'develop' into fulu-e2e 2025-09-18 15:32:51 -05:00
james-prysm
05d8ce15af Merge branch 'develop' into fulu-e2e 2025-09-17 14:07:05 -05:00
james-prysm
2715cd3fc7 Merge branch 'develop' into fulu-e2e 2025-09-11 14:35:58 -05:00
james-prysm
7aa79a420c Merge branch 'develop' into fulu-e2e 2025-09-08 15:43:41 -05:00
james-prysm
b3dc7e4afb changelog 2025-09-08 15:26:44 -05:00
james-prysm
db794db4ee gofmt 2025-09-08 15:22:13 -05:00
james-prysm
f9bc42eed4 go mod tidy 2025-09-08 15:16:00 -05:00
james-prysm
f97082df32 fixing missing fulu from evaluators 2025-09-08 09:14:48 -05:00
james-prysm
0d55b61b3d adding fulu support for validator 2025-09-05 14:10:00 -05:00
Preston Van Loon
d487e5c109 Fixed some build issues for gnark. There may be more but Prysm builds. 2025-09-05 12:10:55 -05:00
james-prysm
7ffbf77b87 updating dependency wip 2025-09-04 15:15:58 -05:00
james-prysm
f103267f10 updating genesis file 2025-09-03 23:29:45 -05:00
james-prysm
6f0ffa2a20 updating goethereum to temp use fusaka devnet 3 and updating genesis helpers 2025-09-03 14:11:15 -05:00
james-prysm
bcb5add346 updating goethereum version 2025-09-03 11:35:36 -05:00
james-prysm
a43fc50015 wip 2025-08-27 13:56:31 -05:00
251 changed files with 14232 additions and 1757 deletions

View File

@@ -34,4 +34,5 @@ Fixes #
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).

View File

@@ -193,6 +193,7 @@ nogo(
"//tools/analyzers/featureconfig:go_default_library",
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/httperror:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",

View File

@@ -4,6 +4,91 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
Release highlights:
- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration to custody enough data columns to reconstruct blobs. Use flag `--semi-supernode` to custody at least 50% of the data columns.
- Critical fixes in attestation processing.
A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
### Added
- add fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirement while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)
### Changed
- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Change the logging behaviour of Updated fee recipients to only log count of validators at Debug level and all validator indices at Trace level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Removed
- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
### Fixed
- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check the JWT secret length is exactly 256 bits (32 bytes) as per Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This improves metrics cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- Fixes E2E tests so they can start from the Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08
This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.
A post mortem doc with full details is expected to be published later this week.
### Changed
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Fixed
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10
This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or later release prior to the fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.

View File

@@ -22,10 +22,7 @@ import (
// The caller of this function must have a lock on forkchoice.
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch {
return nil
}
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
if c.Epoch < headEpoch || c.Epoch == 0 {
return nil
}
// Only use head state if the head state is compatible with the target checkpoint.
@@ -33,11 +30,11 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
if err != nil {
return nil
}
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch)
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch-1)
if err != nil {
return nil
}
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch)
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch-1)
if err != nil {
return nil
}
@@ -53,7 +50,11 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
}
return st
}
// Otherwise we need to advance the head state to the start of the target epoch.
// At this point we can only have c.Epoch > headEpoch.
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
// Advance the head state to the start of the target epoch.
// This point can only be reached if c.Root == headRoot and c.Epoch > headEpoch.
slot, err := slots.EpochStart(c.Epoch)
if err != nil {

View File

@@ -181,6 +181,123 @@ func TestService_GetRecentPreState(t *testing.T) {
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
}
func TestService_GetRecentPreState_Epoch_0(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Old_Checkpoint(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'U'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 30 <-- 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 30, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 31, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'U'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'V'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()

View File

@@ -134,7 +134,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -306,7 +306,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityChecker, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
@@ -634,9 +634,7 @@ func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldpa
return nil, nil
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(expected)) > numberOfColumns {
if len(expected) > fieldparams.NumberOfColumns {
return nil, errMaxDataColumnsExceeded
}
@@ -818,10 +816,9 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices any = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missing))
missingIndicesCount := len(missing)
if missingIndicesCount < numberOfColumns {
if missingIndicesCount < fieldparams.NumberOfColumns {
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}

View File

@@ -2495,7 +2495,8 @@ func TestMissingBlobIndices(t *testing.T) {
}
func TestMissingDataColumnIndices(t *testing.T) {
countPlusOne := params.BeaconConfig().NumberOfColumns + 1
const countPlusOne = fieldparams.NumberOfColumns + 1
tooManyColumns := make(map[uint64]bool, countPlusOne)
for i := range countPlusOne {
tooManyColumns[uint64(i)] = true

View File

@@ -39,8 +39,8 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -69,7 +69,7 @@ type SlashingReceiver interface {
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error {
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
// Return early if the block is blacklisted
@@ -242,7 +242,7 @@ func (s *Service) validateExecutionAndConsensus(
return postState, isValidPayload, nil
}
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block blocks.ROBlock) (time.Duration, error) {
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityChecker, block blocks.ROBlock) (time.Duration, error) {
var err error
start := time.Now()
if avs != nil {
@@ -332,7 +332,7 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear ,transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()

View File

@@ -521,6 +521,13 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
return 0, 0, errors.Wrap(err, "update custody info")
}
log.WithFields(logrus.Fields{
"earliestAvailableSlot": earliestAvailableSlot,
"custodyGroupCount": actualCustodyGroupCount,
"inputSlot": slot,
"targetCustodyGroups": targetCustodyGroupCount,
}).Info("Updated custody info in database")
if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,

View File

@@ -603,7 +603,6 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
@@ -611,7 +610,6 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()

View File

@@ -275,7 +275,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityStore) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityChecker) error {
if s.State == nil {
return ErrNilState
}
@@ -305,7 +305,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityStore) error {
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityChecker) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}

View File

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
voteCount := uint64(0)
for _, vote := range beaconState.Eth1DataVotes() {
if AreEth1DataEqual(vote, data.Copy()) {
if AreEth1DataEqual(vote, data) {
voteCount++
}
}

View File

@@ -152,7 +152,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
}
if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
return nil, errors.Wrap(err, "could not update committee cache")
log.WithError(err).Error("Could not update committee cache")
}
return indices, nil

View File

@@ -5,6 +5,7 @@ import (
"math"
"slices"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -96,8 +97,7 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := cfg.NumberOfColumns
numberOfColumns := uint64(fieldparams.NumberOfColumns)
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
columns := make([]uint64, 0, columnsPerGroup)
@@ -112,8 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {

View File

@@ -30,7 +30,6 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.NumberOfColumns = 128
config.NumberOfCustodyGroups = 64
params.OverrideBeaconConfig(config)

View File

@@ -2,6 +2,7 @@ package peerdas
import (
"encoding/binary"
"maps"
"sync"
"github.com/ethereum/go-ethereum/p2p/enode"
@@ -107,3 +108,102 @@ func computeInfoCacheKey(nodeID enode.ID, custodyGroupCount uint64) [nodeInfoCac
return key
}
// ColumnIndices represents a set of column indices. This could be the set of indices that a node is required to custody,
// the set that a peer custodies, missing indices for a given block, indices that are present on disk, etc.
type ColumnIndices map[uint64]struct{}
// Has returns true if the index is present in the ColumnIndices.
func (ci ColumnIndices) Has(index uint64) bool {
_, ok := ci[index]
return ok
}
// Count returns the number of indices present in the ColumnIndices.
func (ci ColumnIndices) Count() int {
return len(ci)
}
// Set sets the index in the ColumnIndices.
func (ci ColumnIndices) Set(index uint64) {
ci[index] = struct{}{}
}
// Unset removes the index from the ColumnIndices.
func (ci ColumnIndices) Unset(index uint64) {
delete(ci, index)
}
// Copy creates a copy of the ColumnIndices.
func (ci ColumnIndices) Copy() ColumnIndices {
newCi := make(ColumnIndices, len(ci))
maps.Copy(newCi, ci)
return newCi
}
// Intersection returns a new ColumnIndices that contains only the indices that are present in both ColumnIndices.
func (ci ColumnIndices) Intersection(other ColumnIndices) ColumnIndices {
result := make(ColumnIndices)
for index := range ci {
if other.Has(index) {
result.Set(index)
}
}
return result
}
// Merge mutates the receiver so that any index that is set in either of
// the two ColumnIndices is set in the receiver after the function finishes.
// It does not mutate the other ColumnIndices given as a function argument.
func (ci ColumnIndices) Merge(other ColumnIndices) {
for index := range other {
ci.Set(index)
}
}
// ToMap converts a ColumnIndices into a map[uint64]struct{}.
// In the future ColumnIndices may be changed to a bit map, so using
// ToMap will ensure forwards-compatibility.
func (ci ColumnIndices) ToMap() map[uint64]struct{} {
return ci.Copy()
}
// ToSlice converts a ColumnIndices into a slice of uint64 indices.
func (ci ColumnIndices) ToSlice() []uint64 {
indices := make([]uint64, 0, len(ci))
for index := range ci {
indices = append(indices, index)
}
return indices
}
// NewColumnIndicesFromSlice creates a ColumnIndices from a slice of uint64.
func NewColumnIndicesFromSlice(indices []uint64) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for _, index := range indices {
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndicesFromMap creates a ColumnIndices from a map[uint64]bool. This kind of map
// is used in several places in peerdas code. Converting from this map type to ColumnIndices
// will allow us to move ColumnIndices underlying type to a bitmap in the future and avoid
// lots of loops for things like intersections/unions or copies.
func NewColumnIndicesFromMap(indices map[uint64]bool) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for index, set := range indices {
if !set {
continue
}
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndices creates an empty ColumnIndices.
// In the future ColumnIndices may change from a reference type to a value type,
// so using this constructor will ensure forwards-compatibility.
func NewColumnIndices() ColumnIndices {
return make(ColumnIndices)
}
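A brief usage sketch of the ColumnIndices helpers added above; the indices are arbitrary and chosen for illustration only.

```
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
)

func main() {
	// Two small sets of column indices.
	required := peerdas.NewColumnIndicesFromSlice([]uint64{1, 5, 9})
	stored := peerdas.NewColumnIndicesFromSlice([]uint64{5, 9, 12})

	// Indices that are both required and already stored.
	have := required.Intersection(stored)
	fmt.Println(have.Count(), have.Has(5), have.Has(1)) // 2 true false

	// Merge mutates the receiver; `stored` is left untouched.
	required.Merge(stored)
	fmt.Println(required.Count()) // 4

	// ToSlice/ToMap shield callers from a possible future move to a bitmap.
	fmt.Println(len(required.ToSlice()), len(stored.ToMap())) // 4 3
}
```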

View File

@@ -25,3 +25,10 @@ func TestInfo(t *testing.T) {
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestNewColumnIndicesFromMap(t *testing.T) {
t.Run("nil map", func(t *testing.T) {
ci := peerdas.NewColumnIndicesFromMap(nil)
require.Equal(t, 0, ci.Count())
})
}

View File

@@ -33,8 +33,7 @@ func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCou
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
numberOfColumns := params.BeaconConfig().NumberOfColumns
if sidecar.Index >= numberOfColumns {
if sidecar.Index >= fieldparams.NumberOfColumns {
return ErrIndexTooLarge
}

View File

@@ -281,8 +281,11 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.B) {
const blobCount = 12
numberOfColumns := int64(params.BeaconConfig().NumberOfColumns)
const (
blobCount = 12
numberOfColumns = fieldparams.NumberOfColumns
)
err := kzg.Start()
require.NoError(b, err)

View File

@@ -26,7 +26,7 @@ var (
func MinimumColumnCountToReconstruct() uint64 {
// If the number of columns is odd, then we need total / 2 + 1 columns to reconstruct.
// If the number of columns is even, then we need total / 2 columns to reconstruct.
return (params.BeaconConfig().NumberOfColumns + 1) / 2
return (fieldparams.NumberOfColumns + 1) / 2
}
// MinimumCustodyGroupCountToReconstruct returns the minimum number of custody groups needed to
@@ -34,10 +34,11 @@ func MinimumColumnCountToReconstruct() uint64 {
// custody groups and columns, making it future-proof if these values change.
// Returns an error if the configuration values are invalid (zero or would cause division by zero).
func MinimumCustodyGroupCountToReconstruct() (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
// Validate configuration values
if cfg.NumberOfColumns == 0 {
if numberOfColumns == 0 {
return 0, errors.New("NumberOfColumns cannot be zero")
}
if cfg.NumberOfCustodyGroups == 0 {
@@ -47,13 +48,13 @@ func MinimumCustodyGroupCountToReconstruct() (uint64, error) {
minimumColumnCount := MinimumColumnCountToReconstruct()
// Calculate how many columns each custody group represents
columnsPerGroup := cfg.NumberOfColumns / cfg.NumberOfCustodyGroups
columnsPerGroup := numberOfColumns / cfg.NumberOfCustodyGroups
// If there are more groups than columns (columnsPerGroup = 0), this is an invalid configuration
// for reconstruction purposes as we cannot determine a meaningful custody group count
if columnsPerGroup == 0 {
return 0, errors.Errorf("invalid configuration: NumberOfCustodyGroups (%d) exceeds NumberOfColumns (%d)",
cfg.NumberOfCustodyGroups, cfg.NumberOfColumns)
cfg.NumberOfCustodyGroups, numberOfColumns)
}
// Use ceiling division to ensure we have enough groups to cover the minimum columns
@@ -285,7 +286,8 @@ func ReconstructBlobSidecars(block blocks.ROBlock, verifiedDataColumnSidecars []
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
const numberOfColumns = fieldparams.NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
@@ -327,8 +329,6 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
@@ -347,7 +347,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
return nil, nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")

View File
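To make the reconstruction math above concrete, here is a hand-worked sketch of the same ceiling-division logic. minimumGroups is a hypothetical helper written only for this example, assuming the compile-time constant of 128 columns and the group counts exercised by the test table below.

```
package main

import "fmt"

// minimumGroups mirrors the math in MinimumCustodyGroupCountToReconstruct:
// half of the columns (rounded up) are needed, and the group count is computed
// with ceiling division over columns-per-group.
func minimumGroups(numberOfColumns, numberOfCustodyGroups uint64) uint64 {
	minimumColumns := (numberOfColumns + 1) / 2 // 64 for 128 columns
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	return (minimumColumns + columnsPerGroup - 1) / columnsPerGroup
}

func main() {
	fmt.Println(minimumGroups(128, 128)) // 64 (1 column per group)
	fmt.Println(minimumGroups(128, 64))  // 32 (2 columns per group)
	fmt.Println(minimumGroups(128, 32))  // 16 (4 columns per group)
}
```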

@@ -17,41 +17,9 @@ import (
)
func TestMinimumColumnsCountToReconstruct(t *testing.T) {
testCases := []struct {
name string
numberOfColumns uint64
expected uint64
}{
{
name: "numberOfColumns=128",
numberOfColumns: 128,
expected: 64,
},
{
name: "numberOfColumns=129",
numberOfColumns: 129,
expected: 65,
},
{
name: "numberOfColumns=130",
numberOfColumns: 130,
expected: 65,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set the total number of columns.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = tc.numberOfColumns
params.OverrideBeaconConfig(cfg)
// Compute the minimum number of columns needed to reconstruct.
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, tc.expected, actual)
})
}
const expected = uint64(64)
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, expected, actual)
}
func TestReconstructDataColumnSidecars(t *testing.T) {
@@ -200,7 +168,6 @@ func TestReconstructBlobSidecars(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 3
numberOfColumns := params.BeaconConfig().NumberOfColumns
roBlock, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
@@ -236,7 +203,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
require.NoError(t, err)
// Flatten proofs.
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
cellProofs := make([][]byte, 0, blobCount*fieldparams.NumberOfColumns)
for _, proofs := range inputProofsPerBlob {
for _, proof := range proofs {
cellProofs = append(cellProofs, proof[:])
@@ -428,13 +395,12 @@ func TestReconstructBlobs(t *testing.T) {
}
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
t.Run("mismatched blob and proof counts", func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Create one blob but proofs for two blobs
blobs := [][]byte{{}}
@@ -447,7 +413,6 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 2
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)

View File

@@ -3,16 +3,18 @@ package peerdas
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
func TestSemiSupernodeCustody(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
cfg.NumberOfColumns = 128
params.OverrideBeaconConfig(cfg)
// Create a test node ID
@@ -34,8 +36,8 @@ func TestSemiSupernodeCustody(t *testing.T) {
// Verify the columns are valid (within 0-127 range)
for columnIndex := range custodyColumns {
if columnIndex >= cfg.NumberOfColumns {
t.Fatalf("Invalid column index %d, should be less than %d", columnIndex, cfg.NumberOfColumns)
if columnIndex >= numberOfColumns {
t.Fatalf("Invalid column index %d, should be less than %d", columnIndex, numberOfColumns)
}
}
})
@@ -75,33 +77,23 @@ func TestSemiSupernodeCustody(t *testing.T) {
func TestMinimumCustodyGroupCountToReconstruct(t *testing.T) {
tests := []struct {
name string
numberOfColumns uint64
numberOfGroups uint64
expectedResult uint64
numberOfGroups uint64
expectedResult uint64
}{
{
name: "Standard 1:1 ratio (128 columns, 128 groups)",
numberOfColumns: 128,
numberOfGroups: 128,
expectedResult: 64, // Need half of 128 groups
numberOfGroups: 128,
expectedResult: 64, // Need half of 128 groups
},
{
name: "2 columns per group (128 columns, 64 groups)",
numberOfColumns: 128,
numberOfGroups: 64,
expectedResult: 32, // Need 64 columns, which is 32 groups (64/2)
numberOfGroups: 64,
expectedResult: 32, // Need 64 columns, which is 32 groups (64/2)
},
{
name: "4 columns per group (128 columns, 32 groups)",
numberOfColumns: 128,
numberOfGroups: 32,
expectedResult: 16, // Need 64 columns, which is 16 groups (64/4)
},
{
name: "Odd number requiring ceiling division (100 columns, 30 groups)",
numberOfColumns: 100,
numberOfGroups: 30,
expectedResult: 17, // Need 50 columns, 3 columns per group (100/30), ceiling(50/3) = 17
numberOfGroups: 32,
expectedResult: 16, // Need 64 columns, which is 16 groups (64/4)
},
}
@@ -109,7 +101,6 @@ func TestMinimumCustodyGroupCountToReconstruct(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfColumns = tt.numberOfColumns
cfg.NumberOfCustodyGroups = tt.numberOfGroups
params.OverrideBeaconConfig(cfg)
@@ -121,22 +112,9 @@ func TestMinimumCustodyGroupCountToReconstruct(t *testing.T) {
}
func TestMinimumCustodyGroupCountToReconstruct_ErrorCases(t *testing.T) {
t.Run("Returns error when NumberOfColumns is zero", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfColumns = 0
cfg.NumberOfCustodyGroups = 128
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
require.Equal(t, true, err.Error() == "NumberOfColumns cannot be zero")
})
t.Run("Returns error when NumberOfCustodyGroups is zero", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfColumns = 128
cfg.NumberOfCustodyGroups = 0
params.OverrideBeaconConfig(cfg)
@@ -148,7 +126,6 @@ func TestMinimumCustodyGroupCountToReconstruct_ErrorCases(t *testing.T) {
t.Run("Returns error when NumberOfCustodyGroups exceeds NumberOfColumns", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfColumns = 128
cfg.NumberOfCustodyGroups = 256
params.OverrideBeaconConfig(cfg)

View File

@@ -102,11 +102,13 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
const numberOfColumns = uint64(fieldparams.NumberOfColumns)
if len(cellsPerBlob) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, params.BeaconConfig().NumberOfColumns)
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
@@ -115,9 +117,8 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
return nil, errors.Wrap(err, "extract block info")
}
maxIdx := params.BeaconConfig().NumberOfColumns
roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
for idx := range maxIdx {
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],

View File
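The DataColumnSidecars change above keeps the same row-to-column rotation, only taking the column count from fieldparams. The sketch below illustrates the rotation idea only: per-blob rows become per-column slices holding one cell from every blob. transpose is a hypothetical stand-in for this example, not the package's rotateRowsToCols.

```
package main

import "fmt"

// transpose turns rows (one per blob, each numberOfColumns wide) into columns
// (one per sidecar index, each holding that column's cell from every blob).
func transpose(rows [][]string, numberOfColumns int) [][]string {
	cols := make([][]string, numberOfColumns)
	for c := 0; c < numberOfColumns; c++ {
		cols[c] = make([]string, len(rows))
		for r := range rows {
			cols[c][r] = rows[r][c]
		}
	}
	return cols
}

func main() {
	// Two "blobs", three "columns" (the real code uses kzg.Cell values and 128 columns).
	rows := [][]string{
		{"b0c0", "b0c1", "b0c2"},
		{"b1c0", "b1c1", "b1c2"},
	}
	for i, col := range transpose(rows, 3) {
		fmt.Println("column", i, col) // column 0 [b0c0 b1c0], ...
	}
}
```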

@@ -6,7 +6,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -59,6 +59,8 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
}
func TestDataColumnSidecars(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
@@ -69,10 +71,10 @@ func TestDataColumnSidecars(t *testing.T) {
// Create cells and proofs.
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
make([]kzg.Cell, numberOfColumns),
}
proofsPerBlob := [][]kzg.Proof{
make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
make([]kzg.Proof, numberOfColumns),
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
@@ -117,7 +119,6 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
}
@@ -149,7 +150,6 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),
@@ -197,6 +197,7 @@ func TestDataColumnSidecars(t *testing.T) {
}
func TestReconstructionSource(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
@@ -212,7 +213,6 @@ func TestReconstructionSource(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),

View File

@@ -4,14 +4,19 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"bisect.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
"log.go",
"mock.go",
"needs.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -21,6 +26,7 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -30,11 +36,14 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
"needs_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -45,6 +54,7 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -11,9 +11,8 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/runtime/logging"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/sirupsen/logrus"
)
var (
@@ -24,12 +23,13 @@ var (
// This implementation will hold any blobs passed to Persist until the IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreBlob struct {
store *filesystem.BlobStorage
cache *blobCache
verifier BlobBatchVerifier
store *filesystem.BlobStorage
cache *blobCache
verifier BlobBatchVerifier
shouldRetain RetentionChecker
}
var _ AvailabilityStore = &LazilyPersistentStoreBlob{}
var _ AvailabilityChecker = &LazilyPersistentStoreBlob{}
// BlobBatchVerifier enables LazyAvailabilityStore to manage the verification process
// going from ROBlob->VerifiedROBlob, while avoiding the decision of which individual verifications
@@ -42,11 +42,12 @@ type BlobBatchVerifier interface {
// NewLazilyPersistentStore creates a new LazilyPersistentStore. This constructor should always be used
// when creating a LazilyPersistentStore because it needs to initialize the cache under the hood.
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier, shouldRetain RetentionChecker) *LazilyPersistentStoreBlob {
return &LazilyPersistentStoreBlob{
store: store,
cache: newBlobCache(),
verifier: verifier,
store: store,
cache: newBlobCache(),
verifier: verifier,
shouldRetain: shouldRetain,
}
}
@@ -66,9 +67,6 @@ func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ..
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range sidecars {
@@ -81,8 +79,17 @@ func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ..
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// BlobSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, current)
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, blks ...blocks.ROBlock) error {
for _, b := range blks {
if err := s.checkOne(ctx, current, b); err != nil {
return err
}
}
return nil
}
func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, s.shouldRetain)
if err != nil {
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
}
@@ -112,7 +119,7 @@ func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(log.Fields, len(fails))
lf := make(logrus.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
@@ -131,13 +138,12 @@ func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current
return nil
}
func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) ([][]byte, error) {
func commitmentsToCheck(b blocks.ROBlock, shouldRetain RetentionChecker) ([][]byte, error) {
if b.Version() < version.Deneb {
return nil, nil
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
if !shouldRetain(b.Block().Slot()) {
return nil, nil
}

View File
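With the DA-window check moved out of commitmentsToCheck, callers now supply a retention check. The sketch below shows one way to build a checker equivalent to the old inline WithinDAPeriod test, frozen at a given current slot; it mirrors mockShouldRetain in the tests that follow, and daPeriodChecker is an illustrative name, not part of the package.

```
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v7/time/slots"
)

// daPeriodChecker returns a closure that reports whether a block slot still
// falls within the data availability period relative to the given current slot.
func daPeriodChecker(current primitives.Slot) func(primitives.Slot) bool {
	return func(blockSlot primitives.Slot) bool {
		return params.WithinDAPeriod(slots.ToEpoch(blockSlot), slots.ToEpoch(current))
	}
}

func main() {
	check := daPeriodChecker(primitives.Slot(1_000_000))
	fmt.Println(check(1_000_000)) // a recent slot is retained
	fmt.Println(check(0))         // a genesis-era slot is likely outside the DA window
}
```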

@@ -17,6 +17,10 @@ import (
errors "github.com/pkg/errors"
)
func testShouldRetainAlways(s primitives.Slot) bool {
return true
}
func Test_commitmentsToCheck(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
@@ -30,11 +34,12 @@ func Test_commitmentsToCheck(t *testing.T) {
commits[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
shouldRetain RetentionChecker
}{
{
name: "pre deneb",
@@ -60,6 +65,7 @@ func Test_commitmentsToCheck(t *testing.T) {
require.NoError(t, err)
return rb
},
shouldRetain: testShouldRetainAlways,
commits: func() [][]byte {
mb := params.GetNetworkScheduleEntry(slots.ToEpoch(fulu + 100)).MaxBlobsPerBlock
return commits[:mb]
@@ -79,7 +85,8 @@ func Test_commitmentsToCheck(t *testing.T) {
require.NoError(t, err)
return rb
},
slot: fulu + windowSlots + 1,
shouldRetain: func(s primitives.Slot) bool { return false },
slot: fulu + windowSlots + 1,
},
{
name: "excessive commitments",
@@ -97,14 +104,15 @@ func Test_commitmentsToCheck(t *testing.T) {
require.Equal(t, true, len(c) > params.BeaconConfig().MaxBlobsPerBlock(sb.Block().Slot()))
return rb
},
slot: windowSlots + 1,
err: errIndexOutOfBounds,
shouldRetain: testShouldRetainAlways,
slot: windowSlots + 1,
err: errIndexOutOfBounds,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co, err := commitmentsToCheck(b, c.slot)
co, err := commitmentsToCheck(b, c.shouldRetain)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
@@ -126,7 +134,7 @@ func TestLazilyPersistent_Missing(t *testing.T) {
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
as := NewLazilyPersistentStore(store, mbv, testShouldRetainAlways)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(ds, blobSidecars[2]))
@@ -153,7 +161,7 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
as := NewLazilyPersistentStore(store, mbv, testShouldRetainAlways)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(ds, blobSidecars[0]))
@@ -166,11 +174,11 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 6)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{}, testShouldRetainAlways)
// stashes as expected
require.NoError(t, as.Persist(ds, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(ds, blobSidecars...), ErrDuplicateSidecar)
require.ErrorIs(t, as.Persist(ds, blobSidecars...), errDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
@@ -183,7 +191,7 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
require.NoError(t, as.Persist(slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(ds, moreBlobSidecars...))
require.NoError(t, as.Persist(ds, moreBlobSidecars[1:]...))
}
type mockBlobBatchVerifier struct {

View File

@@ -0,0 +1,244 @@
package das
import (
"context"
"io"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
cache *dataColumnCache
newDataColumnsVerifier verification.NewDataColumnsVerifier
custody *custodyRequirement
bisector Bisector
shouldRetain RetentionChecker
}
var _ AvailabilityChecker = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
nodeID enode.ID,
cgc uint64,
bisector Bisector,
shouldRetain RetentionChecker,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newDataColumnCache(),
newDataColumnsVerifier: newDataColumnsVerifier,
custody: &custodyRequirement{nodeID: nodeID, cgc: cgc},
bisector: bisector,
shouldRetain: shouldRetain,
}
}
// Persist adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(_ primitives.Slot, sidecars ...blocks.RODataColumn) error {
for _, sidecar := range sidecars {
if err := s.cache.stash(sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given blocks are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, _ primitives.Slot, blks ...blocks.ROBlock) error {
toVerify := make([]blocks.RODataColumn, 0)
for _, block := range blks {
indices, err := s.required(block)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x`", block.Root())
}
if indices.Count() == 0 {
continue
}
key := keyFromBlock(block)
entry := s.cache.entry(key)
toVerify, err = entry.append(toVerify, IndicesNotStored(s.store.Summary(block.Root()), indices))
if err != nil {
return errors.Wrap(err, "entry filter")
}
}
if err := s.verifyAndSave(toVerify); err != nil {
log.Warn("Batch verification failed, bisecting columns by peer")
if err := s.bisectVerification(toVerify); err != nil {
return errors.Wrap(err, "bisect verification")
}
}
s.cache.cleanup(blks)
return nil
}
// required returns the set of column indices to check for a given block.
func (s *LazilyPersistentStoreColumn) required(block blocks.ROBlock) (peerdas.ColumnIndices, error) {
if !s.shouldRetain(block.Block().Slot()) {
return peerdas.NewColumnIndices(), nil
}
// If there are any commitments in the block, there are blobs,
// and if there are blobs, we need the columns bisecting those blobs.
commitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// No DA check needed if the block has no blobs.
if len(commitments) == 0 {
return peerdas.NewColumnIndices(), nil
}
return s.custody.required()
}
// verifyAndSave calls Save on the column store if the columns pass verification.
func (s *LazilyPersistentStoreColumn) verifyAndSave(columns []blocks.RODataColumn) error {
if len(columns) == 0 {
return nil
}
verified, err := s.verifyColumns(columns)
if err != nil {
return errors.Wrap(err, "verify columns")
}
if err := s.store.Save(verified); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
func (s *LazilyPersistentStoreColumn) verifyColumns(columns []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error) {
if len(columns) == 0 {
return nil, nil
}
verifier := s.newDataColumnsVerifier(columns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return nil, errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return nil, errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return nil, errors.Wrap(err, "sidecar KZG proof verified")
}
return verifier.VerifiedRODataColumns()
}
// bisectVerification is used when verification of a batch of columns fails. Since the batch could
// span multiple blocks or have been fetched from multiple peers, this pattern enables code using the
// store to break the verification into smaller units and learn the results, in order to plan to retry
// retrieval of the unusable columns.
func (s *LazilyPersistentStoreColumn) bisectVerification(columns []blocks.RODataColumn) error {
if len(columns) == 0 {
return nil
}
if s.bisector == nil {
return errors.New("bisector not initialized")
}
iter, err := s.bisector.Bisect(columns)
if err != nil {
return errors.Wrap(err, "Bisector.Bisect")
}
// It's up to the bisector how to chunk up columns for verification,
// which could be by block, or by peer, or any other strategy.
// For the purposes of range syncing or backfill this will be by peer,
// so that the node can learn which peer is giving us bad data and downscore them.
for columns, err := iter.Next(); columns != nil; columns, err = iter.Next() {
if err != nil {
if !errors.Is(err, io.EOF) {
return errors.Wrap(err, "Bisector.Next")
}
break // io.EOF signals end of iteration
}
// We save the parts of the batch that have been verified successfully even though we don't know
// if all columns for the block will be available until the block is imported.
if err := s.verifyAndSave(s.columnsNotStored(columns)); err != nil {
iter.OnError(err)
continue
}
}
// This should give us a single error representing any unresolved errors seen via OnError.
return iter.Error()
}
// columnsNotStored filters the list of RODataColumn sidecars to only include those that are not found in the storage summary.
func (s *LazilyPersistentStoreColumn) columnsNotStored(sidecars []blocks.RODataColumn) []blocks.RODataColumn {
// We use this method to filter a set of sidecars that were previously seen to be missing from disk, so our base assumption
// is that they are still not stored and we don't need to copy the list. Instead we record which indices are unexpectedly
// stored, and only when we find that the storage view has changed do we compact the slice in place.
stored := make(map[int]struct{}, 0)
lastRoot := [32]byte{}
var sum filesystem.DataColumnStorageSummary
for i, sc := range sidecars {
if sc.BlockRoot() != lastRoot {
sum = s.store.Summary(sc.BlockRoot())
lastRoot = sc.BlockRoot()
}
if sum.HasIndex(sc.Index) {
stored[i] = struct{}{}
}
}
// If the view on storage hasn't changed, return the original list.
if len(stored) == 0 {
return sidecars
}
shift := 0
for i := range sidecars {
if _, ok := stored[i]; ok {
// If the index is already stored, skip it; a later unstored sidecar will overwrite this slot.
// Track how many positions down the remaining sidecars need to shift.
shift++
continue
}
if shift > 0 {
// If the index is not stored and we have seen stored indices,
// we need to shift the current index down.
sidecars[i-shift] = sidecars[i]
}
}
return sidecars[:len(sidecars)-shift]
}
type custodyRequirement struct {
nodeID enode.ID
cgc uint64 // custody group count
indices peerdas.ColumnIndices
}
func (c *custodyRequirement) required() (peerdas.ColumnIndices, error) {
peerInfo, _, err := peerdas.Info(c.nodeID, c.cgc)
if err != nil {
return peerdas.NewColumnIndices(), errors.Wrap(err, "peer info")
}
return peerdas.NewColumnIndicesFromMap(peerInfo.CustodyColumns), nil
}
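bisectVerification leaves the chunking strategy to the injected Bisector. As a minimal sketch of that pattern, the hypothetical bisector below groups a failed batch by block root and reports per-chunk failures through OnError. It assumes the Bisector and BisectionIterator interfaces from this package's bisect.go, as exercised by the mocks in the tests below, and is not the production strategy (which chunks by peer).

```
package das

import (
	"io"

	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
)

// blockBisector is an illustrative Bisector that re-verifies a failed batch
// one block at a time.
type blockBisector struct{}

type blockChunkIterator struct {
	chunks [][]blocks.RODataColumn
	pos    int
	errs   []error
}

func (blockBisector) Bisect(columns []blocks.RODataColumn) (BisectionIterator, error) {
	byRoot := make(map[[32]byte][]blocks.RODataColumn)
	order := make([][32]byte, 0)
	for _, column := range columns {
		root := column.BlockRoot()
		if _, seen := byRoot[root]; !seen {
			order = append(order, root)
		}
		byRoot[root] = append(byRoot[root], column)
	}
	it := &blockChunkIterator{}
	for _, root := range order {
		it.chunks = append(it.chunks, byRoot[root])
	}
	return it, nil
}

// Next returns the next per-block chunk, or io.EOF when the batch is exhausted.
func (it *blockChunkIterator) Next() ([]blocks.RODataColumn, error) {
	if it.pos >= len(it.chunks) {
		return nil, io.EOF
	}
	chunk := it.chunks[it.pos]
	it.pos++
	return chunk, nil
}

// OnError records a failed chunk so Error can surface it after iteration.
func (it *blockChunkIterator) OnError(err error) { it.errs = append(it.errs, err) }

func (it *blockChunkIterator) Error() error {
	if len(it.errs) == 0 {
		return nil
	}
	return it.errs[0] // a real implementation would aggregate all failures
}
```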

View File

@@ -0,0 +1,908 @@
package das
import (
"context"
"fmt"
"io"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/pkg/errors"
)
func mockShouldRetain(current primitives.Epoch) RetentionChecker {
return func(slot primitives.Slot) bool {
return params.WithinDAPeriod(slots.ToEpoch(slot), current)
}
}
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, nil, enode.ID{}, 0, nil, mockShouldRetain(0))
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
var current primitives.Slot = 1_000_000
sr := mockShouldRetain(slots.ToEpoch(current))
roSidecars, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, nil, enode.ID{}, 0, nil, sr)
err := lazilyPersistentStoreColumns.Persist(current, roSidecars...)
require.NoError(t, err)
require.Equal(t, len(roSidecars), len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
store := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
avs := NewLazilyPersistentStoreColumn(store, nil, enode.ID{}, 0, nil, mockShouldRetain(slots.ToEpoch(slot)))
err := avs.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(avs.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := avs.cache.entries[key]
require.Equal(t, true, ok)
summary := store.Summary(key.root)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), summary.Count())
require.Equal(t, len(roSidecars), len(entry.scs))
idx1 := entry.scs[1]
require.NotNil(t, idx1)
require.DeepSSZEqual(t, roDataColumns[0].BlockRoot(), idx1.BlockRoot())
idx5 := entry.scs[5]
require.NotNil(t, idx5)
require.DeepSSZEqual(t, roDataColumns[1].BlockRoot(), idx5.BlockRoot())
for i, roDataColumn := range entry.scs {
if map[uint64]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, newDataColumnsVerifier, enode.ID{}, 0, nil, mockShouldRetain(0))
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Slot = primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
storage := filesystem.NewEphemeralDataColumnStorage(t)
indices := []uint64{1, 17, 19, 42, 75, 87, 102, 117}
avs := NewLazilyPersistentStoreColumn(storage, newDataColumnsVerifier, enode.ID{}, uint64(len(indices)), nil, mockShouldRetain(slots.ToEpoch(slot)))
dcparams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dcparams = append(dcparams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dcparams)
key := keyFromBlock(signedRoBlock)
entry := avs.cache.entry(key)
defer avs.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = avs.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := storage.Get(root, indices)
require.NoError(t, err)
//summary := storage.Summary(root)
require.Equal(t, len(verifiedRoDataColumns), len(actual))
//require.Equal(t, uint64(len(indices)), summary.Count())
//require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestRetentionWindow(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
fuluSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
require.NoError(t, err)
numberOfColumns := fieldparams.NumberOfColumns
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
wantedCols int
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: fuluSlot + windowSlots,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = fuluSlot + windowSlots - 1
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: fuluSlot + windowSlots,
wantedCols: numberOfColumns,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, nil, enode.ID{}, uint64(numberOfColumns), nil, mockShouldRetain(slots.ToEpoch(tc.slot)))
indices, err := s.required(b)
require.NoError(t, err)
require.Equal(t, tc.wantedCols, len(indices))
})
}
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock any) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }
// Mock implementations for bisectVerification tests
// mockBisectionIterator simulates a BisectionIterator for testing.
type mockBisectionIterator struct {
chunks [][]blocks.RODataColumn
chunkErrors []error
finalError error
chunkIndex int
nextCallCount int
onErrorCallCount int
onErrorErrors []error
}
func (m *mockBisectionIterator) Next() ([]blocks.RODataColumn, error) {
if m.chunkIndex >= len(m.chunks) {
return nil, io.EOF
}
chunk := m.chunks[m.chunkIndex]
var err error
if m.chunkIndex < len(m.chunkErrors) {
err = m.chunkErrors[m.chunkIndex]
}
m.chunkIndex++
m.nextCallCount++
if err != nil {
return chunk, err
}
return chunk, nil
}
func (m *mockBisectionIterator) OnError(err error) {
m.onErrorCallCount++
m.onErrorErrors = append(m.onErrorErrors, err)
}
func (m *mockBisectionIterator) Error() error {
return m.finalError
}
// mockBisector simulates a Bisector for testing.
type mockBisector struct {
shouldError bool
bisectErr error
iterator *mockBisectionIterator
}
func (m *mockBisector) Bisect(columns []blocks.RODataColumn) (BisectionIterator, error) {
if m.shouldError {
return nil, m.bisectErr
}
return m.iterator, nil
}
// testDataColumnsVerifier implements verification.DataColumnsVerifier for testing.
type testDataColumnsVerifier struct {
t *testing.T
shouldFail bool
columns []blocks.RODataColumn
}
func (v *testDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
verified := make([]blocks.VerifiedRODataColumn, len(v.columns))
for i, col := range v.columns {
verified[i] = blocks.NewVerifiedRODataColumn(col)
}
return verified, nil
}
func (v *testDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (v *testDataColumnsVerifier) ValidFields() error {
if v.shouldFail {
return errors.New("verification failed")
}
return nil
}
func (v *testDataColumnsVerifier) CorrectSubnet(string, []string) error { return nil }
func (v *testDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (v *testDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (v *testDataColumnsVerifier) ValidProposerSignature(context.Context) error { return nil }
func (v *testDataColumnsVerifier) SidecarParentSeen(func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (v *testDataColumnsVerifier) SidecarParentValid(func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (v *testDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (v *testDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (v *testDataColumnsVerifier) SidecarInclusionProven() error { return nil }
func (v *testDataColumnsVerifier) SidecarKzgProofVerified() error { return nil }
func (v *testDataColumnsVerifier) SidecarProposerExpected(context.Context) error { return nil }
// Helper function to create test data columns
func makeTestDataColumns(t *testing.T, count int, blockRoot [32]byte, startIndex uint64) []blocks.RODataColumn {
columns := make([]blocks.RODataColumn, 0, count)
for i := range count {
params := util.DataColumnParam{
Index: startIndex + uint64(i),
KzgCommitments: commitments,
Slot: primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
}
_, verifiedCols := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{params})
if len(verifiedCols) > 0 {
columns = append(columns, verifiedCols[0].RODataColumn)
}
}
return columns
}
// Helper function to create test verifier factory with failure pattern
func makeTestVerifierFactory(failurePattern []bool) verification.NewDataColumnsVerifier {
callIndex := 0
return func(cols []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
shouldFail := callIndex < len(failurePattern) && failurePattern[callIndex]
callIndex++
return &testDataColumnsVerifier{
shouldFail: shouldFail,
columns: cols,
}
}
}
// TestBisectVerification tests the bisectVerification method with comprehensive table-driven test cases.
func TestBisectVerification(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
cases := []struct {
expectedError bool
bisectorNil bool
expectedOnErrorCallCount int
expectedNextCallCount int
inputCount int
iteratorFinalError error
bisectorError error
name string
storedColumnIndices []uint64
verificationFailurePattern []bool
chunkErrors []error
chunks [][]blocks.RODataColumn
}{
{
name: "EmptyColumns",
inputCount: 0,
expectedError: false,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "NilBisector",
inputCount: 3,
bisectorNil: true,
expectedError: true,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "BisectError",
inputCount: 5,
bisectorError: errors.New("bisect failed"),
expectedError: true,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "SingleChunkSuccess",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "SingleChunkFails",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{true},
iteratorFinalError: errors.New("chunk failed"),
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_BothPass",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{false, false},
expectedError: false,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 0,
},
{
name: "TwoChunks_FirstFails",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, false},
iteratorFinalError: errors.New("first failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_SecondFails",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{false, true},
iteratorFinalError: errors.New("second failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_BothFail",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, true},
iteratorFinalError: errors.New("both failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 2,
},
{
name: "ManyChunks_AllPass",
inputCount: 16,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}},
verificationFailurePattern: []bool{false, false, false, false},
expectedError: false,
expectedNextCallCount: 5,
expectedOnErrorCallCount: 0,
},
{
name: "ManyChunks_MixedFail",
inputCount: 16,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}},
verificationFailurePattern: []bool{false, true, false, true},
iteratorFinalError: errors.New("mixed failures"),
expectedError: true,
expectedNextCallCount: 5,
expectedOnErrorCallCount: 2,
},
{
name: "FilterStoredColumns_PartialFilter",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{1, 3},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "FilterStoredColumns_AllStored",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{0, 1, 2, 3, 4, 5},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "FilterStoredColumns_MixedAccess",
inputCount: 10,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{1, 5, 9},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "IteratorNextError",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}, {}},
chunkErrors: []error{nil, errors.New("next error")},
verificationFailurePattern: []bool{false},
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "IteratorNextEOF",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "LargeChunkSize",
inputCount: 128,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "ManySmallChunks",
inputCount: 32,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}, {}, {}, {}, {}},
verificationFailurePattern: []bool{false, false, false, false, false, false, false, false},
expectedError: false,
expectedNextCallCount: 9,
expectedOnErrorCallCount: 0,
},
{
name: "ChunkWithSomeStoredColumns",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{0, 2, 4},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "OnErrorDoesNotStopIteration",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, false},
iteratorFinalError: errors.New("first failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "VerificationErrorWrapping",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{true},
iteratorFinalError: errors.New("verification failed"),
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Setup storage
var store *filesystem.DataColumnStorage
if len(tc.storedColumnIndices) > 0 {
mocker, s := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
blockRoot := [32]byte{1, 2, 3}
slot := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, mocker.CreateFakeIndices(blockRoot, slot, tc.storedColumnIndices...))
store = s
} else {
store = filesystem.NewEphemeralDataColumnStorage(t)
}
// Create test columns
blockRoot := [32]byte{1, 2, 3}
columns := makeTestDataColumns(t, tc.inputCount, blockRoot, 0)
// Setup iterator with chunks
iterator := &mockBisectionIterator{
chunks: tc.chunks,
chunkErrors: tc.chunkErrors,
finalError: tc.iteratorFinalError,
}
// Setup bisector
var bisector Bisector
if tc.bisectorNil || tc.inputCount == 0 {
bisector = nil
} else if tc.bisectorError != nil {
bisector = &mockBisector{
shouldError: true,
bisectErr: tc.bisectorError,
}
} else {
bisector = &mockBisector{
shouldError: false,
iterator: iterator,
}
}
// Create store with verifier
verifierFactory := makeTestVerifierFactory(tc.verificationFailurePattern)
lazilyPersistentStore := &LazilyPersistentStoreColumn{
store: store,
cache: newDataColumnCache(),
newDataColumnsVerifier: verifierFactory,
custody: &custodyRequirement{},
bisector: bisector,
}
// Execute
err := lazilyPersistentStore.bisectVerification(columns)
// Assert
if tc.expectedError {
require.NotNil(t, err)
} else {
require.NoError(t, err)
}
// Verify iterator interactions for non-error cases
if tc.inputCount > 0 && bisector != nil && tc.bisectorError == nil && !tc.expectedError {
require.NotEqual(t, 0, iterator.nextCallCount, "iterator Next() should have been called")
require.Equal(t, tc.expectedOnErrorCallCount, iterator.onErrorCallCount, "OnError() call count mismatch")
}
})
}
}
func allIndicesExcept(total int, excluded []uint64) []uint64 {
excludeMap := make(map[uint64]bool)
for _, idx := range excluded {
excludeMap[idx] = true
}
var result []uint64
for i := range total {
if !excludeMap[uint64(i)] {
result = append(result, uint64(i))
}
}
return result
}
// TestColumnsNotStored tests the columnsNotStored method.
func TestColumnsNotStored(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
cases := []struct {
name string
count int
stored []uint64 // Column indices marked as stored
expected []uint64 // Expected column indices in returned result
}{
// Empty cases
{
name: "EmptyInput",
count: 0,
stored: []uint64{},
expected: []uint64{},
},
// Single element cases
{
name: "SingleElement_NotStored",
count: 1,
stored: []uint64{},
expected: []uint64{0},
},
{
name: "SingleElement_Stored",
count: 1,
stored: []uint64{0},
expected: []uint64{},
},
// All not stored cases
{
name: "AllNotStored_FiveElements",
count: 5,
stored: []uint64{},
expected: []uint64{0, 1, 2, 3, 4},
},
// All stored cases
{
name: "AllStored",
count: 5,
stored: []uint64{0, 1, 2, 3, 4},
expected: []uint64{},
},
// Partial storage - beginning
{
name: "StoredAtBeginning",
count: 5,
stored: []uint64{0, 1},
expected: []uint64{2, 3, 4},
},
// Partial storage - end
{
name: "StoredAtEnd",
count: 5,
stored: []uint64{3, 4},
expected: []uint64{0, 1, 2},
},
// Partial storage - middle
{
name: "StoredInMiddle",
count: 5,
stored: []uint64{2},
expected: []uint64{0, 1, 3, 4},
},
// Partial storage - scattered
{
name: "StoredScattered",
count: 8,
stored: []uint64{1, 3, 5},
expected: []uint64{0, 2, 4, 6, 7},
},
// Alternating pattern
{
name: "AlternatingPattern",
count: 8,
stored: []uint64{0, 2, 4, 6},
expected: []uint64{1, 3, 5, 7},
},
// Consecutive stored
{
name: "ConsecutiveStored",
count: 10,
stored: []uint64{3, 4, 5, 6},
expected: []uint64{0, 1, 2, 7, 8, 9},
},
// Large slice cases
{
name: "LargeSlice_NoStored",
count: 64,
stored: []uint64{},
expected: allIndicesExcept(64, []uint64{}),
},
{
name: "LargeSlice_SingleStored",
count: 64,
stored: []uint64{32},
expected: allIndicesExcept(64, []uint64{32}),
},
}
slot := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Create test columns first to get the actual block root
var columns []blocks.RODataColumn
if tc.count > 0 {
columns = makeTestDataColumns(t, tc.count, [32]byte{}, 0)
}
// Get the actual block root from the first column (if any)
var blockRoot [32]byte
if len(columns) > 0 {
blockRoot = columns[0].BlockRoot()
}
// Setup storage
var store *filesystem.DataColumnStorage
if len(tc.stored) > 0 {
mocker, s := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
require.NoError(t, mocker.CreateFakeIndices(blockRoot, slot, tc.stored...))
store = s
} else {
store = filesystem.NewEphemeralDataColumnStorage(t)
}
// Create store instance
lazilyPersistentStore := &LazilyPersistentStoreColumn{
store: store,
}
// Execute
result := lazilyPersistentStore.columnsNotStored(columns)
// Assert count
require.Equal(t, len(tc.expected), len(result),
fmt.Sprintf("expected %d columns, got %d", len(tc.expected), len(result)))
// Verify that no stored columns are in the result
if len(tc.stored) > 0 {
resultIndices := make(map[uint64]bool)
for _, col := range result {
resultIndices[col.Index] = true
}
for _, storedIdx := range tc.stored {
require.Equal(t, false, resultIndices[storedIdx],
fmt.Sprintf("stored column index %d should not be in result", storedIdx))
}
}
// If expected is specified, verify the exact column indices in order
if len(tc.expected) > 0 && len(tc.stored) == 0 {
// Only check exact order for non-stored cases (where we know they stay in same order)
for i, expectedIdx := range tc.expected {
require.Equal(t, columns[expectedIdx].Index, result[i].Index,
fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index, result[i].Index))
}
}
// Verify optimization: if nothing stored, should return original slice
if len(tc.stored) == 0 && tc.count > 0 {
require.Equal(t, &columns[0], &result[0],
"when no columns stored, should return original slice (same pointer)")
}
// Verify optimization: if some stored, result should use in-place shifting
if len(tc.stored) > 0 && len(tc.expected) > 0 && tc.count > 0 {
require.Equal(t, cap(columns), cap(result),
"result should be in-place shifted from original (same capacity)")
}
})
}
}
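The assertions above check two implementation details: when nothing is stored, the original slice is returned unchanged (same pointer, no allocation), and when some columns are stored, the survivors are shifted in place rather than copied into a fresh slice. A minimal sketch of that pattern, using a hypothetical stored predicate in place of the real storage summary lookup (this is an illustration, not the Prysm implementation):

```
// filterNotStored is a hypothetical sketch of the optimization the test asserts:
// reuse the input slice whenever possible.
func filterNotStored(cols []blocks.RODataColumn, stored func(uint64) bool) []blocks.RODataColumn {
	anyStored := false
	for _, c := range cols {
		if stored(c.Index) {
			anyStored = true
			break
		}
	}
	if !anyStored {
		// Nothing to drop: return the original slice (same pointer, no allocation).
		return cols
	}
	// Shift the columns that are not stored to the front of the same backing array.
	out := cols[:0]
	for _, c := range cols {
		if !stored(c.Index) {
			out = append(out, c)
		}
	}
	return out
}
```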

View File

@@ -0,0 +1,40 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
)
// Bisector describes a type that takes a set of RODataColumns via the Bisect method
// and returns a BisectionIterator that returns batches of those columns to be
// verified together.
type Bisector interface {
// Bisect initializes the BisectionIterator and returns the result.
Bisect([]blocks.RODataColumn) (BisectionIterator, error)
}
// BisectionIterator describes an iterator that returns groups of columns to verify.
// It is up to the bisector implementation to decide how to chunk up the columns,
// whether by block, by peer, or any other strategy. For example, backfill implements
// a bisector that keeps track of the source of each sidecar by peer, and groups
// sidecars by peer in the Next method, enabling it to track which peers, out of all
// the peers contributing to a batch, gave us bad data.
// When a batch fails, the OnError method should be called so that the bisector can
// keep track of the failed groups of columns and, e.g., apply that knowledge in peer scoring.
// The same column may be returned multiple times by Next: first as part of a larger batch,
// and again as part of a finer-grained batch if the larger batch failed. For example,
// a column may first appear in a batch of all columns spanning peers, and then again
// in a batch of columns from a single peer if some column in the larger batch
// failed verification.
type BisectionIterator interface {
// Next returns the next group of columns to verify.
// When the iteration is complete, Next should return (nil, io.EOF).
Next() ([]blocks.RODataColumn, error)
// OnError should be called when verification of a group of columns obtained via Next() fails.
OnError(error)
// Error can be used at the end of the iteration to get a single error result. It returns
// nil if OnError was never called, or an error of the implementer's choosing representing the set
// of errors seen during iteration. For instance, when bisecting from columns spanning peers to columns
// from a single peer, the broader error could be dropped and the more specific error
// (for a single peer's response) returned after bisecting to it.
Error() error
}
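As a rough illustration of the contract described above (one coarse batch first, finer batches only after OnError, io.EOF when done), a hypothetical iterator could look like the sketch below; it assumes the io and blocks packages are imported, and it is not the backfill implementation, which additionally tracks which peer supplied each column.

```
// coarseThenFineIterator is a hypothetical BisectionIterator sketch: it yields all
// columns as a single batch, and only re-chunks into the provided finer groups if
// that batch was reported failed via OnError.
type coarseThenFineIterator struct {
	all     []blocks.RODataColumn
	fine    [][]blocks.RODataColumn
	pos     int
	started bool
	failed  bool
	lastErr error
}

func (it *coarseThenFineIterator) Next() ([]blocks.RODataColumn, error) {
	if !it.started {
		it.started = true
		return it.all, nil // coarse batch spanning all sources
	}
	if !it.failed || it.pos >= len(it.fine) {
		return nil, io.EOF // iteration complete
	}
	chunk := it.fine[it.pos]
	it.pos++
	return chunk, nil // same columns again, in a finer-grained group
}

func (it *coarseThenFineIterator) OnError(err error) {
	it.failed = true
	it.lastErr = err // a real bisector could feed this into peer scoring
}

func (it *coarseThenFineIterator) Error() error { return it.lastErr }
```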

View File

@@ -76,7 +76,7 @@ func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
e.scs = make([]*blocks.ROBlob, maxBlobsPerBlock)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
return errors.Wrapf(errDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
}
e.scs[sc.Index] = sc
return nil

View File

@@ -34,7 +34,8 @@ type filterTestCaseSetupFunc func(t *testing.T) (*blobCacheEntry, [][]byte, []bl
func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
return func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
commits, err := commitmentsToCheck(blk, blk.Block().Slot())
shouldRetain := func(s primitives.Slot) bool { return true }
commits, err := commitmentsToCheck(blk, shouldRetain)
require.NoError(t, err)
entry := &blobCacheEntry{}
if len(onDisk) > 0 {

View File

@@ -1,9 +1,7 @@
package das
import (
"bytes"
"slices"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -11,9 +9,9 @@ import (
)
var (
ErrDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errColumnIndexTooHigh = errors.New("column index too high")
errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
errCommitmentMismatch = errors.New("commitment of sidecar in cache did not match block commitment")
errMissingSidecar = errors.New("no sidecar in cache for block commitment")
)
@@ -25,107 +23,80 @@ func newDataColumnCache() *dataColumnCache {
return &dataColumnCache{entries: make(map[cacheKey]*dataColumnCacheEntry)}
}
// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) ensure(key cacheKey) *dataColumnCacheEntry {
// entry returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) entry(key cacheKey) *dataColumnCacheEntry {
entry, ok := c.entries[key]
if !ok {
entry = &dataColumnCacheEntry{}
entry = newDataColumnCacheEntry(key.root)
c.entries[key] = entry
}
return entry
}
func (c *dataColumnCache) cleanup(blks []blocks.ROBlock) {
for _, block := range blks {
key := cacheKey{slot: block.Block().Slot(), root: block.Root()}
c.delete(key)
}
}
// delete removes the cache entry from the cache.
func (c *dataColumnCache) delete(key cacheKey) {
delete(c.entries, key)
}
// dataColumnCacheEntry holds a fixed-length cache of BlobSidecars.
type dataColumnCacheEntry struct {
scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.DataColumnStorageSummary
func (c *dataColumnCache) stash(sc blocks.RODataColumn) error {
key := cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
entry := c.entry(key)
return entry.stash(sc)
}
func (e *dataColumnCacheEntry) setDiskSummary(sum filesystem.DataColumnStorageSummary) {
e.diskSummary = sum
func newDataColumnCacheEntry(root [32]byte) *dataColumnCacheEntry {
return &dataColumnCacheEntry{scs: make(map[uint64]blocks.RODataColumn), root: &root}
}
// dataColumnCacheEntry is the set of RODataColumns for a given block.
type dataColumnCacheEntry struct {
root *[32]byte
scs map[uint64]blocks.RODataColumn
}
// stash adds an item to the in-memory cache of DataColumnSidecars.
// Only the first DataColumnSidecar of a given Index will be kept in the cache.
// stash will return an error if the given data colunn is already in the cache, or if the Index is out of bounds.
func (e *dataColumnCacheEntry) stash(sc *blocks.RODataColumn) error {
// stash will return an error if the given data column Index is out of bounds.
// It will overwrite any existing entry for the same index.
func (e *dataColumnCacheEntry) stash(sc blocks.RODataColumn) error {
if sc.Index >= fieldparams.NumberOfColumns {
return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitments)
}
e.scs[sc.Index] = sc
return nil
}
func (e *dataColumnCacheEntry) filter(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
nonEmptyIndices := commitmentsArray.nonEmptyIndices()
if e.diskSummary.AllAvailable(nonEmptyIndices) {
return nil, nil
// append appends the cached sidecars for the requested indices to the given sidecars slice and returns the result.
// If any of the requested indices are missing from the cache, an error is returned and the sidecars slice is left unchanged.
func (e *dataColumnCacheEntry) append(sidecars []blocks.RODataColumn, indices peerdas.ColumnIndices) ([]blocks.RODataColumn, error) {
needed := indices.ToMap()
for col := range needed {
_, ok := e.scs[col]
if !ok {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", e.root, col)
}
}
commitmentsCount := commitmentsArray.count()
sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)
for i := range nonEmptyIndices {
if e.diskSummary.HasIndex(i) {
continue
}
if e.scs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !sliceBytesEqual(commitmentsArray[i], e.scs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitments, commitmentsArray[i])
}
sidecars = append(sidecars, *e.scs[i])
// Loop twice so we avoid touching the slice if any of the columns are missing.
for col := range needed {
sidecars = append(sidecars, e.scs[col])
}
return sidecars, nil
}
// safeCommitmentsArray is a fixed size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
count := 0
for i := range s {
if s[i] != nil {
count++
// IndicesNotStored filters the list of indices to only include those that are not found in the storage summary.
func IndicesNotStored(sum filesystem.DataColumnStorageSummary, indices peerdas.ColumnIndices) peerdas.ColumnIndices {
indices = indices.Copy()
for col := range indices {
if sum.HasIndex(col) {
indices.Unset(col)
}
}
return count
}
// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
columns := make(map[uint64]bool)
for i := range s {
if s[i] != nil {
columns[uint64(i)] = true
}
}
return columns
}
func sliceBytesEqual(a, b [][]byte) bool {
return slices.EqualFunc(a, b, bytes.Equal)
return indices
}
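A hypothetical in-package sketch of how these pieces compose (assuming all incoming sidecars belong to the same block and the slice is non-empty); this is an illustration of the cache flow, not production code:

```
// gatherForDACheck is a hypothetical sketch: stash the incoming sidecars, drop
// indices already persisted on disk, then pull the remainder out of the cache
// in one call.
func gatherForDACheck(c *dataColumnCache, sum filesystem.DataColumnStorageSummary, incoming []blocks.RODataColumn, want peerdas.ColumnIndices) ([]blocks.RODataColumn, error) {
	for _, sc := range incoming {
		if err := c.stash(sc); err != nil {
			return nil, err
		}
	}
	// Skip indices that the storage summary says are already on disk.
	need := IndicesNotStored(sum, want)
	key := cacheKey{slot: incoming[0].Slot(), root: incoming[0].BlockRoot()}
	// append errors out if any needed index is still missing from the cache.
	return c.entry(key).append(nil, need)
}
```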

View File

@@ -1,8 +1,10 @@
package das
import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -13,124 +15,105 @@ import (
func TestEnsureDeleteSetDiskSummary(t *testing.T) {
c := newDataColumnCache()
key := cacheKey{}
entry := c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
entry := c.entry(key)
require.Equal(t, 0, len(entry.scs))
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true})
entry.setDiskSummary(diskSummary)
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{diskSummary: diskSummary}, *entry)
nonDupe := c.entry(key)
require.Equal(t, entry, nonDupe) // same pointer
expect, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
require.NoError(t, entry.stash(expect[0]))
require.Equal(t, 1, len(entry.scs))
cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index}))
require.NoError(t, err)
require.DeepEqual(t, expect[0], cols[0])
c.delete(key)
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
entry = c.entry(key)
require.Equal(t, 0, len(entry.scs))
require.NotEqual(t, entry, nonDupe) // different pointer
}
func TestStash(t *testing.T) {
t.Run("Index too high", func(t *testing.T) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
columns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
err := entry.stash(columns[0])
require.NotNil(t, err)
})
t.Run("Nominal and already existing", func(t *testing.T) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
entry := newDataColumnCacheEntry(roDataColumns[0].BlockRoot())
err := entry.stash(roDataColumns[0])
require.NoError(t, err)
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
err = entry.stash(&roDataColumns[0])
require.NotNil(t, err)
require.NoError(t, entry.stash(roDataColumns[0]))
// stash simply replaces duplicate values now
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
})
}
func TestFilterDataColumns(t *testing.T) {
func TestAppendDataColumns(t *testing.T) {
t.Run("All available", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
notStored := IndicesNotStored(sum, peerdas.NewColumnIndicesFromSlice([]uint64{1, 3}))
actual, err := newDataColumnCacheEntry([32]byte{}).append([]blocks.RODataColumn{}, notStored)
require.NoError(t, err)
require.IsNil(t, actual)
require.Equal(t, 0, len(actual))
})
t.Run("Some scs missing", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
_, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Commitments not equal", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[1] = &roDataColumns[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}
_, err := dataColumnCacheEntry.filter(roDataColumns[0].BlockRoot(), &commitmentsArray)
notStored := IndicesNotStored(sum, peerdas.NewColumnIndicesFromSlice([]uint64{1}))
actual, err := newDataColumnCacheEntry([32]byte{}).append([]blocks.RODataColumn{}, notStored)
require.Equal(t, 0, len(actual))
require.NotNil(t, err)
})
t.Run("Nominal", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
indices := peerdas.NewColumnIndicesFromSlice([]uint64{1, 3})
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 3, KzgCommitments: [][]byte{[]byte{3}}}})
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[3] = &expected[0]
scs := map[uint64]blocks.RODataColumn{
3: expected[0],
}
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
entry := dataColumnCacheEntry{scs: scs}
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter(expected[0].BlockRoot(), &commitmentsArray)
actual, err := entry.append([]blocks.RODataColumn{}, IndicesNotStored(sum, indices))
require.NoError(t, err)
require.DeepEqual(t, expected, actual)
})
}
func TestCount(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
require.Equal(t, 2, s.count())
}
t.Run("Append does not mutate the input", func(t *testing.T) {
indices := peerdas.NewColumnIndicesFromSlice([]uint64{1, 2})
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 0, KzgCommitments: [][]byte{[]byte{1}}},
{Index: 1, KzgCommitments: [][]byte{[]byte{2}}},
{Index: 2, KzgCommitments: [][]byte{[]byte{3}}},
})
func TestNonEmptyIndices(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{10}}, nil, [][]byte{[]byte{20}}}
actual := s.nonEmptyIndices()
require.DeepEqual(t, map[uint64]bool{1: true, 3: true}, actual)
}
scs := map[uint64]blocks.RODataColumn{
1: expected[1],
2: expected[2],
}
entry := dataColumnCacheEntry{scs: scs}
func TestSliceBytesEqual(t *testing.T) {
t.Run("Different lengths", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
require.Equal(t, false, sliceBytesEqual(a, b))
})
t.Run("Same length but different content", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 7}}
require.Equal(t, false, sliceBytesEqual(a, b))
})
t.Run("Equal slices", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
require.Equal(t, true, sliceBytesEqual(a, b))
original := []blocks.RODataColumn{expected[0]}
actual, err := entry.append(original, indices)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
slices.SortFunc(actual, func(i, j blocks.RODataColumn) int {
return int(i.Index) - int(j.Index)
})
for i := range expected {
require.Equal(t, expected[i].Index, actual[i].Index)
}
require.Equal(t, 1, len(original))
})
}

View File

@@ -7,13 +7,13 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// AvailabilityStore describes a component that can verify and save sidecars for a given block, and confirm previously
// verified and saved sidecars.
// Persist guarantees that the sidecar will be available to perform a DA check
// for the life of the beacon node process.
// IsDataAvailable guarantees that all blobs committed to in the block have been
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
// AvailabilityChecker is the minimum interface needed to check if data is available for a block.
// By convention there is a concept of an AvailabilityStore that implements a method to persist
// blobs or data columns to prepare for availability checking, but since those methods differ
// across the different forms of blob data, they are not included in the interface.
type AvailabilityChecker interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error
}
// RetentionChecker is a callback that determines whether blobs at the given slot are within the retention period.
type RetentionChecker func(primitives.Slot) bool

beacon-chain/das/log.go Normal file (5 lines added)
View File

@@ -0,0 +1,5 @@
package das
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "das")

View File

@@ -9,16 +9,20 @@ import (
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error
ErrIsDataAvailable error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityStore = &MockAvailabilityStore{}
var _ AvailabilityChecker = &MockAvailabilityStore{}
// IsDataAvailable satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error {
if m.ErrIsDataAvailable != nil {
return m.ErrIsDataAvailable
}
if m.VerifyAvailabilityCallback != nil {
return m.VerifyAvailabilityCallback(ctx, current, b)
return m.VerifyAvailabilityCallback(ctx, current, b...)
}
return nil
}

beacon-chain/das/needs.go Normal file (135 lines added)
View File

@@ -0,0 +1,135 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// NeedSpan represents the need for a resource over a span of slots.
type NeedSpan struct {
Begin primitives.Slot
End primitives.Slot
}
// At returns whether blocks/blobs/columns are needed at the given slot.
func (n NeedSpan) At(slot primitives.Slot) bool {
return slot >= n.Begin && slot < n.End
}
// CurrentNeeds fields can be used to check whether the given resource type is needed
// at a given slot. The values are based on the current slot, so a CurrentNeeds value
// should not be retained or reused across slots.
type CurrentNeeds struct {
Block NeedSpan
Blob NeedSpan
Col NeedSpan
}
// SyncNeeds holds configuration and state for determining what data is needed
// at any given slot during backfill based on the current slot.
type SyncNeeds struct {
current func() primitives.Slot
deneb primitives.Slot
fulu primitives.Slot
oldestSlotFlagPtr *primitives.Slot
validOldestSlotPtr *primitives.Slot
blockRetention primitives.Epoch
blobRetentionFlag primitives.Epoch
blobRetention primitives.Epoch
colRetention primitives.Epoch
}
type CurrentSlotter func() primitives.Slot
func NewSyncNeeds(current CurrentSlotter, oldestSlotFlagPtr *primitives.Slot, blobRetentionFlag primitives.Epoch) (SyncNeeds, error) {
deneb, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
if err != nil {
return SyncNeeds{}, errors.Wrap(err, "deneb fork slot")
}
fuluBoundary := min(params.BeaconConfig().FuluForkEpoch, slots.MaxSafeEpoch())
fulu, err := slots.EpochStart(fuluBoundary)
if err != nil {
return SyncNeeds{}, errors.Wrap(err, "fulu fork slot")
}
sn := SyncNeeds{
current: func() primitives.Slot { return current() },
deneb: deneb,
fulu: fulu,
blobRetentionFlag: blobRetentionFlag,
}
// We apply the --blob-retention-epochs flag to both blob and column retention.
sn.blobRetention = max(sn.blobRetentionFlag, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
sn.colRetention = max(sn.blobRetentionFlag, params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
// Override spec minimum block retention with user-provided flag only if it is lower than the spec minimum.
sn.blockRetention = primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oldestSlotFlagPtr != nil {
oldestEpoch := slots.ToEpoch(*oldestSlotFlagPtr)
if oldestEpoch < sn.blockRetention {
sn.validOldestSlotPtr = oldestSlotFlagPtr
} else {
log.WithField("backfill-oldest-slot", *oldestSlotFlagPtr).
WithField("specMinSlot", syncEpochOffset(current(), sn.blockRetention)).
Warn("Ignoring user-specified slot > MIN_EPOCHS_FOR_BLOCK_REQUESTS.")
}
}
return sn, nil
}
// Currently is the main callback given to the different parts of backfill to determine
// what resources are needed at a given slot. It assumes the SyncNeeds value was
// constructed via NewSyncNeeds.
func (n SyncNeeds) Currently() CurrentNeeds {
current := n.current()
c := CurrentNeeds{
Block: n.blockSpan(current),
Blob: NeedSpan{Begin: syncEpochOffset(current, n.blobRetention), End: n.fulu},
Col: NeedSpan{Begin: syncEpochOffset(current, n.colRetention), End: current},
}
// Adjust the minimums forward to the slots where the sidecar types were introduced
c.Blob.Begin = max(c.Blob.Begin, n.deneb)
c.Col.Begin = max(c.Col.Begin, n.fulu)
return c
}
func (n SyncNeeds) blockSpan(current primitives.Slot) NeedSpan {
if n.validOldestSlotPtr != nil { // assumes validation done in NewSyncNeeds
return NeedSpan{Begin: *n.validOldestSlotPtr, End: current}
}
return NeedSpan{Begin: syncEpochOffset(current, n.blockRetention), End: current}
}
func (n SyncNeeds) BlobRetentionChecker() RetentionChecker {
return func(slot primitives.Slot) bool {
current := n.Currently()
return current.Blob.At(slot)
}
}
func (n SyncNeeds) DataColumnRetentionChecker() RetentionChecker {
return func(slot primitives.Slot) bool {
current := n.Currently()
return current.Col.At(slot)
}
}
// syncEpochOffset subtracts a number of epochs as slots from the current slot, with underflow checks.
// It returns slot 1 if the result would be 0 or underflow. It doesn't return slot 0 because the
// genesis block needs to be specially synced (it doesn't have a valid signature).
func syncEpochOffset(current primitives.Slot, subtract primitives.Epoch) primitives.Slot {
minEpoch := min(subtract, slots.MaxSafeEpoch())
// compute slot offset - offset is a number of slots to go back from current (not an absolute slot).
offset := slots.UnsafeEpochStart(minEpoch)
// Underflow protection: slot 0 is the genesis block, whose signature is not valid.
// To avoid rejecting a batch that includes it, we cap the minimum backfill slot at slot 1.
if offset >= current {
return 1
}
return current - offset
}
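A hypothetical wiring sketch showing how a caller might consult these needs; CurrentNeeds is recomputed per use because the spans move with the current slot. The clock callback, flag values, and the needsAtSlot helper itself are assumptions for illustration, not the actual backfill wiring:

```
// needsAtSlot is a hypothetical sketch: build SyncNeeds once, then ask
// Currently() whenever a slot is considered.
func needsAtSlot(clock CurrentSlotter, oldest *primitives.Slot, blobFlag primitives.Epoch, slot primitives.Slot) (needBlob, needCol bool, err error) {
	sn, err := NewSyncNeeds(clock, oldest, blobFlag)
	if err != nil {
		return false, false, err
	}
	needs := sn.Currently() // spans depend on the current slot, so don't cache this across slots
	return needs.Blob.At(slot), needs.Col.At(slot), nil
}
```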

View File

@@ -0,0 +1,675 @@
package das
import (
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// TestNeedSpanAt tests the NeedSpan.At() method for range checking.
func TestNeedSpanAt(t *testing.T) {
cases := []struct {
name string
span NeedSpan
slots []primitives.Slot
expected bool
}{
{
name: "within bounds",
span: NeedSpan{Begin: 100, End: 200},
slots: []primitives.Slot{101, 150, 199},
expected: true,
},
{
name: "before begin / at end boundary (exclusive)",
span: NeedSpan{Begin: 100, End: 200},
slots: []primitives.Slot{99, 200, 201},
expected: false,
},
{
name: "empty span (begin == end)",
span: NeedSpan{Begin: 100, End: 100},
slots: []primitives.Slot{100},
expected: false,
},
{
name: "slot 0 with span starting at 0",
span: NeedSpan{Begin: 0, End: 100},
slots: []primitives.Slot{0},
expected: true,
},
}
for _, tc := range cases {
for _, sl := range tc.slots {
t.Run(fmt.Sprintf("%s at slot %d, ", tc.name, sl), func(t *testing.T) {
result := tc.span.At(sl)
require.Equal(t, tc.expected, result)
})
}
}
}
// TestSyncEpochOffset tests the syncEpochOffset helper function.
func TestSyncEpochOffset(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
cases := []struct {
name string
current primitives.Slot
subtract primitives.Epoch
expected primitives.Slot
}{
{
name: "typical offset - 5 epochs back",
current: primitives.Slot(10000),
subtract: 5,
expected: primitives.Slot(10000 - 5*slotsPerEpoch),
},
{
name: "zero subtract returns current",
current: primitives.Slot(5000),
subtract: 0,
expected: primitives.Slot(5000),
},
{
name: "subtract 1 epoch from mid-range slot",
current: primitives.Slot(1000),
subtract: 1,
expected: primitives.Slot(1000 - slotsPerEpoch),
},
{
name: "offset equals current - underflow protection",
current: primitives.Slot(slotsPerEpoch),
subtract: 1,
expected: 1,
},
{
name: "offset exceeds current - underflow protection",
current: primitives.Slot(50),
subtract: 1000,
expected: 1,
},
{
name: "current very close to 0",
current: primitives.Slot(10),
subtract: 1,
expected: 1,
},
{
name: "subtract MaxSafeEpoch",
current: primitives.Slot(1000000),
subtract: slots.MaxSafeEpoch(),
expected: 1, // underflow protection
},
{
name: "result exactly at slot 1",
current: primitives.Slot(1 + slotsPerEpoch),
subtract: 1,
expected: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result := syncEpochOffset(tc.current, tc.subtract)
require.Equal(t, tc.expected, result)
})
}
}
// TestSyncNeedsInitialize tests SyncNeeds initialization via NewSyncNeeds().
func TestSyncNeedsInitialize(t *testing.T) {
params.SetupTestConfigCleanup(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
minBlobEpochs := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
minColEpochs := params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest
currentSlot := primitives.Slot(10000)
currentFunc := func() primitives.Slot { return currentSlot }
cases := []struct {
invalidOldestFlag bool
expectValidOldest bool
oldestSlotFlagPtr *primitives.Slot
blobRetentionFlag primitives.Epoch
expectedBlob primitives.Epoch
expectedCol primitives.Epoch
name string
input SyncNeeds
}{
{
name: "basic initialization with no flags",
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
blobRetentionFlag: 0,
},
{
name: "blob retention flag less than spec minimum",
blobRetentionFlag: minBlobEpochs - 1,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "blob retention flag greater than spec minimum",
blobRetentionFlag: minBlobEpochs + 10,
expectValidOldest: false,
expectedBlob: minBlobEpochs + 10,
expectedCol: minBlobEpochs + 10,
},
{
name: "oldestSlotFlagPtr is nil",
blobRetentionFlag: 0,
oldestSlotFlagPtr: nil,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "valid oldestSlotFlagPtr (earlier than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(10)
return &slot
}(),
expectValidOldest: true,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "invalid oldestSlotFlagPtr (later than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
// Make it way past the spec minimum
slot := currentSlot - primitives.Slot(params.BeaconConfig().MinEpochsForBlockRequests-1)*slotsPerEpoch
return &slot
}(),
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
invalidOldestFlag: true,
},
{
name: "oldestSlotFlagPtr at boundary (exactly at spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := currentSlot - primitives.Slot(params.BeaconConfig().MinEpochsForBlockRequests)*slotsPerEpoch
return &slot
}(),
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
invalidOldestFlag: true,
},
{
name: "both blob retention flag and oldest slot set",
blobRetentionFlag: minBlobEpochs + 5,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(100)
return &slot
}(),
expectValidOldest: true,
expectedBlob: minBlobEpochs + 5,
expectedCol: minBlobEpochs + 5,
},
{
name: "zero blob retention uses spec minimum",
blobRetentionFlag: 0,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "large blob retention value",
blobRetentionFlag: 5000,
expectValidOldest: false,
expectedBlob: 5000,
expectedCol: 5000,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result, err := NewSyncNeeds(currentFunc, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
require.NoError(t, err)
// Check that current, deneb, fulu are set correctly
require.Equal(t, currentSlot, result.current())
// Check retention calculations
require.Equal(t, tc.expectedBlob, result.blobRetention)
require.Equal(t, tc.expectedCol, result.colRetention)
if tc.invalidOldestFlag {
require.IsNil(t, result.validOldestSlotPtr)
} else {
require.Equal(t, tc.oldestSlotFlagPtr, result.validOldestSlotPtr)
}
// Check blockRetention is always spec minimum
require.Equal(t, primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests), result.blockRetention)
})
}
}
// TestSyncNeedsBlockSpan tests the SyncNeeds.blockSpan() method.
func TestSyncNeedsBlockSpan(t *testing.T) {
params.SetupTestConfigCleanup(t)
minBlockEpochs := params.BeaconConfig().MinEpochsForBlockRequests
cases := []struct {
name string
validOldest *primitives.Slot
blockRetention primitives.Epoch
current primitives.Slot
expectedBegin primitives.Slot
expectedEnd primitives.Slot
}{
{
name: "with validOldestSlotPtr set",
validOldest: func() *primitives.Slot { s := primitives.Slot(500); return &s }(),
blockRetention: primitives.Epoch(minBlockEpochs),
current: 10000,
expectedBegin: 500,
expectedEnd: 10000,
},
{
name: "without validOldestSlotPtr (nil)",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 10000,
expectedBegin: syncEpochOffset(10000, primitives.Epoch(minBlockEpochs)),
expectedEnd: 10000,
},
{
name: "very low current slot",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 100,
expectedBegin: 1, // underflow protection
expectedEnd: 100,
},
{
name: "very high current slot",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 1000000,
expectedBegin: syncEpochOffset(1000000, primitives.Epoch(minBlockEpochs)),
expectedEnd: 1000000,
},
{
name: "validOldestSlotPtr at boundary value",
validOldest: func() *primitives.Slot { s := primitives.Slot(1); return &s }(),
blockRetention: primitives.Epoch(minBlockEpochs),
current: 5000,
expectedBegin: 1,
expectedEnd: 5000,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
validOldestSlotPtr: tc.validOldest,
blockRetention: tc.blockRetention,
}
result := sn.blockSpan(tc.current)
require.Equal(t, tc.expectedBegin, result.Begin)
require.Equal(t, tc.expectedEnd, result.End)
})
}
}
// TestSyncNeedsCurrently tests the SyncNeeds.Currently() method.
func TestSyncNeedsCurrently(t *testing.T) {
params.SetupTestConfigCleanup(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
denebSlot := primitives.Slot(1000)
fuluSlot := primitives.Slot(2000)
cases := []struct {
name string
current primitives.Slot
blobRetention primitives.Epoch
colRetention primitives.Epoch
blockRetention primitives.Epoch
validOldest *primitives.Slot
// Expected block span
expectBlockBegin primitives.Slot
expectBlockEnd primitives.Slot
// Expected blob span
expectBlobBegin primitives.Slot
expectBlobEnd primitives.Slot
// Expected column span
expectColBegin primitives.Slot
expectColEnd primitives.Slot
}{
{
name: "pre-Deneb - only blocks needed",
current: 500,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(500, 5),
expectBlockEnd: 500,
expectBlobBegin: denebSlot, // adjusted to deneb
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // adjusted to fulu
expectColEnd: 500,
},
{
name: "between Deneb and Fulu - blocks and blobs needed",
current: 1500,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(1500, 5),
expectBlockEnd: 1500,
expectBlobBegin: max(syncEpochOffset(1500, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // adjusted to fulu
expectColEnd: 1500,
},
{
name: "post-Fulu - all resources needed",
current: 3000,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(3000, 5),
expectBlockEnd: 3000,
expectBlobBegin: max(syncEpochOffset(3000, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(3000, 10), fuluSlot),
expectColEnd: 3000,
},
{
name: "exactly at Deneb boundary",
current: denebSlot,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot, 5),
expectBlockEnd: denebSlot,
expectBlobBegin: denebSlot,
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot,
},
{
name: "exactly at Fulu boundary",
current: fuluSlot,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot, 5),
expectBlockEnd: fuluSlot,
expectBlobBegin: max(syncEpochOffset(fuluSlot, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: fuluSlot,
},
{
name: "small retention periods",
current: 5000,
blobRetention: 1,
colRetention: 2,
blockRetention: 1,
validOldest: nil,
expectBlockBegin: syncEpochOffset(5000, 1),
expectBlockEnd: 5000,
expectBlobBegin: max(syncEpochOffset(5000, 1), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(5000, 2), fuluSlot),
expectColEnd: 5000,
},
{
name: "large retention periods",
current: 10000,
blobRetention: 100,
colRetention: 100,
blockRetention: 50,
validOldest: nil,
expectBlockBegin: syncEpochOffset(10000, 50),
expectBlockEnd: 10000,
expectBlobBegin: max(syncEpochOffset(10000, 100), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(10000, 100), fuluSlot),
expectColEnd: 10000,
},
{
name: "with validOldestSlotPtr for blocks",
current: 8000,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: func() *primitives.Slot { s := primitives.Slot(100); return &s }(),
expectBlockBegin: 100,
expectBlockEnd: 8000,
expectBlobBegin: max(syncEpochOffset(8000, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(8000, 10), fuluSlot),
expectColEnd: 8000,
},
{
name: "retention approaching current slot",
current: primitives.Slot(2000 + 5*slotsPerEpoch),
blobRetention: 5,
colRetention: 5,
blockRetention: 3,
validOldest: nil,
expectBlockBegin: syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 3),
expectBlockEnd: primitives.Slot(2000 + 5*slotsPerEpoch),
expectBlobBegin: max(syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 5), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 5), fuluSlot),
expectColEnd: primitives.Slot(2000 + 5*slotsPerEpoch),
},
{
name: "current just after Deneb",
current: denebSlot + 10,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot+10, 5),
expectBlockEnd: denebSlot + 10,
expectBlobBegin: denebSlot,
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot + 10,
},
{
name: "current just after Fulu",
current: fuluSlot + 10,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot+10, 5),
expectBlockEnd: fuluSlot + 10,
expectBlobBegin: max(syncEpochOffset(fuluSlot+10, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: fuluSlot + 10,
},
{
name: "blob retention would start before Deneb",
current: denebSlot + primitives.Slot(5*slotsPerEpoch),
blobRetention: 100, // very large retention
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot+primitives.Slot(5*slotsPerEpoch), 5),
expectBlockEnd: denebSlot + primitives.Slot(5*slotsPerEpoch),
expectBlobBegin: denebSlot, // clamped to deneb
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot + primitives.Slot(5*slotsPerEpoch),
},
{
name: "column retention would start before Fulu",
current: fuluSlot + primitives.Slot(5*slotsPerEpoch),
blobRetention: 10,
colRetention: 100, // very large retention
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot+primitives.Slot(5*slotsPerEpoch), 5),
expectBlockEnd: fuluSlot + primitives.Slot(5*slotsPerEpoch),
expectBlobBegin: max(syncEpochOffset(fuluSlot+primitives.Slot(5*slotsPerEpoch), 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // clamped to fulu
expectColEnd: fuluSlot + primitives.Slot(5*slotsPerEpoch),
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
current: func() primitives.Slot { return tc.current },
deneb: denebSlot,
fulu: fuluSlot,
validOldestSlotPtr: tc.validOldest,
blockRetention: tc.blockRetention,
blobRetention: tc.blobRetention,
colRetention: tc.colRetention,
}
result := sn.Currently()
// Verify block span
require.Equal(t, tc.expectBlockBegin, result.Block.Begin,
"block.begin mismatch")
require.Equal(t, tc.expectBlockEnd, result.Block.End,
"block.end mismatch")
// Verify blob span
require.Equal(t, tc.expectBlobBegin, result.Blob.Begin,
"blob.begin mismatch")
require.Equal(t, tc.expectBlobEnd, result.Blob.End,
"blob.end mismatch")
// Verify column span
require.Equal(t, tc.expectColBegin, result.Col.Begin,
"col.begin mismatch")
require.Equal(t, tc.expectColEnd, result.Col.End,
"col.end mismatch")
})
}
}
// TestCurrentNeedsIntegration verifies the complete CurrentNeeds workflow.
func TestCurrentNeedsIntegration(t *testing.T) {
params.SetupTestConfigCleanup(t)
denebSlot := primitives.Slot(1000)
fuluSlot := primitives.Slot(2000)
cases := []struct {
name string
current primitives.Slot
blobRetention primitives.Epoch
colRetention primitives.Epoch
testSlots []primitives.Slot
expectBlockAt []bool
expectBlobAt []bool
expectColAt []bool
}{
{
name: "pre-Deneb slot - only blocks",
current: 500,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{100, 250, 499, 500, 1000, 2000},
expectBlockAt: []bool{true, true, true, false, false, false},
expectBlobAt: []bool{false, false, false, false, true, false},
expectColAt: []bool{false, false, false, false, false, false},
},
{
name: "between Deneb and Fulu - blocks and blobs",
current: 1500,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{500, 1000, 1200, 1499, 1500, 2000},
expectBlockAt: []bool{true, true, true, true, false, false},
expectBlobAt: []bool{false, false, true, true, true, false},
expectColAt: []bool{false, false, false, false, false, false},
},
{
name: "post-Fulu - all resources",
current: 3000,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{1000, 1500, 2000, 2500, 2999, 3000},
expectBlockAt: []bool{true, true, true, true, true, false},
expectBlobAt: []bool{false, false, false, false, false, false},
expectColAt: []bool{false, false, false, false, true, false},
},
{
name: "at Deneb boundary",
current: denebSlot,
blobRetention: 5,
colRetention: 5,
testSlots: []primitives.Slot{500, 999, 1000, 1500, 2000},
expectBlockAt: []bool{true, true, false, false, false},
expectBlobAt: []bool{false, false, true, true, false},
expectColAt: []bool{false, false, false, false, false},
},
{
name: "at Fulu boundary",
current: fuluSlot,
blobRetention: 5,
colRetention: 5,
testSlots: []primitives.Slot{1000, 1500, 1999, 2000, 2001},
expectBlockAt: []bool{true, true, true, false, false},
expectBlobAt: []bool{false, false, true, false, false},
expectColAt: []bool{false, false, false, false, false},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
current: func() primitives.Slot { return tc.current },
deneb: denebSlot,
fulu: fuluSlot,
blockRetention: 100,
blobRetention: tc.blobRetention,
colRetention: tc.colRetention,
}
cn := sn.Currently()
// Verify block.end == current
require.Equal(t, tc.current, cn.Block.End, "block.end should equal current")
// Verify blob.end == fulu
require.Equal(t, fuluSlot, cn.Blob.End, "blob.end should equal fulu")
// Verify col.end == current
require.Equal(t, tc.current, cn.Col.End, "col.end should equal current")
// Test each slot
for i, slot := range tc.testSlots {
require.Equal(t, tc.expectBlockAt[i], cn.Block.At(slot),
"block.at(%d) mismatch at index %d", slot, i)
require.Equal(t, tc.expectBlobAt[i], cn.Blob.At(slot),
"blob.at(%d) mismatch at index %d", slot, i)
require.Equal(t, tc.expectColAt[i], cn.Col.At(slot),
"col.at(%d) mismatch at index %d", slot, i)
}
})
}
}

View File

@@ -270,7 +270,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Check the number of columns is the one expected.
// While implementing this, we expect the number of columns won't change.
// If it does, we will need to create a new version of the data column sidecar file.
if params.BeaconConfig().NumberOfColumns != mandatoryNumberOfColumns {
if fieldparams.NumberOfColumns != mandatoryNumberOfColumns {
return errWrongNumberOfColumns
}
@@ -964,8 +964,7 @@ func (si *storageIndices) set(dataColumnIndex uint64, position uint8) error {
// pullChan pulls data column sidecars from the input channel until it is empty.
func pullChan(inputRoDataColumns chan []blocks.VerifiedRODataColumn) []blocks.VerifiedRODataColumn {
numberOfColumns := params.BeaconConfig().NumberOfColumns
dataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, numberOfColumns)
dataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, fieldparams.NumberOfColumns)
for {
select {

View File

@@ -117,8 +117,6 @@ func (sc *dataColumnStorageSummaryCache) HighestEpoch() primitives.Epoch {
// set updates the cache.
func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent) error {
numberOfColumns := params.BeaconConfig().NumberOfColumns
sc.mu.Lock()
defer sc.mu.Unlock()
@@ -127,7 +125,7 @@ func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent)
count := uint64(0)
for _, index := range dataColumnsIdent.Indices {
if index >= numberOfColumns {
if index >= fieldparams.NumberOfColumns {
return errDataColumnIndexOutOfBounds
}

View File

@@ -7,7 +7,6 @@ import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -88,22 +87,6 @@ func TestWarmCache(t *testing.T) {
}
func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("wrong numbers of columns", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = 0
params.OverrideBeaconConfig(cfg)
params.SetupTestConfigCleanup(t)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{{Index: 12}, {Index: 1_000_000}, {Index: 48}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("one of the column index is too large", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,

View File

@@ -15,7 +15,8 @@ import (
)
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// When the custody group count increases, the earliest available slot is set to the maximum of the
// incoming value and the stored value, ensuring the slot never decreases when increasing custody.
// It returns the (potentially updated) earliest available slot and custody group count.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateCustodyInfo")
@@ -41,25 +42,39 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
}
log.WithFields(logrus.Fields{
"incomingSlot": earliestAvailableSlot,
"incomingGroupCount": custodyGroupCount,
"storedSlot": storedEarliestAvailableSlot,
"storedGroupCount": storedGroupCount,
"storedSlotBytesLen": len(storedEarliestAvailableSlotBytes),
"storedGroupCountBytesLen": len(storedGroupCountBytes),
}).Debug("UpdateCustodyInfo: comparing incoming vs stored values")
// Exit early if the new custody group count is lower than or equal to the stored one.
if custodyGroupCount <= storedGroupCount {
log.Debug("UpdateCustodyInfo: exiting early, custody group count not increasing")
return nil
}
storedGroupCount, storedEarliestAvailableSlot = custodyGroupCount, earliestAvailableSlot
// Store the earliest available slot.
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
// Store the custody group count.
bytes = bytesutil.Uint64ToBytesBigEndian(custodyGroupCount)
// Update the custody group count.
storedGroupCount = custodyGroupCount
bytes := bytesutil.Uint64ToBytesBigEndian(custodyGroupCount)
if err := bucket.Put(groupCountKey, bytes); err != nil {
return errors.Wrap(err, "put custody group count")
}
// Only update earliestAvailableSlot if the incoming value is higher.
// This prevents advertising availability of data we do not actually have when switching modes
// (e.g., from normal to semi-supernode or supernode).
if earliestAvailableSlot > storedEarliestAvailableSlot {
storedEarliestAvailableSlot = earliestAvailableSlot
bytes = bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
}
return nil
}); err != nil {
return 0, 0, err

View File

@@ -89,7 +89,7 @@ func TestUpdateCustodyInfo(t *testing.T) {
require.Equal(t, groupCount, storedCount)
})
t.Run("update with higher group count", func(t *testing.T) {
t.Run("update with higher group count and higher slot", func(t *testing.T) {
const (
initialSlot = primitives.Slot(100)
initialCount = uint64(5)
@@ -112,6 +112,150 @@ func TestUpdateCustodyInfo(t *testing.T) {
require.Equal(t, groupCount, storedCount)
})
t.Run("update with higher group count and lower slot should preserve higher slot", func(t *testing.T) {
// This is the bug scenario: when switching from normal mode to semi-supernode,
// the incoming slot might be lower than the stored slot, but we should preserve
// the higher stored slot to avoid advertising that we can serve data we don't have.
const (
initialSlot = primitives.Slot(1835523) // Higher stored slot
initialCount = uint64(10)
earliestSlot = primitives.Slot(1835456) // Lower incoming slot (e.g., from head slot)
groupCount = uint64(64) // Increasing custody (e.g., semi-supernode)
)
db := setupDB(t)
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// When custody count increases but slot is lower, the higher slot should be preserved
slot, count, err := db.UpdateCustodyInfo(ctx, earliestSlot, groupCount)
require.NoError(t, err)
require.Equal(t, initialSlot, slot, "earliestAvailableSlot should not decrease when custody group count increases")
require.Equal(t, groupCount, count)
// Verify in the database
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot, "stored slot should be the higher value")
require.Equal(t, groupCount, storedCount)
})
t.Run("pre-fulu scenario: checkpoint sync before fork, restart with semi-supernode", func(t *testing.T) {
// This test covers the pre-Fulu bug scenario:
// 1. Node starts with checkpoint sync BEFORE Fulu fork - uses EarliestSlot() (checkpoint block slot)
// 2. Validators connect after Fulu activates - maintainCustodyInfo() updates to head slot (higher)
// 3. Node restarts with --semi-supernode - updateCustodyInfoInDB uses EarliestSlot() again
// The bug was that step 3 would overwrite the higher slot from step 2.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 100
params.OverrideBeaconConfig(cfg)
fuluForkSlot, err := slots.EpochStart(cfg.FuluForkEpoch)
require.NoError(t, err)
// Derive slot values relative to Fulu fork
checkpointBlockSlot := fuluForkSlot - 10 // Checkpoint sync happened before Fulu
headSlot := fuluForkSlot + 5 // Head slot after Fulu activates
defaultCustody := cfg.CustodyRequirement // Default custody from config
validatorCustody := cfg.CustodyRequirement + 6 // Custody after validators connect
semiSupernodeCustody := cfg.NumberOfCustodyGroups // Semi-supernode custodies all groups
// Verify our test setup: checkpoint is pre-Fulu, head is post-Fulu
require.Equal(t, true, checkpointBlockSlot < fuluForkSlot, "checkpoint must be before Fulu fork")
require.Equal(t, true, headSlot >= fuluForkSlot, "head must be at or after Fulu fork")
db := setupDB(t)
// Step 1: Node starts with checkpoint sync (pre-Fulu)
// updateCustodyInfoInDB sees saved.Slot() < fuluForkSlot, so uses EarliestSlot()
slot, count, err := db.UpdateCustodyInfo(ctx, checkpointBlockSlot, defaultCustody)
require.NoError(t, err)
require.Equal(t, checkpointBlockSlot, slot)
require.Equal(t, defaultCustody, count)
// Step 2: Validators connect after Fulu activates, maintainCustodyInfo() runs
// Uses headSlot which is higher than checkpointBlockSlot
slot, count, err = db.UpdateCustodyInfo(ctx, headSlot, validatorCustody)
require.NoError(t, err)
require.Equal(t, headSlot, slot, "should update to head slot")
require.Equal(t, validatorCustody, count)
// Verify step 2 stored correctly
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, headSlot, storedSlot)
require.Equal(t, validatorCustody, storedCount)
// Step 3: Restart with --semi-supernode
// updateCustodyInfoInDB sees saved.Slot() < fuluForkSlot, so uses EarliestSlot() again
slot, count, err = db.UpdateCustodyInfo(ctx, checkpointBlockSlot, semiSupernodeCustody)
require.NoError(t, err)
require.Equal(t, headSlot, slot, "earliestAvailableSlot should NOT decrease back to checkpoint slot")
require.Equal(t, semiSupernodeCustody, count)
// Verify the database preserved the higher slot
storedSlot, storedCount = getCustodyInfoFromDB(t, db)
require.Equal(t, headSlot, storedSlot, "stored slot should remain at head slot, not checkpoint slot")
require.Equal(t, semiSupernodeCustody, storedCount)
})
t.Run("post-fulu scenario: finalized slot lower than stored head slot", func(t *testing.T) {
// This test covers the post-Fulu bug scenario:
// Post-fork, updateCustodyInfoInDB uses saved.Slot() (finalized slot) directly,
// not EarliestSlot(). But the same bug can occur because:
// - maintainCustodyInfo() stores headSlot (higher)
// - Restart uses finalized slot (lower than head)
// Our fix ensures earliestAvailableSlot never decreases.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 100
params.OverrideBeaconConfig(cfg)
fuluForkSlot, err := slots.EpochStart(cfg.FuluForkEpoch)
require.NoError(t, err)
// Derive slot values relative to Fulu fork - all slots are AFTER Fulu
finalizedSlotAtStart := fuluForkSlot + 100 // Finalized slot at first start (post-Fulu)
headSlot := fuluForkSlot + 200 // Head slot when validators connect
finalizedSlotRestart := fuluForkSlot + 150 // Finalized slot at restart (< headSlot)
defaultCustody := cfg.CustodyRequirement // Default custody from config
validatorCustody := cfg.CustodyRequirement + 6 // Custody after validators connect
semiSupernodeCustody := cfg.NumberOfCustodyGroups // Semi-supernode custodies all groups
// Verify our test setup: all slots are post-Fulu
require.Equal(t, true, finalizedSlotAtStart >= fuluForkSlot, "finalized slot must be at or after Fulu fork")
require.Equal(t, true, headSlot >= fuluForkSlot, "head slot must be at or after Fulu fork")
require.Equal(t, true, finalizedSlotRestart >= fuluForkSlot, "restart finalized slot must be at or after Fulu fork")
require.Equal(t, true, finalizedSlotRestart < headSlot, "restart finalized slot must be less than head slot")
db := setupDB(t)
// Step 1: Node starts post-Fulu
// updateCustodyInfoInDB sees saved.Slot() >= fuluForkSlot, so uses saved.Slot() directly
slot, count, err := db.UpdateCustodyInfo(ctx, finalizedSlotAtStart, defaultCustody)
require.NoError(t, err)
require.Equal(t, finalizedSlotAtStart, slot)
require.Equal(t, defaultCustody, count)
// Step 2: Validators connect, maintainCustodyInfo() uses head slot
slot, count, err = db.UpdateCustodyInfo(ctx, headSlot, validatorCustody)
require.NoError(t, err)
require.Equal(t, headSlot, slot)
require.Equal(t, validatorCustody, count)
// Step 3: Restart with --semi-supernode
// updateCustodyInfoInDB uses finalized slot which is lower than stored head slot
slot, count, err = db.UpdateCustodyInfo(ctx, finalizedSlotRestart, semiSupernodeCustody)
require.NoError(t, err)
require.Equal(t, headSlot, slot, "earliestAvailableSlot should NOT decrease to finalized slot")
require.Equal(t, semiSupernodeCustody, count)
// Verify database preserved the higher slot
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, headSlot, storedSlot)
require.Equal(t, semiSupernodeCustody, storedCount)
})
t.Run("update with lower group count should not update", func(t *testing.T) {
const (
initialSlot = primitives.Slot(200)

View File

@@ -2683,7 +2683,7 @@ func createBlobServerV2(t *testing.T, numBlobs int, blobMasks []bool) *httptest.
Blob: []byte("0xblob"),
KzgProofs: []hexutil.Bytes{},
}
for j := 0; j < int(params.BeaconConfig().NumberOfColumns); j++ {
for range fieldparams.NumberOfColumns {
cellProof := make([]byte, 48)
blobAndCellProofs[i].KzgProofs = append(blobAndCellProofs[i].KzgProofs, cellProof)
}

View File

@@ -240,7 +240,7 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
if node.slot == epochStart {
return true, nil
}
if !features.Get().DisableLastEpochTargets {
if !features.Get().IgnoreUnviableAttestations {
// Allow any node from the checkpoint epoch - 1 to be viable.
nodeEpoch := slots.ToEpoch(node.slot)
if nodeEpoch+1 == cp.Epoch {
@@ -642,8 +642,12 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
if !ok || node == nil {
return [32]byte{}, ErrNilNode
}
if slots.ToEpoch(node.slot) >= epoch && node.parent != nil {
node = node.parent
if slots.ToEpoch(node.slot) >= epoch {
if node.parent != nil {
node = node.parent
} else {
return f.store.finalizedDependentRoot, nil
}
}
return node.root, nil
}

View File

@@ -212,6 +212,9 @@ func (s *Store) prune(ctx context.Context) error {
return nil
}
// Save the new finalized dependent root because it will be pruned
s.finalizedDependentRoot = finalizedNode.parent.root
// Prune nodeByRoot starting from root
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
return err

View File

@@ -465,6 +465,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
ctx := t.Context()
f := setup(1, 1)
// Insert a block in slot 32
state, blk, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
@@ -475,6 +476,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, [32]byte{})
// Insert a block in slot 33
state, blk1, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'b'}, blk.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk1))
@@ -488,7 +490,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, [32]byte{})
// Insert a block for the next epoch (missed slot 0)
// Insert a block for the next epoch (missed slot 0), slot 65
state, blk2, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'c'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
@@ -509,6 +511,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, blk1.Root())
// Insert a block at slot 66
state, blk3, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+2, [32]byte{'d'}, blk2.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk3))
@@ -533,8 +536,11 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
dependent, err = f.DependentRoot(1)
require.NoError(t, err)
require.Equal(t, [32]byte{}, dependent)
dependent, err = f.DependentRoot(2)
require.NoError(t, err)
require.Equal(t, blk1.Root(), dependent)
// Insert a block for next epoch (slot 0 present)
// Insert a block for the next epoch, slot 96 (descends from finalized at slot 33)
state, blk4, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch, [32]byte{'e'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk4))
@@ -551,6 +557,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, blk1.Root())
// Insert a block at slot 97
state, blk5, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'f'}, blk4.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk5))
@@ -600,12 +607,16 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, target, blk1.Root())
// Prune finalization
// Prune finalization, finalize the block at slot 96
s.finalizedCheckpoint.Root = blk4.Root()
require.NoError(t, s.prune(ctx))
target, err = f.TargetRootForEpoch(blk4.Root(), 3)
require.NoError(t, err)
require.Equal(t, blk4.Root(), target)
// Dependent root for the finalized block should be the root of the pruned block at slot 33
dependent, err = f.DependentRootForEpoch(blk4.Root(), 3)
require.NoError(t, err)
require.Equal(t, blk1.Root(), dependent)
}
func TestStore_DependentRootForEpoch(t *testing.T) {

View File

@@ -31,6 +31,7 @@ type Store struct {
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node

View File

@@ -0,0 +1,95 @@
# Graffiti Version Info Implementation
## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.
More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md
## Implementation
### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"
type GraffitiInfo struct {
mu sync.RWMutex
userGraffiti string // From --graffiti flag (set once at startup)
clCommit string // From version.GetCommitPrefix() helper function
elCode string // From engine_getClientVersionV1
elCommit string // From engine_getClientVersionV1
}
```
### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server.
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields.
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):
| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |
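For example, `GE168dPR63afBob` unpacks as EL code `GE`, EL commit prefix `168d`, CL code `PR`, CL commit prefix `63af`, and the user graffiti `Bob`.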
```go
// Pseudocode sketch: elCommit2/elCommit4 and clCommit2/clCommit4 are the 2-
// and 4-char commit prefixes; each returned concatenation is copied into the
// [32]byte result.
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
    available := 32 - len(userGraffiti)
    if elCode == "" { // no EL version info yet: drop the EL commit fields
        elCommit2, elCommit4 = "", ""
    }
    switch {
    case available >= 12:
        return elCode + elCommit4 + clCode + clCommit4 + userGraffiti
    case available >= 8:
        return elCode + elCommit2 + clCode + clCommit2 + userGraffiti
    case available >= 4:
        return elCode + clCode + userGraffiti
    case available >= 2:
        return elCode + userGraffiti
    default:
        return userGraffiti
    }
}
```
### Update Logic
Single testable function in execution service:
```go
func (s *Service) updateGraffitiInfo() {
    // ctx stands in for the caller-supplied context; the sketch elides how it is obtained.
    versions, err := s.GetClientVersion(ctx)
    if err != nil {
        return // Keep last good value
    }
    if len(versions) == 1 {
        s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
    }
}
```
A goroutine calls this when `slot % 8 == 4` (four times per epoch, away from slot boundaries).
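A rough sketch of that polling loop, assuming the service exposes a per-slot ticker channel and a shutdown channel (both names are assumptions; the real wiring lives in the execution service):
```go
// Illustrative polling loop: fire updateGraffitiInfo four times per epoch,
// offset from slot boundaries, and stop when the service shuts down.
func (s *Service) pollClientVersion(slotTicker <-chan primitives.Slot, done <-chan struct{}) {
    for {
        select {
        case slot := <-slotTicker:
            if slot%8 == 4 {
                s.updateGraffitiInfo()
            }
        case <-done:
            return
        }
    }
}
```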
### File Changes Required
**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add a `GetCommitPrefix()` helper that extracts the first 4 hex chars from the git commit injected via Bazel ldflags at build time
**Modified:**
- `beacon-chain/execution/service.go` - Add the polling goroutine and `updateGraffitiInfo()`
- `beacon-chain/execution/engine_client.go` - Add a `GetClientVersion()` method that performs the `engine_getClientVersionV1` call
- `beacon-chain/rpc/.../validator/proposer.go` - Call `GenerateGraffiti()`
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety)
- Unit test updateGraffitiInfo() with mocked engine client

View File

@@ -23,6 +23,7 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",

View File

@@ -26,6 +26,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache/depositsnapshot"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/kv"
@@ -116,7 +117,7 @@ type BeaconNode struct {
GenesisProviders []genesis.Provider
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
ClockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
@@ -129,6 +130,7 @@ type BeaconNode struct {
slasherEnabled bool
lcStore *lightclient.Store
ConfigOptions []params.Option
SyncNeedsWaiter func() (das.SyncNeeds, error)
}
// New creates a new node instance, sets up configuration options, and registers
@@ -193,7 +195,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
params.LogDigests(params.BeaconConfig())
synchronizer := startup.NewClockSynchronizer()
beacon.clockWaiter = synchronizer
beacon.ClockWaiter = synchronizer
beacon.forkChoicer = doublylinkedtree.New()
depositAddress, err := execution.DepositContractAddress()
@@ -233,12 +235,13 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.lhsp = &verification.LazyHeadStateProvider{}
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen, beacon.lhsp)
beacon.ClockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen, beacon.lhsp)
beacon.BackfillOpts = append(
beacon.BackfillOpts,
backfill.WithVerifierWaiter(beacon.verifyInitWaiter),
backfill.WithInitSyncWaiter(initSyncWaiter(ctx, beacon.initialSyncComplete)),
backfill.WithSyncNeedsWaiter(beacon.SyncNeedsWaiter),
)
if err := registerServices(cliCtx, beacon, synchronizer, bfs); err != nil {
@@ -664,7 +667,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
StateNotifier: b,
DB: b.db,
StateGen: b.stateGen,
ClockWaiter: b.clockWaiter,
ClockWaiter: b.ClockWaiter,
})
if err != nil {
return err
@@ -706,7 +709,7 @@ func (b *BeaconNode) registerSlashingPoolService() error {
return err
}
s := slashings.NewPoolService(b.ctx, b.slashingsPool, slashings.WithElectraTimer(b.clockWaiter, chainService.CurrentSlot))
s := slashings.NewPoolService(b.ctx, b.slashingsPool, slashings.WithElectraTimer(b.ClockWaiter, chainService.CurrentSlot))
return b.services.RegisterService(s)
}
@@ -828,7 +831,7 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
regularsync.WithSlasherBlockHeadersFeed(b.slasherBlockHeadersFeed),
regularsync.WithReconstructor(web3Service),
regularsync.WithClockWaiter(b.clockWaiter),
regularsync.WithClockWaiter(b.ClockWaiter),
regularsync.WithInitialSyncComplete(initialSyncComplete),
regularsync.WithStateNotifier(b),
regularsync.WithBlobStorage(b.BlobStorage),
@@ -859,7 +862,8 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
P2P: b.fetchP2P(),
StateNotifier: b,
BlockNotifier: b,
ClockWaiter: b.clockWaiter,
ClockWaiter: b.ClockWaiter,
SyncNeedsWaiter: b.SyncNeedsWaiter,
InitialSyncComplete: complete,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
@@ -890,7 +894,7 @@ func (b *BeaconNode) registerSlasherService() error {
SlashingPoolInserter: b.slashingsPool,
SyncChecker: syncService,
HeadStateFetcher: chainService,
ClockWaiter: b.clockWaiter,
ClockWaiter: b.ClockWaiter,
})
if err != nil {
return err
@@ -983,7 +987,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
MaxMsgSize: maxMsgSize,
BlockBuilder: b.fetchBuilderService(),
Router: router,
ClockWaiter: b.clockWaiter,
ClockWaiter: b.ClockWaiter,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
@@ -1128,7 +1132,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.DataColumnStorage, b.ClockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")
}

View File

@@ -47,6 +47,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",

View File

@@ -4,11 +4,18 @@ import (
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// StatusProvider describes the minimum capability that Assigner needs from peer status tracking.
// That is, the ability to retrieve the best peers by finalized checkpoint.
type StatusProvider interface {
BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID)
}
// FinalizedCheckpointer describes the minimum capability that Assigner needs from forkchoice.
// That is, the ability to retrieve the latest finalized checkpoint to help with peer evaluation.
type FinalizedCheckpointer interface {
@@ -17,9 +24,9 @@ type FinalizedCheckpointer interface {
// NewAssigner assists in the correct construction of an Assigner by code in other packages,
// assuring all the important private member fields are given values.
// The FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// The StatusProvider is used to retrieve best peers, and FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// Peers that report an older finalized checkpoint are filtered out.
func NewAssigner(s *Status, fc FinalizedCheckpointer) *Assigner {
func NewAssigner(s StatusProvider, fc FinalizedCheckpointer) *Assigner {
return &Assigner{
ps: s,
fc: fc,
@@ -28,7 +35,7 @@ func NewAssigner(s *Status, fc FinalizedCheckpointer) *Assigner {
// Assigner uses the "BestFinalized" peer scoring method to pick the next-best peer to receive rpc requests.
type Assigner struct {
ps *Status
ps StatusProvider
fc FinalizedCheckpointer
}
@@ -38,38 +45,42 @@ type Assigner struct {
var ErrInsufficientSuitable = errors.New("no suitable peers")
func (a *Assigner) freshPeers() ([]peer.ID, error) {
required := min(flags.Get().MinimumSyncPeers, params.BeaconConfig().MaxPeersToSync)
_, peers := a.ps.BestFinalized(params.BeaconConfig().MaxPeersToSync, a.fc.FinalizedCheckpoint().Epoch)
required := min(flags.Get().MinimumSyncPeers, min(flags.Get().MinimumSyncPeers, params.BeaconConfig().MaxPeersToSync))
_, peers := a.ps.BestFinalized(a.fc.FinalizedCheckpoint().Epoch)
if len(peers) < required {
log.WithFields(logrus.Fields{
"suitable": len(peers),
"required": required}).Warn("Unable to assign peer while suitable peers < required ")
"required": required}).Trace("Unable to assign peer while suitable peers < required")
return nil, ErrInsufficientSuitable
}
return peers, nil
}
// AssignmentFilter describes a function that takes a list of peer.IDs and returns a filtered subset.
// An example is the NotBusy filter.
type AssignmentFilter func([]peer.ID) []peer.ID
// Assign uses the "BestFinalized" method to select the best peers that agree on a canonical block
// for the configured finalized epoch. At most `n` peers will be returned. The `busy` param can be used
// to filter out peers that we know we don't want to connect to, for instance if we are trying to limit
// the number of outbound requests to each peer from a given component.
func (a *Assigner) Assign(busy map[peer.ID]bool, n int) ([]peer.ID, error) {
func (a *Assigner) Assign(filter AssignmentFilter) ([]peer.ID, error) {
best, err := a.freshPeers()
if err != nil {
return nil, err
}
return pickBest(busy, n, best), nil
return filter(best), nil
}
func pickBest(busy map[peer.ID]bool, n int, best []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, n)
for _, p := range best {
if len(ps) == n {
return ps
}
if !busy[p] {
ps = append(ps, p)
// NotBusy is a filter that returns the list of peer.IDs that are not in the `busy` map.
func NotBusy(busy map[peer.ID]bool) AssignmentFilter {
return func(peers []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, len(peers))
for _, p := range peers {
if !busy[p] {
ps = append(ps, p)
}
}
return ps
}
return ps
}
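For reference, a call site under the new filter-based API might look like the sketch below; the `nextPeers` helper and its error handling are illustrative, assuming the standard import path of the peers package.
```go
import (
    "errors"

    "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
    "github.com/libp2p/go-libp2p/core/peer"
)

// nextPeers is an illustrative caller: it requests the best peers that are
// not already busy and treats "not enough suitable peers" as a soft failure.
func nextPeers(a *peers.Assigner, busy map[peer.ID]bool) ([]peer.ID, error) {
    pids, err := a.Assign(peers.NotBusy(busy))
    if errors.Is(err, peers.ErrInsufficientSuitable) {
        return nil, nil // no suitable peers right now; the caller retries later
    }
    return pids, err
}
```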

View File

@@ -5,6 +5,8 @@ import (
"slices"
"testing"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/libp2p/go-libp2p/core/peer"
)
@@ -14,82 +16,68 @@ func TestPickBest(t *testing.T) {
cases := []struct {
name string
busy map[peer.ID]bool
n int
best []peer.ID
expected []peer.ID
}{
{
name: "",
n: 0,
name: "don't limit",
expected: best,
},
{
name: "none busy",
n: 1,
expected: best[0:1],
expected: best,
},
{
name: "all busy except last",
n: 1,
busy: testBusyMap(best[0 : len(best)-1]),
expected: best[len(best)-1:],
},
{
name: "all busy except i=5",
n: 1,
busy: testBusyMap(slices.Concat(best[0:5], best[6:])),
expected: []peer.ID{best[5]},
},
{
name: "all busy - 0 results",
n: 1,
busy: testBusyMap(best),
},
{
name: "first half busy",
n: 5,
busy: testBusyMap(best[0:5]),
expected: best[5:],
},
{
name: "back half busy",
n: 5,
busy: testBusyMap(best[5:]),
expected: best[0:5],
},
{
name: "pick all ",
n: 10,
expected: best,
},
{
name: "none available",
n: 10,
best: []peer.ID{},
},
{
name: "not enough",
n: 10,
best: best[0:1],
expected: best[0:1],
},
{
name: "not enough, some busy",
n: 10,
best: best[0:6],
busy: testBusyMap(best[0:5]),
expected: best[5:6],
},
}
for _, c := range cases {
name := fmt.Sprintf("n=%d", c.n)
if c.name != "" {
name += " " + c.name
}
t.Run(name, func(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
if c.best == nil {
c.best = best
}
pb := pickBest(c.busy, c.n, c.best)
filt := NotBusy(c.busy)
pb := filt(c.best)
require.Equal(t, len(c.expected), len(pb))
for i := range c.expected {
require.Equal(t, c.expected[i], pb[i])
@@ -113,3 +101,310 @@ func testPeerIds(n int) []peer.ID {
}
return pids
}
// MockStatus is a test mock for the Status interface used in Assigner.
type MockStatus struct {
bestFinalizedEpoch primitives.Epoch
bestPeers []peer.ID
}
func (m *MockStatus) BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID) {
return m.bestFinalizedEpoch, m.bestPeers
}
// MockFinalizedCheckpointer is a test mock for FinalizedCheckpointer interface.
type MockFinalizedCheckpointer struct {
checkpoint *forkchoicetypes.Checkpoint
}
func (m *MockFinalizedCheckpointer) FinalizedCheckpoint() *forkchoicetypes.Checkpoint {
return m.checkpoint
}
// TestAssign_HappyPath tests the Assign method with sufficient peers and various filters.
func TestAssign_HappyPath(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
bestPeers []peer.ID
finalizedEpoch primitives.Epoch
filter AssignmentFilter
expectedCount int
}{
{
name: "sufficient peers with identity filter",
bestPeers: peers,
finalizedEpoch: 10,
filter: func(p []peer.ID) []peer.ID { return p },
expectedCount: 10,
},
{
name: "sufficient peers with NotBusy filter (no busy)",
bestPeers: peers,
finalizedEpoch: 10,
filter: NotBusy(make(map[peer.ID]bool)),
expectedCount: 10,
},
{
name: "sufficient peers with NotBusy filter (some busy)",
bestPeers: peers,
finalizedEpoch: 10,
filter: NotBusy(testBusyMap(peers[0:5])),
expectedCount: 5,
},
{
name: "minimum threshold exactly met",
bestPeers: peers[0:5],
finalizedEpoch: 10,
filter: func(p []peer.ID) []peer.ID { return p },
expectedCount: 5,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: tc.finalizedEpoch,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: tc.finalizedEpoch},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filter)
require.NoError(t, err)
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("expected %d peers, got %d", tc.expectedCount, len(result)))
})
}
}
// TestAssign_InsufficientPeers tests error handling when not enough suitable peers are available.
// Note: The actual peer threshold depends on config values MaxPeersToSync and MinimumSyncPeers.
func TestAssign_InsufficientPeers(t *testing.T) {
cases := []struct {
name string
bestPeers []peer.ID
expectedErr error
description string
}{
{
name: "exactly at minimum threshold",
bestPeers: testPeerIds(5),
expectedErr: nil,
description: "5 peers should meet the minimum threshold",
},
{
name: "well above minimum threshold",
bestPeers: testPeerIds(50),
expectedErr: nil,
description: "50 peers should easily meet requirements",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(NotBusy(make(map[peer.ID]bool)))
if tc.expectedErr != nil {
require.NotNil(t, err, tc.description)
require.Equal(t, tc.expectedErr, err)
} else {
require.NoError(t, err, tc.description)
require.Equal(t, len(tc.bestPeers), len(result))
}
})
}
}
// TestAssign_FilterApplication verifies that filters are correctly applied to peer lists.
func TestAssign_FilterApplication(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
bestPeers []peer.ID
filterToApply AssignmentFilter
expectedCount int
description string
}{
{
name: "identity filter returns all peers",
bestPeers: peers,
filterToApply: func(p []peer.ID) []peer.ID { return p },
expectedCount: 10,
description: "identity filter should not change peer list",
},
{
name: "filter removes all peers (all busy)",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers)),
expectedCount: 0,
description: "all peers busy should return empty list",
},
{
name: "filter removes first 5 peers",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers[0:5])),
expectedCount: 5,
description: "should only return non-busy peers",
},
{
name: "filter removes last 5 peers",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers[5:])),
expectedCount: 5,
description: "should only return non-busy peers from beginning",
},
{
name: "custom filter selects every other peer",
bestPeers: peers,
filterToApply: func(p []peer.ID) []peer.ID {
result := make([]peer.ID, 0)
for i := 0; i < len(p); i += 2 {
result = append(result, p[i])
}
return result
},
expectedCount: 5,
description: "custom filter selecting every other peer",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filterToApply)
require.NoError(t, err, fmt.Sprintf("unexpected error: %v", err))
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}
// TestAssign_FinalizedCheckpointUsage verifies that the finalized checkpoint is correctly used.
func TestAssign_FinalizedCheckpointUsage(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
finalizedEpoch primitives.Epoch
bestPeers []peer.ID
expectedCount int
description string
}{
{
name: "epoch 0",
finalizedEpoch: 0,
bestPeers: peers,
expectedCount: 10,
description: "epoch 0 should work",
},
{
name: "epoch 100",
finalizedEpoch: 100,
bestPeers: peers,
expectedCount: 10,
description: "high epoch number should work",
},
{
name: "epoch changes between calls",
finalizedEpoch: 50,
bestPeers: testPeerIds(5),
expectedCount: 5,
description: "epoch value should be used in checkpoint",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: tc.finalizedEpoch,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: tc.finalizedEpoch},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(NotBusy(make(map[peer.ID]bool)))
require.NoError(t, err)
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}
// TestAssign_EdgeCases tests boundary conditions and edge cases.
func TestAssign_EdgeCases(t *testing.T) {
cases := []struct {
name string
bestPeers []peer.ID
filter AssignmentFilter
expectedCount int
description string
}{
{
name: "filter returns empty from sufficient peers",
bestPeers: testPeerIds(10),
filter: func(p []peer.ID) []peer.ID { return []peer.ID{} },
expectedCount: 0,
description: "filter can return empty list even if sufficient peers available",
},
{
name: "filter selects subset from sufficient peers",
bestPeers: testPeerIds(10),
filter: func(p []peer.ID) []peer.ID { return p[0:2] },
expectedCount: 2,
description: "filter can return subset of available peers",
},
{
name: "filter selects single peer from many",
bestPeers: testPeerIds(20),
filter: func(p []peer.ID) []peer.ID { return p[0:1] },
expectedCount: 1,
description: "filter can select single peer from many available",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filter)
require.NoError(t, err, fmt.Sprintf("%s: unexpected error: %v", tc.description, err))
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}

View File

@@ -704,76 +704,54 @@ func (p *Status) deprecatedPrune() {
p.tallyIPTracker()
}
// BestFinalized returns the highest finalized epoch equal to or higher than `ourFinalizedEpoch`
// that is agreed upon by the majority of peers, and the peers agreeing on this finalized epoch.
// This method may not return the absolute highest finalized epoch, but the finalized epoch in which
// most peers can serve blocks (plurality voting). Ideally, all peers would be reporting the same
// finalized epoch but some may be behind due to their own latency, or because of their finalized
// epoch at the time we queried them.
func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
// BestFinalized groups all peers by their last known finalized epoch
// and selects the epoch of the largest group as best.
// Any peer with a finalized epoch < ourFinalized is excluded from consideration.
// In the event of a tie in largest group size, the higher epoch is the tie breaker.
// The selected epoch is returned, along with a list of peers with a finalized epoch >= the selected epoch.
func (p *Status) BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID) {
connected := p.Connected()
pids := make([]peer.ID, 0, len(connected))
views := make(map[peer.ID]*pb.StatusV2, len(connected))
// key: finalized epoch, value: number of peers that support this finalized epoch.
finalizedEpochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: finalized epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
votes := make(map[primitives.Epoch]uint64)
winner := primitives.Epoch(0)
for _, pid := range connected {
peerChainState, err := p.ChainState(pid)
// Skip if the peer's finalized epoch is not defined, or if the peer's finalized epoch is
// lower than ours.
if err != nil || peerChainState == nil || peerChainState.FinalizedEpoch < ourFinalizedEpoch {
view, err := p.ChainState(pid)
if err != nil || view == nil || view.FinalizedEpoch < ourFinalized {
continue
}
pids = append(pids, pid)
views[pid] = view
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which is the epoch most peers agree upon.
// If there is a tie, select the highest epoch.
targetEpoch, mostVotes := primitives.Epoch(0), uint64(0)
for epoch, count := range finalizedEpochVotes {
if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
mostVotes = count
targetEpoch = epoch
votes[view.FinalizedEpoch]++
if winner == 0 {
winner = view.FinalizedEpoch
continue
}
e, v := view.FinalizedEpoch, votes[view.FinalizedEpoch]
if v > votes[winner] || v == votes[winner] && e > winner {
winner = e
}
}
// Sort PIDs by finalized (epoch, head), in decreasing order.
sort.Slice(potentialPIDs, func(i, j int) bool {
if pidEpoch[potentialPIDs[i]] == pidEpoch[potentialPIDs[j]] {
return pidHead[potentialPIDs[i]] > pidHead[potentialPIDs[j]]
// Descending sort by (finalized, head).
sort.Slice(pids, func(i, j int) bool {
iv, jv := views[pids[i]], views[pids[j]]
if iv.FinalizedEpoch == jv.FinalizedEpoch {
return iv.HeadSlot > jv.HeadSlot
}
return pidEpoch[potentialPIDs[i]] > pidEpoch[potentialPIDs[j]]
return iv.FinalizedEpoch > jv.FinalizedEpoch
})
// Trim potential peers to those on or after target epoch.
for i, pid := range potentialPIDs {
if pidEpoch[pid] < targetEpoch {
potentialPIDs = potentialPIDs[:i]
break
}
}
// Find the first peer with finalized epoch < winner, trim and all following (lower) peers.
trim := sort.Search(len(pids), func(i int) bool {
return views[pids[i]].FinalizedEpoch < winner
})
pids = pids[:trim]
// Trim potential peers to at most maxPeers.
if len(potentialPIDs) > maxPeers {
potentialPIDs = potentialPIDs[:maxPeers]
}
return targetEpoch, potentialPIDs
return winner, pids
}
// BestNonFinalized returns the highest known epoch, higher than ours,

View File

@@ -654,9 +654,10 @@ func TestTrimmedOrderedPeers(t *testing.T) {
FinalizedRoot: mockroot2[:],
})
target, pids := p.BestFinalized(maxPeers, 0)
target, pids := p.BestFinalized(0)
assert.Equal(t, expectedTarget, target, "Incorrect target epoch retrieved")
assert.Equal(t, maxPeers, len(pids), "Incorrect number of peers retrieved")
// addPeer called 5 times above
assert.Equal(t, 5, len(pids), "Incorrect number of peers retrieved")
// Expect the returned list to be ordered by finalized epoch and trimmed to max peers.
assert.Equal(t, pid3, pids[0], "Incorrect first peer")
@@ -1017,7 +1018,10 @@ func TestStatus_BestPeer(t *testing.T) {
HeadSlot: peerConfig.headSlot,
})
}
epoch, pids := p.BestFinalized(tt.limitPeers, tt.ourFinalizedEpoch)
epoch, pids := p.BestFinalized(tt.ourFinalizedEpoch)
if len(pids) > tt.limitPeers {
pids = pids[:tt.limitPeers]
}
assert.Equal(t, tt.targetEpoch, epoch, "Unexpected epoch retrieved")
assert.Equal(t, tt.targetEpochSupport, len(pids), "Unexpected number of peers supporting retrieved epoch")
})
@@ -1044,7 +1048,10 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
})
}
_, pids := p.BestFinalized(maxPeers, 0)
_, pids := p.BestFinalized(0)
if len(pids) > maxPeers {
pids = pids[:maxPeers]
}
assert.Equal(t, maxPeers, len(pids), "Wrong number of peers returned")
}

View File

@@ -993,10 +993,11 @@ func (s *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error
}
func (s *Server) validateBlobs(blk interfaces.SignedBeaconBlock, blobs [][]byte, proofs [][]byte) error {
const numberOfColumns = fieldparams.NumberOfColumns
if blk.Version() < version.Deneb {
return nil
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
commitments, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get blob kzg commitments")

View File

@@ -711,6 +711,7 @@ func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Reques
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
if err != nil {

View File

@@ -2112,6 +2112,33 @@ func TestSubmitAttesterSlashingsV2(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, "Invalid attester slashing", e.Message)
})
t.Run("missing-version-header", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{},
Broadcaster: broadcaster,
}
var body bytes.Buffer
_, err = body.WriteString(invalidAttesterSlashing)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
// Intentionally do not set api.VersionHeader to verify missing header handling.
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttesterSlashingsV2(writer, request)
require.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, api.VersionHeader+" header is required", e.Message)
})
}
func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {

View File

@@ -654,6 +654,10 @@ func (m *futureSyncMockFetcher) StateBySlot(context.Context, primitives.Slot) (s
return m.BeaconState, nil
}
func (m *futureSyncMockFetcher) StateByEpoch(context.Context, primitives.Epoch) (state.BeaconState, error) {
return m.BeaconState, nil
}
func TestGetSyncCommittees_Future(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := make([][]byte, params.BeaconConfig().SyncCommitteeSize)

View File

@@ -27,6 +27,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -3756,6 +3757,7 @@ func Test_validateBlobs(t *testing.T) {
})
t.Run("Fulu block with valid cell proofs", func(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
blk := util.NewBeaconBlockFulu()
blk.Block.Slot = fs
@@ -3783,14 +3785,13 @@ func Test_validateBlobs(t *testing.T) {
require.NoError(t, err)
// Generate cell proofs for the blobs (flattened format like execution client)
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellProofs := make([][]byte, uint64(blobCount)*numberOfColumns)
for blobIdx := range blobCount {
_, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlobs[blobIdx])
require.NoError(t, err)
for colIdx := range numberOfColumns {
cellProofIdx := uint64(blobIdx)*numberOfColumns + colIdx
cellProofIdx := blobIdx*numberOfColumns + colIdx
cellProofs[cellProofIdx] = proofs[colIdx][:]
}
}

View File

@@ -155,20 +155,19 @@ func TestGetSpec(t *testing.T) {
config.MaxAttesterSlashingsElectra = 88
config.MaxAttestationsElectra = 89
config.MaxWithdrawalRequestsPerPayload = 90
config.MaxCellsInExtendedMatrix = 91
config.UnsetDepositRequestsStartIndex = 92
config.MaxDepositRequestsPerPayload = 93
config.MaxPendingDepositsPerEpoch = 94
config.MaxBlobCommitmentsPerBlock = 95
config.MaxBytesPerTransaction = 96
config.MaxExtraDataBytes = 97
config.BytesPerLogsBloom = 98
config.MaxTransactionsPerPayload = 99
config.FieldElementsPerBlob = 100
config.KzgCommitmentInclusionProofDepth = 101
config.BlobsidecarSubnetCount = 102
config.BlobsidecarSubnetCountElectra = 103
config.SyncMessageDueBPS = 104
config.UnsetDepositRequestsStartIndex = 91
config.MaxDepositRequestsPerPayload = 92
config.MaxPendingDepositsPerEpoch = 93
config.MaxBlobCommitmentsPerBlock = 94
config.MaxBytesPerTransaction = 95
config.MaxExtraDataBytes = 96
config.BytesPerLogsBloom = 97
config.MaxTransactionsPerPayload = 98
config.FieldElementsPerBlob = 99
config.KzgCommitmentInclusionProofDepth = 100
config.BlobsidecarSubnetCount = 101
config.BlobsidecarSubnetCountElectra = 102
config.SyncMessageDueBPS = 103
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -206,7 +205,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 176, len(data))
assert.Equal(t, 175, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -500,8 +499,6 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "1024", v)
case "MAX_REQUEST_BLOCKS_DENEB":
assert.Equal(t, "128", v)
case "NUMBER_OF_COLUMNS":
assert.Equal(t, "128", v)
case "MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA":
assert.Equal(t, "128000000000", v)
case "MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT":
@@ -538,14 +535,12 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "89", v)
case "MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD":
assert.Equal(t, "90", v)
case "MAX_CELLS_IN_EXTENDED_MATRIX":
assert.Equal(t, "91", v)
case "UNSET_DEPOSIT_REQUESTS_START_INDEX":
assert.Equal(t, "92", v)
assert.Equal(t, "91", v)
case "MAX_DEPOSIT_REQUESTS_PER_PAYLOAD":
assert.Equal(t, "93", v)
assert.Equal(t, "92", v)
case "MAX_PENDING_DEPOSITS_PER_EPOCH":
assert.Equal(t, "94", v)
assert.Equal(t, "93", v)
case "MAX_BLOBS_PER_BLOCK_ELECTRA":
assert.Equal(t, "9", v)
case "MAX_REQUEST_BLOB_SIDECARS_ELECTRA":
@@ -563,25 +558,25 @@ func TestGetSpec(t *testing.T) {
case "MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
case "MAX_BLOB_COMMITMENTS_PER_BLOCK":
assert.Equal(t, "95", v)
assert.Equal(t, "94", v)
case "MAX_BYTES_PER_TRANSACTION":
assert.Equal(t, "96", v)
assert.Equal(t, "95", v)
case "MAX_EXTRA_DATA_BYTES":
assert.Equal(t, "97", v)
assert.Equal(t, "96", v)
case "BYTES_PER_LOGS_BLOOM":
assert.Equal(t, "98", v)
assert.Equal(t, "97", v)
case "MAX_TRANSACTIONS_PER_PAYLOAD":
assert.Equal(t, "99", v)
assert.Equal(t, "98", v)
case "FIELD_ELEMENTS_PER_BLOB":
assert.Equal(t, "100", v)
assert.Equal(t, "99", v)
case "KZG_COMMITMENT_INCLUSION_PROOF_DEPTH":
assert.Equal(t, "101", v)
assert.Equal(t, "100", v)
case "BLOB_SIDECAR_SUBNET_COUNT":
assert.Equal(t, "102", v)
assert.Equal(t, "101", v)
case "BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":
assert.Equal(t, "103", v)
assert.Equal(t, "102", v)
case "SYNC_MESSAGE_DUE_BPS":
assert.Equal(t, "104", v)
assert.Equal(t, "103", v)
case "BLOB_SCHEDULE":
blobSchedule, ok := v.([]any)
assert.Equal(t, true, ok)

View File

@@ -17,6 +17,7 @@ go_library(
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -15,6 +15,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -308,7 +309,7 @@ func (s *Server) DataColumnSidecars(w http.ResponseWriter, r *http.Request) {
// parseDataColumnIndices filters out invalid and duplicate data column indices
func parseDataColumnIndices(url *url.URL) ([]int, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
const numberOfColumns = fieldparams.NumberOfColumns
rawIndices := url.Query()["indices"]
indices := make([]int, 0, numberOfColumns)
invalidIndices := make([]string, 0)

View File

@@ -709,15 +709,6 @@ func TestDataColumnSidecars(t *testing.T) {
}
func TestParseDataColumnIndices(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set NumberOfColumns to 128 for testing
config := params.BeaconConfig().Copy()
config.NumberOfColumns = 128
params.OverrideBeaconConfig(config)
tests := []struct {
name string
queryParams map[string][]string

View File

@@ -116,6 +116,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
for _, update := range updates {
if ctx.Err() != nil {
httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
return
}
updateSlot := update.AttestedHeader().Beacon().Slot
@@ -131,12 +132,15 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
chunkLength = ssz.MarshalUint64(chunkLength, uint64(len(updateSSZ)+4))
if _, err := w.Write(chunkLength); err != nil {
httputil.HandleError(w, "Could not write chunk length: "+err.Error(), http.StatusInternalServerError)
return
}
if _, err := w.Write(updateEntry.ForkDigest[:]); err != nil {
httputil.HandleError(w, "Could not write fork digest: "+err.Error(), http.StatusInternalServerError)
return
}
if _, err := w.Write(updateSSZ); err != nil {
httputil.HandleError(w, "Could not write update SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
}
} else {
@@ -145,6 +149,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
for _, update := range updates {
if ctx.Err() != nil {
httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
return
}
updateJson, err := structs.LightClientUpdateFromConsensus(update)

View File

@@ -132,6 +132,7 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
if s.SyncChecker.Synced() && !optimistic {
return

View File

@@ -228,7 +228,7 @@ func (s *Server) attRewardsState(w http.ResponseWriter, r *http.Request) (state.
}
st, err := s.Stater.StateBySlot(r.Context(), nextEpochEnd)
if err != nil {
httputil.HandleError(w, "Could not get state for epoch's starting slot: "+err.Error(), http.StatusInternalServerError)
shared.WriteStateFetchError(w, err)
return nil, false
}
return st, true

View File

@@ -19,7 +19,6 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
@@ -78,6 +77,7 @@ go_test(
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/rewards/testing:go_default_library",
"//beacon-chain/rpc/eth/shared/testing:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",

View File

@@ -19,7 +19,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
rpchelpers "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
@@ -898,20 +897,15 @@ func (s *Server) GetAttesterDuties(w http.ResponseWriter, r *http.Request) {
return
}
var startSlot primitives.Slot
// For next epoch requests, we use the current epoch's state since committee
// assignments for next epoch can be computed from current epoch's state.
epochForState := requestedEpoch
if requestedEpoch == nextEpoch {
startSlot, err = slots.EpochStart(currentEpoch)
} else {
startSlot, err = slots.EpochStart(requestedEpoch)
epochForState = currentEpoch
}
st, err := s.Stater.StateByEpoch(ctx, epochForState)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get start slot from epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
return
}
st, err := s.Stater.StateBySlot(ctx, startSlot)
if err != nil {
httputil.HandleError(w, "Could not get state: "+err.Error(), http.StatusInternalServerError)
shared.WriteStateFetchError(w, err)
return
}
@@ -1020,39 +1014,11 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
nextEpochLookahead = true
}
epochStartSlot, err := slots.EpochStart(requestedEpoch)
st, err := s.Stater.StateByEpoch(ctx, requestedEpoch)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get start slot of epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
shared.WriteStateFetchError(w, err)
return
}
var st state.BeaconState
// if the requested epoch is new, use the head state and the next slot cache
if requestedEpoch < currentEpoch {
st, err = s.Stater.StateBySlot(ctx, epochStartSlot)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get state for slot %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
return
}
} else {
st, err = s.HeadFetcher.HeadState(ctx)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v ", err), http.StatusInternalServerError)
return
}
// Notice that even for Fulu requests for the next epoch, we are only advancing the state to the start of the current epoch.
if st.Slot() < epochStartSlot {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get head root: %v ", err), http.StatusInternalServerError)
return
}
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, epochStartSlot)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not process slots up to %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
return
}
}
}
var assignments map[primitives.ValidatorIndex][]primitives.Slot
if nextEpochLookahead {
@@ -1103,7 +1069,8 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
if !sortProposerDuties(w, duties) {
if err = sortProposerDuties(duties); err != nil {
httputil.HandleError(w, "Could not sort proposer duties: "+err.Error(), http.StatusInternalServerError)
return
}
@@ -1174,14 +1141,10 @@ func (s *Server) GetSyncCommitteeDuties(w http.ResponseWriter, r *http.Request)
}
startingEpoch := min(requestedEpoch, currentEpoch)
slot, err := slots.EpochStart(startingEpoch)
st, err := s.Stater.StateByEpoch(ctx, startingEpoch)
if err != nil {
httputil.HandleError(w, "Could not get sync committee slot: "+err.Error(), http.StatusInternalServerError)
return
}
st, err := s.Stater.State(ctx, []byte(strconv.FormatUint(uint64(slot), 10)))
if err != nil {
httputil.HandleError(w, "Could not get sync committee state: "+err.Error(), http.StatusInternalServerError)
shared.WriteStateFetchError(w, err)
return
}
@@ -1327,7 +1290,7 @@ func (s *Server) GetLiveness(w http.ResponseWriter, r *http.Request) {
}
st, err = s.Stater.StateBySlot(ctx, epochEnd)
if err != nil {
httputil.HandleError(w, "Could not get slot for requested epoch: "+err.Error(), http.StatusInternalServerError)
shared.WriteStateFetchError(w, err)
return
}
participation, err = st.CurrentEpochParticipation()
@@ -1447,22 +1410,20 @@ func syncCommitteeDutiesAndVals(
return duties, vals, nil
}
func sortProposerDuties(w http.ResponseWriter, duties []*structs.ProposerDuty) bool {
ok := true
func sortProposerDuties(duties []*structs.ProposerDuty) error {
var err error
sort.Slice(duties, func(i, j int) bool {
si, err := strconv.ParseUint(duties[i].Slot, 10, 64)
if err != nil {
httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
ok = false
si, parseErr := strconv.ParseUint(duties[i].Slot, 10, 64)
if parseErr != nil {
err = errors.Wrap(parseErr, "could not parse slot")
return false
}
sj, err := strconv.ParseUint(duties[j].Slot, 10, 64)
if err != nil {
httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
ok = false
sj, parseErr := strconv.ParseUint(duties[j].Slot, 10, 64)
if parseErr != nil {
err = errors.Wrap(parseErr, "could not parse slot")
return false
}
return si < sj
})
return ok
return err
}

View File

@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/synccommittee"
p2pmock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/lookup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
@@ -2006,6 +2007,7 @@ func TestGetAttesterDuties(t *testing.T) {
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chain,
HeadFetcher: chain,
BeaconDB: db,
}
@@ -2184,6 +2186,7 @@ func TestGetAttesterDuties(t *testing.T) {
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{0: bs}},
TimeFetcher: chain,
OptimisticModeFetcher: chain,
HeadFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
BeaconDB: db,
}
@@ -2224,6 +2227,62 @@ func TestGetAttesterDuties(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusServiceUnavailable, e.Code)
})
t.Run("state not found returns 404", func(t *testing.T) {
chainSlot := primitives.Slot(0)
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
}
stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
s := &Server{
Stater: &testutil.MockStater{CustomError: &stateNotFoundErr},
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chain,
HeadFetcher: chain,
}
var body bytes.Buffer
_, err = body.WriteString("[\"0\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterDuties(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.StringContains(t, "State not found", e.Message)
})
t.Run("state fetch error returns 500", func(t *testing.T) {
chainSlot := primitives.Slot(0)
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
}
s := &Server{
Stater: &testutil.MockStater{CustomError: errors.New("internal error")},
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chain,
HeadFetcher: chain,
}
var body bytes.Buffer
_, err = body.WriteString("[\"0\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterDuties(writer, request)
assert.Equal(t, http.StatusInternalServerError, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusInternalServerError, e.Code)
})
}
func TestGetProposerDuties(t *testing.T) {
@@ -2427,6 +2486,60 @@ func TestGetProposerDuties(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusServiceUnavailable, e.Code)
})
t.Run("state not found returns 404", func(t *testing.T) {
bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
require.NoError(t, err)
chainSlot := primitives.Slot(0)
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
}
stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
s := &Server{
Stater: &testutil.MockStater{CustomError: &stateNotFoundErr},
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chain,
HeadFetcher: chain,
}
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetProposerDuties(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.StringContains(t, "State not found", e.Message)
})
t.Run("state fetch error returns 500", func(t *testing.T) {
bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
require.NoError(t, err)
chainSlot := primitives.Slot(0)
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
}
s := &Server{
Stater: &testutil.MockStater{CustomError: errors.New("internal error")},
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chain,
HeadFetcher: chain,
}
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetProposerDuties(writer, request)
assert.Equal(t, http.StatusInternalServerError, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusInternalServerError, e.Code)
})
}
func TestGetSyncCommitteeDuties(t *testing.T) {
@@ -2457,7 +2570,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
}
require.NoError(t, st.SetNextSyncCommittee(nextCommittee))
mockChainService := &mockChain.ChainService{Genesis: genesisTime}
mockChainService := &mockChain.ChainService{Genesis: genesisTime, State: st}
s := &Server{
Stater: &testutil.MockStater{BeaconState: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -2648,7 +2761,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
return newSyncPeriodSt
}
}
mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot}
mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot, State: newSyncPeriodSt}
s := &Server{
Stater: &testutil.MockStater{BeaconState: stateFetchFn(newSyncPeriodStartSlot)},
SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -2729,8 +2842,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
slot, err := slots.EpochStart(1)
require.NoError(t, err)
st2, err := util.NewBeaconStateBellatrix()
require.NoError(t, err)
st2 := st.Copy()
require.NoError(t, st2.SetSlot(slot))
mockChainService := &mockChain.ChainService{
@@ -2744,7 +2856,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
State: st2,
}
s := &Server{
Stater: &testutil.MockStater{BeaconState: st},
Stater: &testutil.MockStater{BeaconState: st2},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: mockChainService,
HeadFetcher: mockChainService,
@@ -2789,6 +2901,62 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusServiceUnavailable, e.Code)
})
t.Run("state not found returns 404", func(t *testing.T) {
slot := 2 * params.BeaconConfig().SlotsPerEpoch
chainService := &mockChain.ChainService{
Slot: &slot,
}
stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
s := &Server{
Stater: &testutil.MockStater{CustomError: &stateNotFoundErr},
TimeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chainService,
HeadFetcher: chainService,
}
var body bytes.Buffer
_, err := body.WriteString("[\"1\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
request.SetPathValue("epoch", "1")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetSyncCommitteeDuties(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.StringContains(t, "State not found", e.Message)
})
t.Run("state fetch error returns 500", func(t *testing.T) {
slot := 2 * params.BeaconConfig().SlotsPerEpoch
chainService := &mockChain.ChainService{
Slot: &slot,
}
s := &Server{
Stater: &testutil.MockStater{CustomError: errors.New("internal error")},
TimeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OptimisticModeFetcher: chainService,
HeadFetcher: chainService,
}
var body bytes.Buffer
_, err := body.WriteString("[\"1\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
request.SetPathValue("epoch", "1")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetSyncCommitteeDuties(writer, request)
assert.Equal(t, http.StatusInternalServerError, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusInternalServerError, e.Code)
})
}
func TestPrepareBeaconProposer(t *testing.T) {

View File

@@ -11,6 +11,7 @@ go_library(
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/rpc/core:go_default_library",

View File

@@ -450,7 +450,7 @@ func (p *BeaconDbBlocker) blobsDataFromStoredDataColumns(root [fieldparams.RootL
if count < peerdas.MinimumColumnCountToReconstruct() {
// There is no way to reconstruct the data columns.
return nil, &core.RpcError{
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--%s` flag to ensure this call to succeed, or retry later if it is already the case", flags.Supernode.Name),
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--%s` flag to ensure this call to succeed", flags.SemiSupernode.Name),
Reason: core.NotFound,
}
}
@@ -628,6 +628,8 @@ func (p *BeaconDbBlocker) neededDataColumnSidecars(root [fieldparams.RootLength]
// - no block, 404
// - block exists, before Fulu fork, 400 (data columns are not supported before Fulu fork)
func (p *BeaconDbBlocker) DataColumns(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
const numberOfColumns = fieldparams.NumberOfColumns
// Check for genesis block first (not supported for data columns)
if id == "genesis" {
return nil, &core.RpcError{Err: errors.New("data columns are not supported for Phase 0 fork"), Reason: core.BadRequest}
@@ -681,7 +683,6 @@ func (p *BeaconDbBlocker) DataColumns(ctx context.Context, id string, indices []
}
} else {
// Validate and convert indices
numberOfColumns := params.BeaconConfig().NumberOfColumns
for _, index := range indices {
if index < 0 || uint64(index) >= numberOfColumns {
return nil, &core.RpcError{

View File

@@ -8,6 +8,7 @@ import (
"strings"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
@@ -82,8 +83,8 @@ type StateRootNotFoundError struct {
}
// NewStateRootNotFoundError creates a new error instance.
func NewStateRootNotFoundError(stateRootsSize int) StateNotFoundError {
return StateNotFoundError{
func NewStateRootNotFoundError(stateRootsSize int) StateRootNotFoundError {
return StateRootNotFoundError{
message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
}
}
@@ -98,6 +99,7 @@ type Stater interface {
State(ctx context.Context, id []byte) (state.BeaconState, error)
StateRoot(ctx context.Context, id []byte) ([]byte, error)
StateBySlot(ctx context.Context, slot primitives.Slot) (state.BeaconState, error)
StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error)
}
// BeaconDbStater is an implementation of Stater. It retrieves states from the beacon chain database.
@@ -267,6 +269,46 @@ func (p *BeaconDbStater) StateBySlot(ctx context.Context, target primitives.Slot
return st, nil
}
// StateByEpoch returns the state for the start of the requested epoch.
// For current or next epoch, it uses the head state and next slot cache for efficiency.
// For past epochs, it replays blocks from the most recent canonical state.
func (p *BeaconDbStater) StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "statefetcher.StateByEpoch")
defer span.End()
targetSlot, err := slots.EpochStart(epoch)
if err != nil {
return nil, errors.Wrap(err, "could not get epoch start slot")
}
currentSlot := p.GenesisTimeFetcher.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
// For past epochs, use the replay mechanism
if epoch < currentEpoch {
return p.StateBySlot(ctx, targetSlot)
}
// For current or next epoch, use head state + next slot cache (much faster)
headState, err := p.ChainInfoFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}
// If head state is already at or past the target slot, return it
if headState.Slot() >= targetSlot {
return headState, nil
}
// Process slots using the next slot cache
headRoot := p.ChainInfoFetcher.CachedHeadRoot()
st, err := transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot[:], targetSlot)
if err != nil {
return nil, errors.Wrapf(err, "could not process slots up to %d", targetSlot)
}
return st, nil
}
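A minimal sketch of how a caller might use the new StateByEpoch (illustrative only; the duties endpoints are the real callers and are not shown in this hunk). The point is that the fast head/next-slot-cache path is chosen automatically for the current or next epoch, while older epochs fall back to replay:

```go
// Resolve the state for the epoch currently in progress.
epoch := slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())
st, err := p.StateByEpoch(ctx, epoch)
if err != nil {
	return errors.Wrap(err, "could not get state for epoch")
}
// st is now at (or past) the epoch's start slot and can back duty computation.
```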
func (p *BeaconDbStater) headStateRoot(ctx context.Context) ([]byte, error) {
b, err := p.ChainInfoFetcher.HeadBlock(ctx)
if err != nil {

View File

@@ -444,3 +444,111 @@ func TestStateBySlot_AfterHeadSlot(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, primitives.Slot(101), st.Slot())
}
func TestStateByEpoch(t *testing.T) {
ctx := t.Context()
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
t.Run("current epoch uses head state", func(t *testing.T) {
// Head is at slot 5 (epoch 0), requesting epoch 0
headSlot := primitives.Slot(5)
headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
require.NoError(t, err)
currentSlot := headSlot
mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
st, err := p.StateByEpoch(ctx, 0)
require.NoError(t, err)
// Should return head state since it's already past epoch start
assert.Equal(t, headSlot, st.Slot())
})
t.Run("current epoch processes slots to epoch start", func(t *testing.T) {
// Head is at slot 5 (epoch 0), requesting epoch 1
// Current slot is 32 (epoch 1), so epoch 1 is current epoch
headSlot := primitives.Slot(5)
headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
require.NoError(t, err)
currentSlot := slotsPerEpoch // slot 32, epoch 1
mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
// In real usage, the transition package handles this properly
_, err = p.StateByEpoch(ctx, 1)
// The error is expected since we don't have a fully initialized beacon state
// that can process slots (missing committees, etc.)
assert.NotNil(t, err)
})
t.Run("past epoch uses replay", func(t *testing.T) {
// Head is at epoch 2, requesting epoch 0 (past)
headSlot := slotsPerEpoch * 2 // slot 64, epoch 2
headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
require.NoError(t, err)
pastEpochSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: 0})
require.NoError(t, err)
currentSlot := headSlot
mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
mockReplayer := mockstategen.NewReplayerBuilder()
mockReplayer.SetMockStateForSlot(pastEpochSt, 0)
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock, ReplayerBuilder: mockReplayer}
st, err := p.StateByEpoch(ctx, 0)
require.NoError(t, err)
assert.Equal(t, primitives.Slot(0), st.Slot())
})
t.Run("next epoch uses head state path", func(t *testing.T) {
// Head is at slot 30 (epoch 0), requesting epoch 1 (next)
// Current slot is 30 (epoch 0), so epoch 1 is next epoch
headSlot := primitives.Slot(30)
headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
require.NoError(t, err)
currentSlot := headSlot
mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
_, err = p.StateByEpoch(ctx, 1)
// The error is expected since we don't have a fully initialized beacon state
assert.NotNil(t, err)
})
t.Run("head state already at target slot returns immediately", func(t *testing.T) {
// Head is at slot 32 (epoch 1 start), requesting epoch 1
headSlot := slotsPerEpoch // slot 32
headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
require.NoError(t, err)
currentSlot := headSlot
mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
st, err := p.StateByEpoch(ctx, 1)
require.NoError(t, err)
assert.Equal(t, headSlot, st.Slot())
})
t.Run("head state past target slot returns head state", func(t *testing.T) {
// Head is at slot 40, requesting epoch 1 (starts at slot 32)
headSlot := primitives.Slot(40)
headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
require.NoError(t, err)
currentSlot := headSlot
mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
st, err := p.StateByEpoch(ctx, 1)
require.NoError(t, err)
// Returns head state since it's already >= epoch start
assert.Equal(t, headSlot, st.Slot())
})
}

View File

@@ -26,5 +26,6 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -6,6 +6,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// MockStater is a fake implementation of lookup.Stater.
@@ -14,6 +15,7 @@ type MockStater struct {
StateProviderFunc func(ctx context.Context, stateId []byte) (state.BeaconState, error)
BeaconStateRoot []byte
StatesBySlot map[primitives.Slot]state.BeaconState
StatesByEpoch map[primitives.Epoch]state.BeaconState
StatesByRoot map[[32]byte]state.BeaconState
CustomError error
}
@@ -43,3 +45,22 @@ func (m *MockStater) StateRoot(context.Context, []byte) ([]byte, error) {
func (m *MockStater) StateBySlot(_ context.Context, s primitives.Slot) (state.BeaconState, error) {
return m.StatesBySlot[s], nil
}
// StateByEpoch --
func (m *MockStater) StateByEpoch(_ context.Context, e primitives.Epoch) (state.BeaconState, error) {
if m.CustomError != nil {
return nil, m.CustomError
}
if m.StatesByEpoch != nil {
return m.StatesByEpoch[e], nil
}
// Fall back to StatesBySlot if StatesByEpoch is not set
slot, err := slots.EpochStart(e)
if err != nil {
return nil, err
}
if m.StatesBySlot != nil {
return m.StatesBySlot[slot], nil
}
return m.BeaconState, nil
}

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/sirupsen/logrus"
@@ -37,76 +38,84 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
return nil
}
// Start at previous finalized slot, stop at current finalized slot (it will be handled in the next migration).
// If the slot is on archived point, save the state of that slot to the DB.
for slot := oldFSlot; slot < fSlot; slot++ {
// Calculate the first archived point slot >= oldFSlot (but > 0).
// This avoids iterating through every slot and only visits archived points directly.
var startSlot primitives.Slot
if oldFSlot == 0 {
startSlot = s.slotsPerArchivedPoint
} else {
// Round up to the next archived point
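// e.g. with slotsPerArchivedPoint = 2048 and oldFSlot = 5000 (illustrative values): (5000+2047)/2048*2048 = 6144, the next archived point at or above 5000.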
startSlot = (oldFSlot + s.slotsPerArchivedPoint - 1) / s.slotsPerArchivedPoint * s.slotsPerArchivedPoint
}
// Start at the first archived point after old finalized slot, stop before current finalized slot.
// Jump directly between archived points.
for slot := startSlot; slot < fSlot; slot += s.slotsPerArchivedPoint {
if ctx.Err() != nil {
return ctx.Err()
}
if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
if err != nil {
return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
}
var aRoot [32]byte
var aState state.BeaconState
// When the epoch boundary state is not in cache due to skip slot scenario,
// we have to regenerate the state which will represent epoch boundary.
// By finding the highest available block below epoch boundary slot, we
// generate the state for that block root.
if exists {
aRoot = cached.root
aState = cached.state
} else {
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
if err != nil {
return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
return err
}
var aRoot [32]byte
var aState state.BeaconState
// When the epoch boundary state is not in cache due to skip slot scenario,
// we have to regenerate the state which will represent epoch boundary.
// By finding the highest available block below epoch boundary slot, we
// generate the state for that block root.
if exists {
aRoot = cached.root
aState = cached.state
} else {
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
// Given the block has been finalized, the db should not have more than one block in a given slot.
// We should error out when this happens.
if len(roots) != 1 {
return errUnknownBlock
}
aRoot = roots[0]
// There's no need to generate the state if the state already exists in the DB.
// We can skip saving the state.
if !s.beaconDB.HasState(ctx, aRoot) {
aState, err = s.StateByRoot(ctx, aRoot)
if err != nil {
return err
}
// Given the block has been finalized, the db should not have more than one block in a given slot.
// We should error out when this happens.
if len(roots) != 1 {
return errUnknownBlock
}
aRoot = roots[0]
// There's no need to generate the state if the state already exists in the DB.
// We can skip saving the state.
if !s.beaconDB.HasState(ctx, aRoot) {
aState, err = s.StateByRoot(ctx, aRoot)
if err != nil {
return err
}
}
}
if s.beaconDB.HasState(ctx, aRoot) {
// If you are migrating a state and it's already part of the hot state cache saved to the db,
// you can just remove it from the hot state cache as it becomes redundant.
s.saveHotStateDB.lock.Lock()
roots := s.saveHotStateDB.blockRootsOfSavedStates
for i := range roots {
if aRoot == roots[i] {
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
// Break here is ok.
break
}
}
s.saveHotStateDB.lock.Unlock()
continue
}
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
return err
}
log.WithFields(
logrus.Fields{
"slot": aState.Slot(),
"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
}).Info("Saved state in DB")
}
if s.beaconDB.HasState(ctx, aRoot) {
// If you are migrating a state and it's already part of the hot state cache saved to the db,
// you can just remove it from the hot state cache as it becomes redundant.
s.saveHotStateDB.lock.Lock()
roots := s.saveHotStateDB.blockRootsOfSavedStates
for i := range roots {
if aRoot == roots[i] {
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
// Break here is ok.
break
}
}
s.saveHotStateDB.lock.Unlock()
continue
}
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
return err
}
log.WithFields(
logrus.Fields{
"slot": aState.Slot(),
"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
}).Info("Saved state in DB")
}
// Update finalized info in memory.

View File

@@ -7,6 +7,7 @@ go_library(
"block_batcher.go",
"context.go",
"custody.go",
"data_column_assignment.go",
"data_column_sidecars.go",
"data_columns_reconstruct.go",
"deadlines.go",
@@ -135,6 +136,7 @@ go_library(
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/host:go_default_library",
@@ -167,6 +169,7 @@ go_test(
"block_batcher_test.go",
"context_test.go",
"custody_test.go",
"data_column_assignment_test.go",
"data_column_sidecars_test.go",
"data_columns_reconstruct_test.go",
"decode_pubsub_test.go",

View File

@@ -6,17 +6,22 @@ go_library(
"batch.go",
"batcher.go",
"blobs.go",
"columns.go",
"error.go",
"fulu_transition.go",
"log.go",
"metrics.go",
"pool.go",
"service.go",
"status.go",
"verify.go",
"verify_column.go",
"worker.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/backfill",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
@@ -37,7 +42,6 @@ go_library(
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -51,19 +55,27 @@ go_test(
name = "go_default_test",
srcs = [
"batch_test.go",
"batcher_expiration_test.go",
"batcher_test.go",
"blobs_test.go",
"columns_test.go",
"fulu_transition_test.go",
"log_test.go",
"pool_test.go",
"service_test.go",
"status_test.go",
"verify_column_test.go",
"verify_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -85,5 +97,7 @@ go_test(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
],
)

View File

@@ -8,7 +8,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/libp2p/go-libp2p/core/peer"
@@ -16,9 +15,13 @@ import (
"github.com/sirupsen/logrus"
)
// ErrChainBroken indicates a backfill batch can't be imported to the db because it is not known to be the ancestor
// of the canonical chain.
var ErrChainBroken = errors.New("batch is not the ancestor of a known finalized root")
var errChainBroken = errors.New("batch is not the ancestor of a known finalized root")
// retryLogMod defines how often retryable errors are logged at debug level instead of trace.
const retryLogMod = 5
// retryDelay defines the delay between retry attempts for a batch.
const retryDelay = time.Second
type batchState int
@@ -30,16 +33,20 @@ func (s batchState) String() string {
return "init"
case batchSequenced:
return "sequenced"
case batchErrRetryable:
return "error_retryable"
case batchSyncBlobs:
return "sync_blobs"
case batchSyncColumns:
return "sync_columns"
case batchImportable:
return "importable"
case batchImportComplete:
return "import_complete"
case batchEndSequence:
return "end_sequence"
case batchBlobSync:
return "blob_sync"
case batchErrRetryable:
return "error_retryable"
case batchErrFatal:
return "error_fatal"
default:
return "unknown"
}
@@ -49,15 +56,15 @@ const (
batchNil batchState = iota
batchInit
batchSequenced
batchErrRetryable
batchBlobSync
batchSyncBlobs
batchSyncColumns
batchImportable
batchImportComplete
batchErrRetryable
batchErrFatal // if this is received in the main loop, the worker pool will be shut down.
batchEndSequence
)
var retryDelay = time.Second
type batchId string
type batch struct {
@@ -67,35 +74,52 @@ type batch struct {
retries int
retryAfter time.Time
begin primitives.Slot
end primitives.Slot // half-open interval, [begin, end), ie >= start, < end.
results verifiedROBlocks
end primitives.Slot // half-open interval, [begin, end), ie >= begin, < end.
blocks verifiedROBlocks
err error
state batchState
busy peer.ID
blockPid peer.ID
blobPid peer.ID
bs *blobSync
// `assignedPeer` is used by the worker pool to assign and unassign peer.IDs to serve requests for the current batch state.
// Depending on the state it will be copied to blockPeer, columns.peer, or blobs.peer.
assignedPeer peer.ID
blockPeer peer.ID
nextReqCols []uint64
blobs *blobSync
columns *columnSync
}
func (b batch) logFields() logrus.Fields {
f := map[string]any{
"batchId": b.id(),
"state": b.state.String(),
"scheduled": b.scheduled.String(),
"seq": b.seq,
"retries": b.retries,
"begin": b.begin,
"end": b.end,
"busyPid": b.busy,
"blockPid": b.blockPid,
"blobPid": b.blobPid,
"batchId": b.id(),
"state": b.state.String(),
"scheduled": b.scheduled.String(),
"seq": b.seq,
"retries": b.retries,
"retryAfter": b.retryAfter.String(),
"begin": b.begin,
"end": b.end,
"busyPid": b.assignedPeer,
"blockPid": b.blockPeer,
}
if b.blobs != nil {
f["blobPid"] = b.blobs.peer
}
if b.columns != nil {
f["colPid"] = b.columns.peer
}
if b.retries > 0 {
f["retryAfter"] = b.retryAfter.String()
}
if b.state == batchSyncColumns {
f["nextColumns"] = fmt.Sprintf("%v", b.nextReqCols)
}
if b.state == batchErrRetryable && b.blobs != nil {
f["blobsMissing"] = b.blobs.needed()
}
return f
}
// replaces returns true if `r` is a version of `b` that has been updated by a worker,
// meaning it should replace `b` in the batch sequencing queue.
func (b batch) replaces(r batch) bool {
if r.state == batchImportComplete {
return false
@@ -114,9 +138,9 @@ func (b batch) id() batchId {
}
func (b batch) ensureParent(expected [32]byte) error {
tail := b.results[len(b.results)-1]
tail := b.blocks[len(b.blocks)-1]
if tail.Root() != expected {
return errors.Wrapf(ErrChainBroken, "last parent_root=%#x, tail root=%#x", expected, tail.Root())
return errors.Wrapf(errChainBroken, "last parent_root=%#x, tail root=%#x", expected, tail.Root())
}
return nil
}
@@ -136,21 +160,15 @@ func (b batch) blobRequest() *eth.BlobSidecarsByRangeRequest {
}
}
func (b batch) withResults(results verifiedROBlocks, bs *blobSync) batch {
b.results = results
b.bs = bs
if bs.blobsNeeded() > 0 {
return b.withState(batchBlobSync)
func (b batch) transitionToNext() batch {
if len(b.blocks) == 0 {
return b.withState(batchSequenced)
}
return b.withState(batchImportable)
}
func (b batch) postBlobSync() batch {
if b.blobsNeeded() > 0 {
log.WithFields(b.logFields()).WithField("blobsMissing", b.blobsNeeded()).Error("Batch still missing blobs after downloading from peer")
b.bs = nil
b.results = []blocks.ROBlock{}
return b.withState(batchErrRetryable)
if len(b.columns.columnsNeeded()) > 0 {
return b.withState(batchSyncColumns)
}
if b.blobs != nil && b.blobs.needed() > 0 {
return b.withState(batchSyncBlobs)
}
return b.withState(batchImportable)
}
@@ -159,44 +177,89 @@ func (b batch) withState(s batchState) batch {
if s == batchSequenced {
b.scheduled = time.Now()
switch b.state {
case batchErrRetryable:
b.retries += 1
b.retryAfter = time.Now().Add(retryDelay)
log.WithFields(b.logFields()).Info("Sequencing batch for retry after delay")
case batchInit, batchNil:
b.firstScheduled = b.scheduled
}
}
if s == batchImportComplete {
backfillBatchTimeRoundtrip.Observe(float64(time.Since(b.firstScheduled).Milliseconds()))
log.WithFields(b.logFields()).Debug("Backfill batch imported")
}
b.state = s
b.seq += 1
return b
}
func (b batch) withPeer(p peer.ID) batch {
b.blockPid = p
backfillBatchTimeWaiting.Observe(float64(time.Since(b.scheduled).Milliseconds()))
return b
}
func (b batch) withRetryableError(err error) batch {
b.err = err
b.retries += 1
b.retryAfter = time.Now().Add(retryDelay)
msg := "Could not proceed with batch processing due to error"
logBase := log.WithFields(b.logFields()).WithError(err)
// Log at trace level to limit log noise,
// but escalate to debug level every nth attempt for batches that have some persistent issue.
if b.retries&retryLogMod != 0 {
logBase.Trace(msg)
} else {
logBase.Debug(msg)
}
return b.withState(batchErrRetryable)
}
func (b batch) blobsNeeded() int {
return b.bs.blobsNeeded()
func (b batch) withFatalError(err error) batch {
log.WithFields(b.logFields()).WithError(err).Error("Fatal batch processing error")
b.err = err
return b.withState(batchErrFatal)
}
func (b batch) blobResponseValidator() sync.BlobResponseValidation {
return b.bs.validateNext
func (b batch) withError(err error) batch {
if isRetryable(err) {
return b.withRetryableError(err)
}
return b.withFatalError(err)
}
func (b batch) availabilityStore() das.AvailabilityStore {
return b.bs.store
func (b batch) validatingColumnRequest(cb *columnBisector) (*validatingColumnRequest, error) {
req, err := b.columns.request(b.nextReqCols, columnRequestLimit)
if err != nil {
return nil, errors.Wrap(err, "columns request")
}
if req == nil {
return nil, nil
}
return &validatingColumnRequest{
req: req,
columnSync: b.columns,
bisector: cb,
}, nil
}
// resetToRetryColumns is called after a partial batch failure. It adds column indices back
// to the toDownload structure for any blocks where those columns failed, and resets the bisector state.
// Note that this method will also prune any columns that have expired, meaning we no longer need them
// per spec and/or our backfill & retention settings.
func resetToRetryColumns(b batch, needs das.CurrentNeeds) batch {
// return the given batch as-is if it isn't in a state that this func should handle.
if b.columns == nil || b.columns.bisector == nil || len(b.columns.bisector.errs) == 0 {
return b.transitionToNext()
}
pruned := make(map[[32]byte]struct{})
b.columns.pruneExpired(needs, pruned)
// Clear out failed column state in the bisector and add the columns back to toDownload.
bisector := b.columns.bisector
roots := bisector.failingRoots()
// Add all the failed columns back to the toDownload structure and reset the bisector state.
for _, root := range roots {
if _, rm := pruned[root]; rm {
continue
}
bc := b.columns.toDownload[root]
bc.remaining.Merge(bisector.failuresFor(root))
}
b.columns.bisector.reset()
return b.transitionToNext()
}
var batchBlockUntil = func(ctx context.Context, untilRetry time.Duration, b batch) error {
@@ -223,6 +286,26 @@ func (b batch) waitUntilReady(ctx context.Context) error {
return nil
}
func (b batch) workComplete() bool {
return b.state == batchImportable
}
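// expired reports whether the batch's last slot (end-1, since end is an exclusive bound) has fallen below the
// block retention window; e.g. with needs.Block.Begin = 250 (illustrative value), a batch [200, 250) is expired
// because At(249) is false.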
func (b batch) expired(needs das.CurrentNeeds) bool {
if !needs.Block.At(b.end - 1) {
log.WithFields(b.logFields()).WithField("retentionStartSlot", needs.Block.Begin).Debug("Batch outside retention window")
return true
}
return false
}
func (b batch) selectPeer(picker *sync.PeerPicker, busy map[peer.ID]bool) (peer.ID, []uint64, error) {
if b.state == batchSyncColumns {
return picker.ForColumns(b.columns.columnsNeeded(), busy)
}
peer, err := picker.ForBlocks(busy)
return peer, nil, err
}
func sortBatchDesc(bb []batch) {
sort.Slice(bb, func(i, j int) bool {
return bb[i].end > bb[j].end

View File

@@ -24,17 +24,16 @@ func TestSortBatchDesc(t *testing.T) {
}
func TestWaitUntilReady(t *testing.T) {
b := batch{}.withState(batchErrRetryable)
require.Equal(t, time.Time{}, b.retryAfter)
var got time.Duration
wur := batchBlockUntil
var got time.Duration
var errDerp = errors.New("derp")
batchBlockUntil = func(_ context.Context, ur time.Duration, _ batch) error {
got = ur
return errDerp
}
// retries counter and timestamp are set when we mark the batch for sequencing, if it is in the retry state
b = b.withState(batchSequenced)
b := batch{}.withRetryableError(errors.New("test error"))
require.ErrorIs(t, b.waitUntilReady(t.Context()), errDerp)
require.Equal(t, true, retryDelay-time.Until(b.retryAfter) < time.Millisecond)
require.Equal(t, true, got < retryDelay && got > retryDelay-time.Millisecond)

View File

@@ -1,6 +1,7 @@
package backfill
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/pkg/errors"
)
@@ -10,8 +11,9 @@ var errEndSequence = errors.New("sequence has terminated, no more backfill batch
var errCannotDecreaseMinimum = errors.New("the minimum backfill slot can only be increased, not decreased")
type batchSequencer struct {
batcher batcher
seq []batch
batcher batcher
seq []batch
currentNeeds func() das.CurrentNeeds
}
// sequence() is meant as a verb "arrange in a particular order".
@@ -19,32 +21,38 @@ type batchSequencer struct {
// in its internal view. sequence relies on update() for updates to its view of the
// batches it has previously sequenced.
func (c *batchSequencer) sequence() ([]batch, error) {
needs := c.currentNeeds()
s := make([]batch, 0)
// batch start slots are in descending order, c.seq[n].begin == c.seq[n+1].end
for i := range c.seq {
switch c.seq[i].state {
case batchInit, batchErrRetryable:
c.seq[i] = c.seq[i].withState(batchSequenced)
s = append(s, c.seq[i])
case batchNil:
if c.seq[i].state == batchNil {
// batchNil is the zero value of the batch type.
// This case means that we are initializing a batch that was created by the
// initial allocation of the seq slice, so the batcher needs to compute its bounds.
var b batch
if i == 0 {
// The first item in the list is a special case, subsequent items are initialized
// relative to the preceding batches.
b = c.batcher.before(c.batcher.max)
c.seq[i] = c.batcher.before(c.batcher.max)
} else {
b = c.batcher.beforeBatch(c.seq[i-1])
c.seq[i] = c.batcher.beforeBatch(c.seq[i-1])
}
c.seq[i] = b.withState(batchSequenced)
s = append(s, c.seq[i])
case batchEndSequence:
if len(s) == 0 {
}
if c.seq[i].state == batchInit || c.seq[i].state == batchErrRetryable {
// This means the batch has fallen outside the retention window, so we no longer need to sync it.
// Since we always create batches from high to low, we can assume we've already created the
// descendant batches of the batch we're dropping, so there won't be another batch depending on
// this one - we can stop adding batches and put this one in the batchEndSequence state.
// When all batches are in batchEndSequence, the worker pool spins down and marks backfill complete.
if c.seq[i].expired(needs) {
c.seq[i] = c.seq[i].withState(batchEndSequence)
} else {
c.seq[i] = c.seq[i].withState(batchSequenced)
s = append(s, c.seq[i])
continue
}
default:
}
if c.seq[i].state == batchEndSequence && len(s) == 0 {
s = append(s, c.seq[i])
continue
}
}
@@ -62,6 +70,7 @@ func (c *batchSequencer) sequence() ([]batch, error) {
// seq with new batches that are ready to be worked on.
func (c *batchSequencer) update(b batch) {
done := 0
needs := c.currentNeeds()
for i := 0; i < len(c.seq); i++ {
if b.replaces(c.seq[i]) {
c.seq[i] = b
@@ -73,16 +82,23 @@ func (c *batchSequencer) update(b batch) {
done += 1
continue
}
if c.seq[i].expired(needs) {
c.seq[i] = c.seq[i].withState(batchEndSequence)
done += 1
continue
}
// Move the unfinished batches to overwrite the finished ones.
// eg consider [a,b,c,d,e] where a,b are done
// when i==2, done==2 (since done was incremented for a and b)
// so we want to copy c to a, then on i=3, d to b, then on i=4 e to c.
c.seq[i-done] = c.seq[i]
}
if done == 1 && len(c.seq) == 1 {
if done == len(c.seq) {
c.seq[0] = c.batcher.beforeBatch(c.seq[0])
return
}
// Overwrite the moved batches with the next ones in the sequence.
// Continuing the example in the comment above, len(c.seq)==5, done=2, so i=3.
// We want to replace index 3 with the batch that should be processed after index 2,
@@ -113,18 +129,6 @@ func (c *batchSequencer) importable() []batch {
return imp
}
// moveMinimum enables the backfill service to change the slot where the batcher will start replying with
// batch state batchEndSequence (signaling that no new batches will be produced). This is done in response to
// epochs advancing, which shrinks the gap between <checkpoint slot> and <current slot>-MIN_EPOCHS_FOR_BLOCK_REQUESTS,
// allowing the node to download a smaller number of blocks.
func (c *batchSequencer) moveMinimum(min primitives.Slot) error {
if min < c.batcher.min {
return errCannotDecreaseMinimum
}
c.batcher.min = min
return nil
}
// countWithState provides a view into how many batches are in a particular state
// to be used for logging or metrics purposes.
func (c *batchSequencer) countWithState(s batchState) int {
@@ -158,23 +162,24 @@ func (c *batchSequencer) numTodo() int {
return todo
}
func newBatchSequencer(seqLen int, min, max, size primitives.Slot) *batchSequencer {
b := batcher{min: min, max: max, size: size}
func newBatchSequencer(seqLen int, max, size primitives.Slot, needsCb func() das.CurrentNeeds) *batchSequencer {
b := batcher{currentNeeds: needsCb, max: max, size: size}
seq := make([]batch, seqLen)
return &batchSequencer{batcher: b, seq: seq}
return &batchSequencer{batcher: b, seq: seq, currentNeeds: needsCb}
}
type batcher struct {
min primitives.Slot
max primitives.Slot
size primitives.Slot
currentNeeds func() das.CurrentNeeds
max primitives.Slot
size primitives.Slot
}
func (r batcher) remaining(upTo primitives.Slot) int {
if r.min >= upTo {
needs := r.currentNeeds()
if !needs.Block.At(upTo) {
return 0
}
delta := upTo - r.min
delta := upTo - needs.Block.Begin
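// e.g. with needs.Block.Begin = 100, upTo = 230, size = 50 (illustrative values): delta = 130, so remaining = 130/50 + 1 = 3.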
if delta%r.size != 0 {
return int(delta/r.size) + 1
}
@@ -186,13 +191,18 @@ func (r batcher) beforeBatch(upTo batch) batch {
}
func (r batcher) before(upTo primitives.Slot) batch {
// upTo is an exclusive upper bound. Requesting a batch before the lower bound of backfill signals the end of the
// backfill process.
if upTo <= r.min {
// upTo is an exclusive upper bound. If we do not need the block at the upTo slot,
// we don't have anything left to sync, signaling the end of the backfill process.
needs := r.currentNeeds()
// The upper bound is exclusive, so when the previous batch's beginning sits exactly at the
// start of the retention window, that batch already reaches the window start and there is
// nothing left below it to request. In that case we've hit the end of the sync sequence.
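// e.g. with needs.Block.Begin = 200 and upTo = 200 (illustrative values): a batch ending at 200 would only
// cover slots below the retention window, so we return the end-of-sequence sentinel instead.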
if !needs.Block.At(upTo) || needs.Block.Begin == upTo {
return batch{begin: upTo, end: upTo, state: batchEndSequence}
}
begin := r.min
if upTo > r.size+r.min {
begin := needs.Block.Begin
if upTo > r.size+needs.Block.Begin {
begin = upTo - r.size
}

View File

@@ -0,0 +1,831 @@
package backfill
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
// dynamicNeeds provides a mutable currentNeeds callback for testing scenarios
// where the retention window changes over time.
type dynamicNeeds struct {
blockBegin primitives.Slot
blockEnd primitives.Slot
blobBegin primitives.Slot
blobEnd primitives.Slot
colBegin primitives.Slot
colEnd primitives.Slot
}
func newDynamicNeeds(blockBegin, blockEnd primitives.Slot) *dynamicNeeds {
return &dynamicNeeds{
blockBegin: blockBegin,
blockEnd: blockEnd,
blobBegin: blockBegin,
blobEnd: blockEnd,
colBegin: blockBegin,
colEnd: blockEnd,
}
}
func (d *dynamicNeeds) get() das.CurrentNeeds {
return das.CurrentNeeds{
Block: das.NeedSpan{Begin: d.blockBegin, End: d.blockEnd},
Blob: das.NeedSpan{Begin: d.blobBegin, End: d.blobEnd},
Col: das.NeedSpan{Begin: d.colBegin, End: d.colEnd},
}
}
// advance moves the retention window forward by the given number of slots.
func (d *dynamicNeeds) advance(slots primitives.Slot) {
d.blockBegin += slots
d.blockEnd += slots
d.blobBegin += slots
d.blobEnd += slots
d.colBegin += slots
d.colEnd += slots
}
// setBlockBegin sets only the block retention start slot.
func (d *dynamicNeeds) setBlockBegin(begin primitives.Slot) {
d.blockBegin = begin
}
// ============================================================================
// Category 1: Basic Expiration During sequence()
// ============================================================================
func TestSequenceExpiration_SingleBatchExpires_Init(t *testing.T) {
// Single batch in batchInit expires when needs.block.begin moves past it
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(1, 200, 50, dn.get)
// Initialize batch: [150, 200)
seq.seq[0] = batch{begin: 150, end: 200, state: batchInit}
// Move retention window past the batch
dn.setBlockBegin(200)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchEndSequence, got[0].state)
}
func TestSequenceExpiration_SingleBatchExpires_ErrRetryable(t *testing.T) {
// Single batch in batchErrRetryable expires when needs change
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(1, 200, 50, dn.get)
seq.seq[0] = batch{begin: 150, end: 200, state: batchErrRetryable}
// Move retention window past the batch
dn.setBlockBegin(200)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchEndSequence, got[0].state)
}
func TestSequenceExpiration_MultipleBatchesExpire_Partial(t *testing.T) {
// 4 batches, 2 expire when needs change
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(4, 400, 50, dn.get)
// Batches: [350,400), [300,350), [250,300), [200,250)
seq.seq[0] = batch{begin: 350, end: 400, state: batchInit}
seq.seq[1] = batch{begin: 300, end: 350, state: batchInit}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[3] = batch{begin: 200, end: 250, state: batchInit}
// Move retention to 300 - batches [250,300) and [200,250) should expire
dn.setBlockBegin(300)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 2, len(got))
// First two batches should be sequenced (not expired)
require.Equal(t, batchSequenced, got[0].state)
require.Equal(t, primitives.Slot(350), got[0].begin)
require.Equal(t, batchSequenced, got[1].state)
require.Equal(t, primitives.Slot(300), got[1].begin)
// Verify expired batches are marked batchEndSequence in seq
require.Equal(t, batchEndSequence, seq.seq[2].state)
require.Equal(t, batchEndSequence, seq.seq[3].state)
}
func TestSequenceExpiration_AllBatchesExpire(t *testing.T) {
// All batches expire, returns one batchEndSequence
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
// Move retention past all batches
dn.setBlockBegin(350)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchEndSequence, got[0].state)
}
func TestSequenceExpiration_BatchAtExactBoundary(t *testing.T) {
// Batch with end == needs.block.begin should expire
// Because expired() checks !needs.block.at(b.end - 1)
// If batch.end = 200 and needs.block.begin = 200, then at(199) = false → expired
dn := newDynamicNeeds(200, 500)
seq := newBatchSequencer(1, 250, 50, dn.get)
// Batch [150, 200) - end is exactly at retention start
seq.seq[0] = batch{begin: 150, end: 200, state: batchInit}
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchEndSequence, got[0].state)
}
func TestSequenceExpiration_BatchJustInsideBoundary(t *testing.T) {
// Batch with end == needs.block.begin + 1 should NOT expire
// at(200) with begin=200 returns true
dn := newDynamicNeeds(200, 500)
seq := newBatchSequencer(1, 251, 50, dn.get)
// Batch [200, 251) - end-1 = 250 which is inside [200, 500)
seq.seq[0] = batch{begin: 200, end: 251, state: batchInit}
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchSequenced, got[0].state)
}
// ============================================================================
// Category 2: Expiration During update()
// ============================================================================
func TestUpdateExpiration_UpdateCausesExpiration(t *testing.T) {
// Update a batch while needs have changed, causing other batches to expire
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchSequenced}
seq.seq[1] = batch{begin: 200, end: 250, state: batchSequenced}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
// Move retention window
dn.setBlockBegin(200)
seq.batcher.currentNeeds = dn.get
// Update first batch (should still be valid)
updated := batch{begin: 250, end: 300, state: batchImportable, seq: 1}
seq.update(updated)
// First batch should be updated
require.Equal(t, batchImportable, seq.seq[0].state)
// Third batch should have expired during update
require.Equal(t, batchEndSequence, seq.seq[2].state)
}
func TestUpdateExpiration_MultipleExpireDuringUpdate(t *testing.T) {
// Several batches expire when needs advance significantly
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(4, 400, 50, dn.get)
seq.seq[0] = batch{begin: 350, end: 400, state: batchSequenced}
seq.seq[1] = batch{begin: 300, end: 350, state: batchSequenced}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[3] = batch{begin: 200, end: 250, state: batchInit}
// Move retention to expire last two batches
dn.setBlockBegin(300)
seq.batcher.currentNeeds = dn.get
// Update first batch
updated := batch{begin: 350, end: 400, state: batchImportable, seq: 1}
seq.update(updated)
// Check that expired batches are marked
require.Equal(t, batchEndSequence, seq.seq[2].state)
require.Equal(t, batchEndSequence, seq.seq[3].state)
}
func TestUpdateExpiration_UpdateCompleteWhileExpiring(t *testing.T) {
// Mark batch complete while other batches expire
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchImportable}
seq.seq[1] = batch{begin: 200, end: 250, state: batchSequenced}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
// Move retention to expire last batch
dn.setBlockBegin(200)
seq.batcher.currentNeeds = dn.get
// Mark first batch complete
completed := batch{begin: 250, end: 300, state: batchImportComplete, seq: 1}
seq.update(completed)
// Completed batch removed, third batch should have expired
// Check that we still have 3 batches (shifted + new ones added)
require.Equal(t, 3, len(seq.seq))
// The batch that was at index 2 should now be expired
foundExpired := false
for _, b := range seq.seq {
if b.state == batchEndSequence {
foundExpired = true
break
}
}
require.Equal(t, true, foundExpired, "should have an expired batch")
}
func TestUpdateExpiration_ExpiredBatchNotShiftedIncorrectly(t *testing.T) {
// Verify expired batches don't get incorrectly shifted
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchImportComplete}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
// Move retention to expire all remaining init batches
dn.setBlockBegin(250)
seq.batcher.currentNeeds = dn.get
// Update with the completed batch
completed := batch{begin: 250, end: 300, state: batchImportComplete, seq: 1}
seq.update(completed)
// Verify sequence integrity
require.Equal(t, 3, len(seq.seq))
}
func TestUpdateExpiration_NewBatchCreatedRespectsNeeds(t *testing.T) {
// When new batch is created after expiration, it should respect current needs
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(2, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchImportable}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
// Mark first batch complete to trigger new batch creation
completed := batch{begin: 250, end: 300, state: batchImportComplete, seq: 1}
seq.update(completed)
// New batch should be created - verify it respects the needs
require.Equal(t, 2, len(seq.seq))
// New batch should have proper bounds
for _, b := range seq.seq {
if b.state == batchNil {
continue
}
require.Equal(t, true, b.begin < b.end, "batch bounds should be valid")
}
}
// ============================================================================
// Category 3: Progressive Slot Advancement
// ============================================================================
func TestProgressiveAdvancement_SlotAdvancesGradually(t *testing.T) {
// Simulate gradual slot advancement with batches expiring one by one
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(4, 400, 50, dn.get)
// Initialize batches
seq.seq[0] = batch{begin: 350, end: 400, state: batchInit}
seq.seq[1] = batch{begin: 300, end: 350, state: batchInit}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[3] = batch{begin: 200, end: 250, state: batchInit}
// First sequence - all should be returned
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 4, len(got))
// Advance by 50 slots - last batch should expire
dn.setBlockBegin(250)
seq.batcher.currentNeeds = dn.get
// Mark first batch importable and update
seq.seq[0].state = batchImportable
seq.update(seq.seq[0])
// Last batch should now be expired
require.Equal(t, batchEndSequence, seq.seq[3].state)
// Advance again
dn.setBlockBegin(300)
seq.batcher.currentNeeds = dn.get
seq.seq[1].state = batchImportable
seq.update(seq.seq[1])
// Count expired batches
expiredCount := 0
for _, b := range seq.seq {
if b.state == batchEndSequence {
expiredCount++
}
}
require.Equal(t, true, expiredCount >= 2, "expected at least 2 expired batches")
}
func TestProgressiveAdvancement_SlotAdvancesInBursts(t *testing.T) {
// Large jump in slots causes multiple batches to expire at once
dn := newDynamicNeeds(100, 600)
seq := newBatchSequencer(6, 500, 50, dn.get)
// Initialize batches: [450,500), [400,450), [350,400), [300,350), [250,300), [200,250)
for i := range 6 {
seq.seq[i] = batch{
begin: primitives.Slot(500 - (i+1)*50),
end: primitives.Slot(500 - i*50),
state: batchInit,
}
}
// Large jump - expire 4 batches at once
dn.setBlockBegin(400)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
// Should have 2 non-expired batches returned
nonExpired := 0
for _, b := range got {
if b.state == batchSequenced {
nonExpired++
}
}
require.Equal(t, 2, nonExpired)
}
func TestProgressiveAdvancement_WorkerProcessingDuringAdvancement(t *testing.T) {
// Batches in various processing states while needs advance
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(4, 400, 50, dn.get)
seq.seq[0] = batch{begin: 350, end: 400, state: batchSyncBlobs}
seq.seq[1] = batch{begin: 300, end: 350, state: batchSyncColumns}
seq.seq[2] = batch{begin: 250, end: 300, state: batchSequenced}
seq.seq[3] = batch{begin: 200, end: 250, state: batchInit}
// Advance past last batch
dn.setBlockBegin(250)
seq.batcher.currentNeeds = dn.get
// Call sequence - only batchInit should transition
got, err := seq.sequence()
require.NoError(t, err)
// batchInit batch should have expired
require.Equal(t, batchEndSequence, seq.seq[3].state)
// Batches in other states should not be returned by sequence (already dispatched)
for _, b := range got {
require.NotEqual(t, batchSyncBlobs, b.state)
require.NotEqual(t, batchSyncColumns, b.state)
}
}
func TestProgressiveAdvancement_CompleteBeforeExpiration(t *testing.T) {
// Batch completes just before it would expire
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(2, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchSequenced}
seq.seq[1] = batch{begin: 200, end: 250, state: batchImportable}
// Complete the second batch BEFORE advancing needs
completed := batch{begin: 200, end: 250, state: batchImportComplete, seq: 1}
seq.update(completed)
// Now advance needs past where the batch was
dn.setBlockBegin(250)
seq.batcher.currentNeeds = dn.get
// The completed batch should have been removed successfully
// Sequence should work normally
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, true, len(got) >= 1, "expected at least 1 batch")
}
// ============================================================================
// Category 4: Batch State Transitions Under Expiration
// ============================================================================
func TestStateExpiration_NilBatchNotExpired(t *testing.T) {
// batchNil should be initialized, not expired
dn := newDynamicNeeds(200, 500)
seq := newBatchSequencer(2, 300, 50, dn.get)
// Leave seq[0] as batchNil (zero value)
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
got, err := seq.sequence()
require.NoError(t, err)
// batchNil should have been initialized and sequenced
foundSequenced := false
for _, b := range got {
if b.state == batchSequenced {
foundSequenced = true
}
}
require.Equal(t, true, foundSequenced, "expected at least one sequenced batch")
}
func TestStateExpiration_InitBatchExpires(t *testing.T) {
// batchInit batches expire when outside retention
dn := newDynamicNeeds(200, 500)
seq := newBatchSequencer(1, 250, 50, dn.get)
seq.seq[0] = batch{begin: 150, end: 200, state: batchInit}
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchEndSequence, got[0].state)
}
func TestStateExpiration_SequencedBatchNotCheckedBySequence(t *testing.T) {
// batchSequenced batches are not returned by sequence() (already dispatched)
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(2, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchSequenced}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
// Move retention past second batch
dn.setBlockBegin(250)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
// Init batch should expire, sequenced batch not returned
for _, b := range got {
require.NotEqual(t, batchSequenced, b.state)
}
}
func TestStateExpiration_SyncBlobsBatchNotCheckedBySequence(t *testing.T) {
// batchSyncBlobs not returned by sequence
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(1, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchSyncBlobs}
_, err := seq.sequence()
require.ErrorIs(t, err, errMaxBatches) // No batch to return
}
func TestStateExpiration_SyncColumnsBatchNotCheckedBySequence(t *testing.T) {
// batchSyncColumns not returned by sequence
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(1, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchSyncColumns}
_, err := seq.sequence()
require.ErrorIs(t, err, errMaxBatches)
}
func TestStateExpiration_ImportableBatchNotCheckedBySequence(t *testing.T) {
// batchImportable not returned by sequence
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(1, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchImportable}
_, err := seq.sequence()
require.ErrorIs(t, err, errMaxBatches)
}
func TestStateExpiration_RetryableBatchExpires(t *testing.T) {
// batchErrRetryable batches can expire
dn := newDynamicNeeds(200, 500)
seq := newBatchSequencer(1, 250, 50, dn.get)
seq.seq[0] = batch{begin: 150, end: 200, state: batchErrRetryable}
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
require.Equal(t, batchEndSequence, got[0].state)
}
// ============================================================================
// Category 5: Edge Cases and Boundaries
// ============================================================================
func TestEdgeCase_NeedsSpanShrinks(t *testing.T) {
// Unusual case: retention window becomes smaller
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 400, 50, dn.get)
seq.seq[0] = batch{begin: 350, end: 400, state: batchInit}
seq.seq[1] = batch{begin: 300, end: 350, state: batchInit}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
// Shrink window from both ends
dn.blockBegin = 300
dn.blockEnd = 400
seq.batcher.currentNeeds = dn.get
_, err := seq.sequence()
require.NoError(t, err)
// Third batch should have expired
require.Equal(t, batchEndSequence, seq.seq[2].state)
}
func TestEdgeCase_EmptySequenceAfterExpiration(t *testing.T) {
// All batches in non-schedulable states, none can be sequenced
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(2, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchImportable}
seq.seq[1] = batch{begin: 200, end: 250, state: batchImportable}
// No batchInit or batchErrRetryable to sequence
_, err := seq.sequence()
require.ErrorIs(t, err, errMaxBatches)
}
func TestEdgeCase_EndSequenceChainReaction(t *testing.T) {
// When batches expire, subsequent calls should handle them correctly
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
// Expire all
dn.setBlockBegin(300)
seq.batcher.currentNeeds = dn.get
got1, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got1))
require.Equal(t, batchEndSequence, got1[0].state)
// Calling sequence again should still return batchEndSequence
got2, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got2))
require.Equal(t, batchEndSequence, got2[0].state)
}
func TestEdgeCase_MixedExpirationAndCompletion(t *testing.T) {
// Some batches complete while others expire simultaneously
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(4, 400, 50, dn.get)
seq.seq[0] = batch{begin: 350, end: 400, state: batchImportComplete}
seq.seq[1] = batch{begin: 300, end: 350, state: batchImportable}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[3] = batch{begin: 200, end: 250, state: batchInit}
// Expire last two batches
dn.setBlockBegin(300)
seq.batcher.currentNeeds = dn.get
// Update with completed batch to trigger processing
completed := batch{begin: 350, end: 400, state: batchImportComplete, seq: 1}
seq.update(completed)
// Verify expired batches are marked
expiredCount := 0
for _, b := range seq.seq {
if b.state == batchEndSequence {
expiredCount++
}
}
require.Equal(t, true, expiredCount >= 2, "expected at least 2 expired batches")
}
func TestEdgeCase_BatchExpiresAtSlotZero(t *testing.T) {
// Edge case with very low slot numbers
dn := newDynamicNeeds(50, 200)
seq := newBatchSequencer(2, 100, 50, dn.get)
seq.seq[0] = batch{begin: 50, end: 100, state: batchInit}
seq.seq[1] = batch{begin: 0, end: 50, state: batchInit}
// Move past first batch
dn.setBlockBegin(100)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
// Both batches should have expired
for _, b := range got {
require.Equal(t, batchEndSequence, b.state)
}
}
// ============================================================================
// Category 6: Integration with numTodo/remaining
// ============================================================================
func TestNumTodo_AfterExpiration(t *testing.T) {
// numTodo should correctly reflect expired batches
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchSequenced}
seq.seq[1] = batch{begin: 200, end: 250, state: batchSequenced}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
todoBefore := seq.numTodo()
// Expire last batch
dn.setBlockBegin(200)
seq.batcher.currentNeeds = dn.get
// Force expiration via sequence
_, err := seq.sequence()
require.NoError(t, err)
todoAfter := seq.numTodo()
// Todo count should have decreased
require.Equal(t, true, todoAfter < todoBefore, "expected todo count to decrease after expiration")
}
func TestRemaining_AfterNeedsChange(t *testing.T) {
// batcher.remaining() should use updated needs
dn := newDynamicNeeds(100, 500)
b := batcher{currentNeeds: dn.get, size: 50}
remainingBefore := b.remaining(300)
// Move retention window
dn.setBlockBegin(250)
b.currentNeeds = dn.get
remainingAfter := b.remaining(300)
// Remaining should have decreased
require.Equal(t, true, remainingAfter < remainingBefore, "expected remaining to decrease after needs change")
}
func TestCountWithState_AfterExpiration(t *testing.T) {
// State counts should be accurate after expiration
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
require.Equal(t, 3, seq.countWithState(batchInit))
require.Equal(t, 0, seq.countWithState(batchEndSequence))
// Expire all batches
dn.setBlockBegin(300)
seq.batcher.currentNeeds = dn.get
_, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 0, seq.countWithState(batchInit))
require.Equal(t, 3, seq.countWithState(batchEndSequence))
}
// ============================================================================
// Category 7: Fork Transition Scenarios (Blob/Column specific)
// ============================================================================
func TestForkTransition_BlobNeedsChange(t *testing.T) {
// Test when blob retention is different from block retention
dn := newDynamicNeeds(100, 500)
// Set blob begin to be further ahead
dn.blobBegin = 200
seq := newBatchSequencer(3, 300, 50, dn.get)
seq.seq[0] = batch{begin: 250, end: 300, state: batchInit}
seq.seq[1] = batch{begin: 200, end: 250, state: batchInit}
seq.seq[2] = batch{begin: 150, end: 200, state: batchInit}
// Sequence should work based on block needs
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 3, len(got))
}
func TestForkTransition_ColumnNeedsChange(t *testing.T) {
// Test when column retention is different from block retention
dn := newDynamicNeeds(100, 500)
// Set column begin to be further ahead
dn.colBegin = 300
seq := newBatchSequencer(3, 400, 50, dn.get)
seq.seq[0] = batch{begin: 350, end: 400, state: batchInit}
seq.seq[1] = batch{begin: 300, end: 350, state: batchInit}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
// Batch expiration is based on block needs, not column needs
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 3, len(got))
}
func TestForkTransition_BlockNeedsVsBlobNeeds(t *testing.T) {
// Blocks still needed but blobs have shorter retention
dn := newDynamicNeeds(100, 500)
dn.blobBegin = 300 // Blobs only needed from slot 300
dn.blobEnd = 500
seq := newBatchSequencer(3, 400, 50, dn.get)
seq.seq[0] = batch{begin: 350, end: 400, state: batchInit}
seq.seq[1] = batch{begin: 300, end: 350, state: batchInit}
seq.seq[2] = batch{begin: 250, end: 300, state: batchInit}
// All batches should be returned (block expiration, not blob)
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 3, len(got))
// Now change block needs to match blob needs
dn.blockBegin = 300
seq.batcher.currentNeeds = dn.get
// Re-sequence - last batch should expire
seq.seq[0].state = batchInit
seq.seq[1].state = batchInit
seq.seq[2].state = batchInit
got2, err := seq.sequence()
require.NoError(t, err)
// Should have 2 non-expired batches
nonExpired := 0
for _, b := range got2 {
if b.state == batchSequenced {
nonExpired++
}
}
require.Equal(t, 2, nonExpired)
}
func TestForkTransition_AllResourceTypesAdvance(t *testing.T) {
// Block, blob, and column spans all advance together
dn := newDynamicNeeds(100, 500)
seq := newBatchSequencer(4, 400, 50, dn.get)
// Batches: [350,400), [300,350), [250,300), [200,250)
for i := range 4 {
seq.seq[i] = batch{
begin: primitives.Slot(400 - (i+1)*50),
end: primitives.Slot(400 - i*50),
state: batchInit,
}
}
// Advance all needs together by 200 slots
// blockBegin moves from 100 to 300
dn.advance(200)
seq.batcher.currentNeeds = dn.get
got, err := seq.sequence()
require.NoError(t, err)
// Count non-expired
nonExpired := 0
for _, b := range got {
if b.state == batchSequenced {
nonExpired++
}
}
// With begin=300, batches [200,250) and [250,300) should have expired
// Batches [350,400) and [300,350) remain valid
require.Equal(t, 2, nonExpired)
}


@@ -4,6 +4,7 @@ import (
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
@@ -17,7 +18,7 @@ func TestBatcherBefore(t *testing.T) {
}{
{
name: "size 10",
b: batcher{min: 0, size: 10},
b: batcher{currentNeeds: mockCurrentNeedsFunc(0, 100), size: 10},
upTo: []primitives.Slot{33, 30, 10, 6},
expect: []batch{
{begin: 23, end: 33, state: batchInit},
@@ -28,7 +29,7 @@ func TestBatcherBefore(t *testing.T) {
},
{
name: "size 4",
b: batcher{min: 0, size: 4},
b: batcher{currentNeeds: mockCurrentNeedsFunc(0, 100), size: 4},
upTo: []primitives.Slot{33, 6, 4},
expect: []batch{
{begin: 29, end: 33, state: batchInit},
@@ -38,7 +39,7 @@ func TestBatcherBefore(t *testing.T) {
},
{
name: "trigger end",
b: batcher{min: 20, size: 10},
b: batcher{currentNeeds: mockCurrentNeedsFunc(20, 100), size: 10},
upTo: []primitives.Slot{33, 30, 25, 21, 20, 19},
expect: []batch{
{begin: 23, end: 33, state: batchInit},
@@ -71,7 +72,7 @@ func TestBatchSingleItem(t *testing.T) {
min = 0
max = 11235
size = 64
seq := newBatchSequencer(seqLen, min, max, size)
seq := newBatchSequencer(seqLen, max, size, mockCurrentNeedsFunc(min, max+1))
got, err := seq.sequence()
require.NoError(t, err)
require.Equal(t, 1, len(got))
@@ -99,7 +100,7 @@ func TestBatchSequencer(t *testing.T) {
min = 0
max = 11235
size = 64
seq := newBatchSequencer(seqLen, min, max, size)
seq := newBatchSequencer(seqLen, max, size, mockCurrentNeedsFunc(min, max+1))
expected := []batch{
{begin: 11171, end: 11235},
{begin: 11107, end: 11171},
@@ -212,7 +213,10 @@ func TestBatchSequencer(t *testing.T) {
// set the min for the batcher close to the lowest slot. This will force the next batch to be partial and the batch
// after that to be the final batch.
newMin := seq.seq[len(seq.seq)-1].begin - 30
seq.batcher.min = newMin
seq.currentNeeds = func() das.CurrentNeeds {
return das.CurrentNeeds{Block: das.NeedSpan{Begin: newMin, End: seq.batcher.max}}
}
seq.batcher.currentNeeds = seq.currentNeeds
first = seq.seq[0]
first.state = batchImportComplete
// update() with a complete state will cause the sequence to be extended with an additional batch
@@ -235,3 +239,863 @@ func TestBatchSequencer(t *testing.T) {
//require.ErrorIs(t, err, errEndSequence)
require.Equal(t, batchEndSequence, end.state)
}
// initializeBatchWithSlots sets the begin and end slots for each batch in the slice,
// in descending slot order (slot positions decrease as the index increases).
func initializeBatchWithSlots(batches []batch, min primitives.Slot, size primitives.Slot) {
for i := range batches {
// Batches are ordered descending by slot: batch[0] covers the highest slots
// and batch[len(batches)-1] covers the lowest.
end := min + primitives.Slot((len(batches)-i)*int(size))
begin := end - size
batches[i].begin = begin
batches[i].end = end
}
}
// TestSequence tests the sequence() method with various batch states
func TestSequence(t *testing.T) {
testCases := []struct {
name string
seqLen int
min primitives.Slot
max primitives.Slot
size primitives.Slot
initialStates []batchState
expectedCount int
expectedErr error
stateTransform func([]batch) // optional: transform states before test
}{
{
name: "EmptySequence",
seqLen: 0,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{},
expectedCount: 0,
expectedErr: errMaxBatches,
},
{
name: "SingleBatchInit",
seqLen: 1,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{batchInit},
expectedCount: 1,
},
{
name: "SingleBatchErrRetryable",
seqLen: 1,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{batchErrRetryable},
expectedCount: 1,
},
{
name: "MultipleBatchesInit",
seqLen: 3,
min: 100,
max: 1000,
size: 200,
initialStates: []batchState{batchInit, batchInit, batchInit},
expectedCount: 3,
},
{
name: "MixedStates_InitAndSequenced",
seqLen: 2,
min: 100,
max: 1000,
size: 100,
initialStates: []batchState{batchInit, batchSequenced},
expectedCount: 1,
},
{
name: "MixedStates_SequencedFirst",
seqLen: 2,
min: 100,
max: 1000,
size: 100,
initialStates: []batchState{batchSequenced, batchInit},
expectedCount: 1,
},
{
name: "AllBatchesSequenced",
seqLen: 3,
min: 100,
max: 1000,
size: 200,
initialStates: []batchState{batchSequenced, batchSequenced, batchSequenced},
expectedCount: 0,
expectedErr: errMaxBatches,
},
{
name: "EndSequenceOnly",
seqLen: 1,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{batchEndSequence},
expectedCount: 1,
},
{
name: "EndSequenceWithOthers",
seqLen: 2,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{batchInit, batchEndSequence},
expectedCount: 1,
},
{
name: "ImportableNotSequenced",
seqLen: 1,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{batchImportable},
expectedCount: 0,
expectedErr: errMaxBatches,
},
{
name: "ImportCompleteNotSequenced",
seqLen: 1,
min: 100,
max: 1000,
size: 64,
initialStates: []batchState{batchImportComplete},
expectedCount: 0,
expectedErr: errMaxBatches,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
seq := newBatchSequencer(tc.seqLen, tc.max, tc.size, mockCurrentNeedsFunc(tc.min, tc.max+1))
// Initialize batches with valid slot ranges
initializeBatchWithSlots(seq.seq, tc.min, tc.size)
// Set initial states
for i, state := range tc.initialStates {
seq.seq[i].state = state
}
// Apply any transformations
if tc.stateTransform != nil {
tc.stateTransform(seq.seq)
}
got, err := seq.sequence()
if tc.expectedErr != nil {
require.ErrorIs(t, err, tc.expectedErr)
} else {
require.NoError(t, err)
}
require.Equal(t, tc.expectedCount, len(got))
// Verify returned batches are in batchSequenced state
for _, b := range got {
if b.state != batchEndSequence {
require.Equal(t, batchSequenced, b.state)
}
}
})
}
}
// TestUpdate tests the update() method which: (1) updates batch state, (2) removes batchImportComplete batches,
// (3) shifts remaining batches down, and (4) adds new batches to fill vacated positions.
// NOTE: The sequence length can change! Completed batches are removed and new ones are added.
func TestUpdate(t *testing.T) {
testCases := []struct {
name string
seqLen int
batches []batchState
updateIdx int
newState batchState
expectedLen int // expected length after update
expected []batchState // expected states of first N batches after update
}{
{
name: "SingleBatchUpdate",
seqLen: 1,
batches: []batchState{batchInit},
updateIdx: 0,
newState: batchImportable,
expectedLen: 1,
expected: []batchState{batchImportable},
},
{
name: "RemoveFirstCompleted_ShiftOthers",
seqLen: 3,
batches: []batchState{batchImportComplete, batchInit, batchInit},
updateIdx: 0,
newState: batchImportComplete,
expectedLen: 3, // 1 removed + 2 new added
expected: []batchState{batchInit, batchInit}, // shifted down
},
{
name: "RemoveMultipleCompleted",
seqLen: 3,
batches: []batchState{batchImportComplete, batchImportComplete, batchInit},
updateIdx: 0,
newState: batchImportComplete,
expectedLen: 3, // 2 removed + 2 new added
expected: []batchState{batchInit}, // only 1 non-complete batch
},
{
name: "RemoveMiddleCompleted_AlsoShifts",
seqLen: 3,
batches: []batchState{batchInit, batchImportComplete, batchInit},
updateIdx: 1,
newState: batchImportComplete,
expectedLen: 3, // 1 removed + 1 new added
expected: []batchState{batchInit, batchInit}, // middle complete removed, last shifted to middle
},
{
name: "SingleBatchComplete_Replaced",
seqLen: 1,
batches: []batchState{batchInit},
updateIdx: 0,
newState: batchImportComplete,
expectedLen: 1, // special case: replaced with new batch
expected: []batchState{batchInit}, // new batch from beforeBatch
},
{
name: "UpdateNonMatchingBatch",
seqLen: 2,
batches: []batchState{batchInit, batchInit},
updateIdx: 0,
newState: batchImportable,
expectedLen: 2,
expected: []batchState{batchImportable, batchInit},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
seq := newBatchSequencer(tc.seqLen, 1000, 64, mockCurrentNeedsFunc(0, 1000+1))
// Initialize batches with proper slot ranges
for i := range seq.seq {
seq.seq[i] = batch{
begin: primitives.Slot(1000 - (i+1)*64),
end: primitives.Slot(1000 - i*64),
state: tc.batches[i],
}
}
// Create batch to update (must match begin/end to be replaced)
updateBatch := seq.seq[tc.updateIdx]
updateBatch.state = tc.newState
seq.update(updateBatch)
// Verify expected length
if len(seq.seq) != tc.expectedLen {
t.Fatalf("expected length %d, got %d", tc.expectedLen, len(seq.seq))
}
// Verify expected states of first N batches
for i, expectedState := range tc.expected {
if i >= len(seq.seq) {
t.Fatalf("expected state at index %d but seq only has %d batches", i, len(seq.seq))
}
if seq.seq[i].state != expectedState {
t.Fatalf("batch[%d]: expected state %s, got %s", i, expectedState.String(), seq.seq[i].state.String())
}
}
// Verify slot contiguity for non-newly-generated batches
// (newly generated batches from beforeBatch() may not be contiguous with shifted batches)
// For this test, we just verify they're in valid slot ranges
for i := 0; i < len(seq.seq); i++ {
if seq.seq[i].begin >= seq.seq[i].end {
t.Fatalf("invalid batch[%d]: begin=%d should be < end=%d", i, seq.seq[i].begin, seq.seq[i].end)
}
}
})
}
}
// TestImportable tests the importable() method for contiguity checking
func TestImportable(t *testing.T) {
testCases := []struct {
name string
seqLen int
states []batchState
expectedCount int
expectedBreak int // index where importable chain breaks (-1 if none)
}{
{
name: "EmptySequence",
seqLen: 0,
states: []batchState{},
expectedCount: 0,
expectedBreak: -1,
},
{
name: "FirstBatchNotImportable",
seqLen: 2,
states: []batchState{batchInit, batchImportable},
expectedCount: 0,
expectedBreak: 0,
},
{
name: "FirstBatchImportable",
seqLen: 1,
states: []batchState{batchImportable},
expectedCount: 1,
expectedBreak: -1,
},
{
name: "TwoImportableConsecutive",
seqLen: 2,
states: []batchState{batchImportable, batchImportable},
expectedCount: 2,
expectedBreak: -1,
},
{
name: "ThreeImportableConsecutive",
seqLen: 3,
states: []batchState{batchImportable, batchImportable, batchImportable},
expectedCount: 3,
expectedBreak: -1,
},
{
name: "ImportsBreak_SecondNotImportable",
seqLen: 2,
states: []batchState{batchImportable, batchInit},
expectedCount: 1,
expectedBreak: 1,
},
{
name: "ImportsBreak_MiddleNotImportable",
seqLen: 4,
states: []batchState{batchImportable, batchImportable, batchInit, batchImportable},
expectedCount: 2,
expectedBreak: 2,
},
{
name: "EndSequenceAfterImportable",
seqLen: 3,
states: []batchState{batchImportable, batchImportable, batchEndSequence},
expectedCount: 2,
expectedBreak: 2,
},
{
name: "AllStatesNotImportable",
seqLen: 3,
states: []batchState{batchInit, batchSequenced, batchErrRetryable},
expectedCount: 0,
expectedBreak: 0,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
seq := newBatchSequencer(tc.seqLen, 1000, 64, mockCurrentNeedsFunc(0, 1000+1))
for i, state := range tc.states {
seq.seq[i] = batch{
begin: primitives.Slot(1000 - (i+1)*64),
end: primitives.Slot(1000 - i*64),
state: state,
}
}
imp := seq.importable()
require.Equal(t, tc.expectedCount, len(imp))
})
}
}
// TestMoveMinimumWithNonImportableUpdate tests integration of moveMinimum with update()
func TestMoveMinimumWithNonImportableUpdate(t *testing.T) {
t.Run("UpdateBatchAfterMinimumChange", func(t *testing.T) {
seq := newBatchSequencer(3, 300, 50, mockCurrentNeedsFunc(100, 300+1))
// Initialize with batches
seq.seq[0] = batch{begin: 200, end: 250, state: batchInit}
seq.seq[1] = batch{begin: 150, end: 200, state: batchInit}
seq.seq[2] = batch{begin: 100, end: 150, state: batchInit}
seq.currentNeeds = mockCurrentNeedsFunc(150, 300+1)
seq.batcher.currentNeeds = seq.currentNeeds
// Update non-importable batch above new minimum
batchToUpdate := batch{begin: 200, end: 250, state: batchSequenced}
seq.update(batchToUpdate)
// Verify batch was updated
require.Equal(t, batchSequenced, seq.seq[0].state)
// Verify numTodo reflects updated minimum
todo := seq.numTodo()
require.NotEqual(t, 0, todo, "numTodo should be greater than 0 after moveMinimum and update")
})
}
// TestCountWithState tests state counting
func TestCountWithState(t *testing.T) {
testCases := []struct {
name string
seqLen int
states []batchState
queryState batchState
expectedCount int
}{
{
name: "CountInit_NoBatches",
seqLen: 0,
states: []batchState{},
queryState: batchInit,
expectedCount: 0,
},
{
name: "CountInit_OneBatch",
seqLen: 1,
states: []batchState{batchInit},
queryState: batchInit,
expectedCount: 1,
},
{
name: "CountInit_MultipleBatches",
seqLen: 3,
states: []batchState{batchInit, batchInit, batchInit},
queryState: batchInit,
expectedCount: 3,
},
{
name: "CountInit_MixedStates",
seqLen: 3,
states: []batchState{batchInit, batchSequenced, batchInit},
queryState: batchInit,
expectedCount: 2,
},
{
name: "CountSequenced",
seqLen: 3,
states: []batchState{batchInit, batchSequenced, batchImportable},
queryState: batchSequenced,
expectedCount: 1,
},
{
name: "CountImportable",
seqLen: 3,
states: []batchState{batchImportable, batchImportable, batchInit},
queryState: batchImportable,
expectedCount: 2,
},
{
name: "CountComplete",
seqLen: 3,
states: []batchState{batchImportComplete, batchImportComplete, batchInit},
queryState: batchImportComplete,
expectedCount: 2,
},
{
name: "CountEndSequence",
seqLen: 3,
states: []batchState{batchInit, batchEndSequence, batchInit},
queryState: batchEndSequence,
expectedCount: 1,
},
{
name: "CountZero_NonexistentState",
seqLen: 2,
states: []batchState{batchInit, batchInit},
queryState: batchImportable,
expectedCount: 0,
},
{
name: "CountNil",
seqLen: 3,
states: []batchState{batchNil, batchNil, batchInit},
queryState: batchNil,
expectedCount: 2,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
seq := newBatchSequencer(tc.seqLen, 1000, 64, mockCurrentNeedsFunc(0, 1000+1))
for i, state := range tc.states {
seq.seq[i].state = state
}
count := seq.countWithState(tc.queryState)
require.Equal(t, tc.expectedCount, count)
})
}
}
// TestNumTodo tests remaining batch count calculation
func TestNumTodo(t *testing.T) {
testCases := []struct {
name string
seqLen int
min primitives.Slot
max primitives.Slot
size primitives.Slot
states []batchState
expectedTodo int
}{
{
name: "EmptySequence",
seqLen: 0,
min: 0,
max: 1000,
size: 64,
states: []batchState{},
expectedTodo: 0,
},
{
name: "SingleBatchComplete",
seqLen: 1,
min: 0,
max: 1000,
size: 64,
states: []batchState{batchImportComplete},
expectedTodo: 0,
},
{
name: "SingleBatchInit",
seqLen: 1,
min: 0,
max: 100,
size: 10,
states: []batchState{batchInit},
expectedTodo: 1,
},
{
name: "AllBatchesIgnored",
seqLen: 3,
min: 0,
max: 1000,
size: 64,
states: []batchState{batchImportComplete, batchImportComplete, batchNil},
expectedTodo: 0,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
seq := newBatchSequencer(tc.seqLen, tc.max, tc.size, mockCurrentNeedsFunc(tc.min, tc.max+1))
for i, state := range tc.states {
seq.seq[i] = batch{
begin: primitives.Slot(tc.max - primitives.Slot((i+1)*10)),
end: primitives.Slot(tc.max - primitives.Slot(i*10)),
state: state,
}
}
// Just verify numTodo doesn't panic
_ = seq.numTodo()
})
}
}
// TestBatcherRemaining tests the remaining() calculation logic
func TestBatcherRemaining(t *testing.T) {
testCases := []struct {
name string
min primitives.Slot
upTo primitives.Slot
size primitives.Slot
expected int
}{
{
name: "UpToLessThanMin",
min: 100,
upTo: 50,
size: 10,
expected: 0,
},
{
name: "UpToEqualsMin",
min: 100,
upTo: 100,
size: 10,
expected: 0,
},
{
name: "ExactBoundary",
min: 100,
upTo: 110,
size: 10,
expected: 1,
},
{
name: "ExactBoundary_Multiple",
min: 100,
upTo: 150,
size: 10,
expected: 5,
},
{
name: "PartialBatch",
min: 100,
upTo: 115,
size: 10,
expected: 2,
},
{
name: "PartialBatch_Small",
min: 100,
upTo: 105,
size: 10,
expected: 1,
},
{
name: "LargeRange",
min: 100,
upTo: 500,
size: 10,
expected: 40,
},
{
name: "LargeRange_Partial",
min: 100,
upTo: 505,
size: 10,
expected: 41,
},
{
name: "PartialBatch_Size1",
min: 100,
upTo: 101,
size: 1,
expected: 1,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
needs := func() das.CurrentNeeds {
return das.CurrentNeeds{Block: das.NeedSpan{Begin: tc.min, End: tc.upTo + 1}}
}
b := batcher{size: tc.size, currentNeeds: needs}
result := b.remaining(tc.upTo)
require.Equal(t, tc.expected, result)
})
}
}
// assertAllBatchesAboveMinimum verifies all returned batches have end > minimum
func assertAllBatchesAboveMinimum(t *testing.T, batches []batch, min primitives.Slot) {
for _, b := range batches {
if b.state != batchEndSequence {
if b.end <= min {
t.Fatalf("batch begin=%d end=%d has end <= minimum %d", b.begin, b.end, min)
}
}
}
}
// assertBatchesContiguous verifies contiguity of returned batches
func assertBatchesContiguous(t *testing.T, batches []batch) {
for i := 0; i < len(batches)-1; i++ {
require.Equal(t, batches[i].begin, batches[i+1].end,
"batch[%d] begin=%d not contiguous with batch[%d] end=%d", i, batches[i].begin, i+1, batches[i+1].end)
}
}
// assertBatchNotReturned verifies a specific batch is not in the returned list
func assertBatchNotReturned(t *testing.T, batches []batch, shouldNotBe batch) {
for _, b := range batches {
if b.begin == shouldNotBe.begin && b.end == shouldNotBe.end {
t.Fatalf("batch begin=%d end=%d should not be returned", shouldNotBe.begin, shouldNotBe.end)
}
}
}
// TestMoveMinimumFiltersOutOfRangeBatches tests that batches below new minimum are not returned by sequence()
// after moveMinimum is called. The sequence() method marks expired batches (end <= min) as batchEndSequence
// but does not return them (unless they're the only batches left).
func TestMoveMinimumFiltersOutOfRangeBatches(t *testing.T) {
testCases := []struct {
name string
seqLen int
min primitives.Slot
max primitives.Slot
size primitives.Slot
initialStates []batchState
newMinimum primitives.Slot
expectedReturned int
expectedAllAbove primitives.Slot // all returned batches should have end > this value (except batchEndSequence)
}{
// Category 1: Single Batch Below New Minimum
{
name: "BatchBelowMinimum_Init",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit, batchInit},
newMinimum: 175,
expectedReturned: 3, // [250-300], [200-250], [150-200] are returned
expectedAllAbove: 175,
},
{
name: "BatchBelowMinimum_ErrRetryable",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchSequenced, batchSequenced, batchErrRetryable, batchErrRetryable},
newMinimum: 175,
expectedReturned: 1, // only [150-200] (ErrRetryable) is returned; [100-150] is expired and not returned
expectedAllAbove: 175,
},
// Category 2: Multiple Batches Below New Minimum
{
name: "MultipleBatchesBelowMinimum",
seqLen: 8,
min: 100,
max: 500,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit, batchInit, batchInit, batchInit, batchInit, batchInit},
newMinimum: 320,
expectedReturned: 4, // [450-500], [400-450], [350-400], [300-350] returned; rest expired/not returned
expectedAllAbove: 320,
},
// Category 3: Batches at Boundary - batch.end == minimum is expired
{
name: "BatchExactlyAtMinimum",
seqLen: 3,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit},
newMinimum: 200,
expectedReturned: 1, // [250-300] returned; [200-250] (end==200) and [100-150] are expired
expectedAllAbove: 200,
},
{
name: "BatchJustAboveMinimum",
seqLen: 3,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit},
newMinimum: 199,
expectedReturned: 2, // [250-300], [200-250] returned; [100-150] (end<=199) is expired
expectedAllAbove: 199,
},
// Category 4: No Batches Affected
{
name: "MoveMinimumNoAffect",
seqLen: 3,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit},
newMinimum: 120,
expectedReturned: 3, // all batches returned, none below minimum
expectedAllAbove: 120,
},
// Category 5: Mixed States Below Minimum
{
name: "MixedStatesBelowMinimum",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchSequenced, batchInit, batchErrRetryable, batchInit},
newMinimum: 175,
expectedReturned: 2, // [200-250] (Init) and [150-200] (ErrRetryable) returned; others not in Init/ErrRetryable or expired
expectedAllAbove: 175,
},
// Category 6: Large moveMinimum
{
name: "LargeMoveMinimumSkipsMost",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit, batchInit},
newMinimum: 290,
expectedReturned: 1, // only [250-300] (end=300 > 290) returned
expectedAllAbove: 290,
},
// Category 7: All Batches Expired
{
name: "AllBatchesExpired",
seqLen: 3,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit},
newMinimum: 300,
expectedReturned: 1, // when all expire, one batchEndSequence is returned
expectedAllAbove: 0, // batchEndSequence may have any slot value, don't check
},
// Category 8: Contiguity after filtering
{
name: "ContiguityMaintained",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
initialStates: []batchState{batchInit, batchInit, batchInit, batchInit},
newMinimum: 150,
expectedReturned: 3, // [250-300], [200-250], [150-200] returned
expectedAllAbove: 150,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
seq := newBatchSequencer(tc.seqLen, tc.max, tc.size, mockCurrentNeedsFunc(tc.min, tc.max+1))
// Initialize batches with valid slot ranges
initializeBatchWithSlots(seq.seq, tc.min, tc.size)
// Set initial states
for i, state := range tc.initialStates {
seq.seq[i].state = state
}
// move minimum and call sequence to update set of batches
seq.currentNeeds = mockCurrentNeedsFunc(tc.newMinimum, tc.max+1)
seq.batcher.currentNeeds = seq.currentNeeds
got, err := seq.sequence()
require.NoError(t, err)
// Verify count
if len(got) != tc.expectedReturned {
t.Fatalf("expected %d batches returned, got %d", tc.expectedReturned, len(got))
}
// Verify all returned non-endSequence batches have end > newMinimum
// (batchEndSequence may be returned when all batches are expired, so exclude from check)
if tc.expectedAllAbove > 0 {
for _, b := range got {
if b.state != batchEndSequence && b.end <= tc.expectedAllAbove {
t.Fatalf("batch begin=%d end=%d has end <= %d (should be filtered)",
b.begin, b.end, tc.expectedAllAbove)
}
}
}
// Verify contiguity is maintained for returned batches
if len(got) > 1 {
assertBatchesContiguous(t, got)
}
})
}
}


@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
@@ -30,35 +31,47 @@ type blobSummary struct {
}
type blobSyncConfig struct {
retentionStart primitives.Slot
nbv verification.NewBlobVerifier
store *filesystem.BlobStorage
nbv verification.NewBlobVerifier
store *filesystem.BlobStorage
currentNeeds func() das.CurrentNeeds
}
func newBlobSync(current primitives.Slot, vbs verifiedROBlocks, cfg *blobSyncConfig) (*blobSync, error) {
expected, err := vbs.blobIdents(cfg.retentionStart)
expected, err := vbs.blobIdents(cfg.currentNeeds)
if err != nil {
return nil, err
}
bbv := newBlobBatchVerifier(cfg.nbv)
as := das.NewLazilyPersistentStore(cfg.store, bbv)
shouldRetain := func(slot primitives.Slot) bool {
needs := cfg.currentNeeds()
return needs.Blob.At(slot)
}
as := das.NewLazilyPersistentStore(cfg.store, bbv, shouldRetain)
return &blobSync{current: current, expected: expected, bbv: bbv, store: as}, nil
}
type blobVerifierMap map[[32]byte][]verification.BlobVerifier
type blobSync struct {
store das.AvailabilityStore
store *das.LazilyPersistentStoreBlob
expected []blobSummary
next int
bbv *blobBatchVerifier
current primitives.Slot
peer peer.ID
}
func (bs *blobSync) blobsNeeded() int {
func (bs *blobSync) needed() int {
return len(bs.expected) - bs.next
}
// validateNext is given to the RPC request code as one of the validation callbacks.
// It orchestrates setting up the batch verifier (blobBatchVerifier) and calls Persist on the
// AvailabilityStore. This enables the rest of the code in between RPC and the AvailabilityStore
// to stay decoupled from each other. The AvailabilityStore holds the blobs in memory between the
// call to Persist, and the call to IsDataAvailable (where the blobs are actually written to disk
// if successfully verified).
func (bs *blobSync) validateNext(rb blocks.ROBlob) error {
if bs.next >= len(bs.expected) {
return errUnexpectedResponseSize
@@ -102,6 +115,7 @@ func newBlobBatchVerifier(nbv verification.NewBlobVerifier) *blobBatchVerifier {
return &blobBatchVerifier{newBlobVerifier: nbv, verifiers: make(blobVerifierMap)}
}
// blobBatchVerifier implements the BlobBatchVerifier interface required by the das store.
type blobBatchVerifier struct {
newBlobVerifier verification.NewBlobVerifier
verifiers blobVerifierMap
@@ -117,6 +131,7 @@ func (bbv *blobBatchVerifier) newVerifier(rb blocks.ROBlob) verification.BlobVer
return m[rb.Index]
}
// VerifiedROBlobs satisfies the BlobBatchVerifier interface expected by the AvailabilityChecker
func (bbv *blobBatchVerifier) VerifiedROBlobs(_ context.Context, blk blocks.ROBlock, _ []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
m, ok := bbv.verifiers[blk.Root()]
if !ok {


@@ -3,6 +3,7 @@ package backfill
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -11,28 +12,42 @@ import (
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
const testBlobGenBlobCount = 3
func testBlobGen(t *testing.T, start primitives.Slot, n int) ([]blocks.ROBlock, [][]blocks.ROBlob) {
blks := make([]blocks.ROBlock, n)
blobs := make([][]blocks.ROBlob, n)
for i := range n {
bk, bl := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, start+primitives.Slot(i), 3)
bk, bl := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, start+primitives.Slot(i), testBlobGenBlobCount)
blks[i] = bk
blobs[i] = bl
}
return blks, blobs
}
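// setupCurrentNeeds builds a das.SyncNeeds pinned to a fixed current slot, for tests that need a currentNeeds callback.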
func setupCurrentNeeds(t *testing.T, current primitives.Slot) das.SyncNeeds {
cs := func() primitives.Slot { return current }
sn, err := das.NewSyncNeeds(cs, nil, 0)
require.NoError(t, err)
return sn
}
func TestValidateNext_happy(t *testing.T) {
startSlot := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
current := startSlot + 65
blks, blobs := testBlobGen(t, startSlot, 4)
cfg := &blobSyncConfig{
retentionStart: 0,
nbv: testNewBlobVerifier(),
store: filesystem.NewEphemeralBlobStorage(t),
nbv: testNewBlobVerifier(),
store: filesystem.NewEphemeralBlobStorage(t),
currentNeeds: mockCurrentNeedsFunc(0, current+1),
}
expected, err := verifiedROBlocks(blks).blobIdents(cfg.currentNeeds)
require.NoError(t, err)
require.Equal(t, len(blks)*testBlobGenBlobCount, len(expected))
bsync, err := newBlobSync(current, blks, cfg)
require.NoError(t, err)
nb := 0
@@ -49,26 +64,32 @@ func TestValidateNext_happy(t *testing.T) {
}
func TestValidateNext_cheapErrors(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
current := primitives.Slot(128)
blks, blobs := testBlobGen(t, 63, 2)
syncNeeds := setupCurrentNeeds(t, current)
cfg := &blobSyncConfig{
retentionStart: 0,
nbv: testNewBlobVerifier(),
store: filesystem.NewEphemeralBlobStorage(t),
nbv: testNewBlobVerifier(),
store: filesystem.NewEphemeralBlobStorage(t),
currentNeeds: syncNeeds.Currently,
}
blks, blobs := testBlobGen(t, denebSlot, 2)
bsync, err := newBlobSync(current, blks, cfg)
require.NoError(t, err)
require.ErrorIs(t, bsync.validateNext(blobs[len(blobs)-1][0]), errUnexpectedResponseContent)
}
func TestValidateNext_sigMatch(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
current := primitives.Slot(128)
blks, blobs := testBlobGen(t, 63, 1)
syncNeeds := setupCurrentNeeds(t, current)
cfg := &blobSyncConfig{
retentionStart: 0,
nbv: testNewBlobVerifier(),
store: filesystem.NewEphemeralBlobStorage(t),
nbv: testNewBlobVerifier(),
store: filesystem.NewEphemeralBlobStorage(t),
currentNeeds: syncNeeds.Currently,
}
blks, blobs := testBlobGen(t, denebSlot, 1)
bsync, err := newBlobSync(current, blks, cfg)
require.NoError(t, err)
blobs[0][0].SignedBlockHeader.Signature = bytesutil.PadTo([]byte("derp"), 48)
@@ -79,6 +100,8 @@ func TestValidateNext_errorsFromVerifier(t *testing.T) {
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
current := primitives.Slot(ds + 96)
blks, blobs := testBlobGen(t, ds+31, 1)
cn := mockCurrentNeedsFunc(0, current+1)
cases := []struct {
name string
err error
@@ -109,9 +132,9 @@ func TestValidateNext_errorsFromVerifier(t *testing.T) {
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cfg := &blobSyncConfig{
retentionStart: 0,
nbv: testNewBlobVerifier(c.cb),
store: filesystem.NewEphemeralBlobStorage(t),
nbv: testNewBlobVerifier(c.cb),
store: filesystem.NewEphemeralBlobStorage(t),
currentNeeds: cn,
}
bsync, err := newBlobSync(current, blks, cfg)
require.NoError(t, err)


@@ -0,0 +1,282 @@
package backfill
import (
"bytes"
"context"
"fmt"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
var (
errInvalidDataColumnResponse = errors.New("invalid DataColumnSidecar response")
errUnexpectedBlockRoot = errors.Wrap(errInvalidDataColumnResponse, "unexpected sidecar block root")
errCommitmentLengthMismatch = errors.Wrap(errInvalidDataColumnResponse, "sidecar has different commitment count than block")
errCommitmentValueMismatch = errors.Wrap(errInvalidDataColumnResponse, "sidecar commitments do not match block")
)
// columnRequestLimit tunes the number of column sidecars we try to download from a peer at once.
// The spec limit is 128 * 32, but connection errors are more likely when requesting that much at once.
const columnRequestLimit = 128 * 4
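// columnBatch tracks the DataColumnSidecar download work for a batch of blocks: the slot range that needs
// columns, the custody column indices this node must retain, and per-block download state keyed by block root.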
type columnBatch struct {
first primitives.Slot
last primitives.Slot
custodyGroups peerdas.ColumnIndices
toDownload map[[32]byte]*toDownload
}
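// toDownload records the column indices still missing for a single block, along with the block's
// commitments and slot used to validate responses.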
type toDownload struct {
remaining peerdas.ColumnIndices
commitments [][]byte
slot primitives.Slot
}
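// needed returns the set of custody column indices that are still missing for at least one block in the batch.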
func (cs *columnBatch) needed() peerdas.ColumnIndices {
// make a copy that we can modify to reduce search iterations.
search := cs.custodyGroups.ToMap()
ci := peerdas.ColumnIndices{}
for _, v := range cs.toDownload {
if len(search) == 0 {
return ci
}
for col := range search {
if v.remaining.Has(col) {
ci.Set(col)
// avoid iterating every single block+index by only searching for indices
// we haven't found yet.
delete(search, col)
}
}
}
return ci
}
// pruneExpired removes any columns from the batch that are no longer needed.
// If `pruned` is non-nil, it is populated with the roots that were removed.
func (cs *columnBatch) pruneExpired(needs das.CurrentNeeds, pruned map[[32]byte]struct{}) {
for root, td := range cs.toDownload {
if !needs.Col.At(td.slot) {
delete(cs.toDownload, root)
if pruned != nil {
pruned[root] = struct{}{}
}
}
}
}
// neededSidecarCount returns the total number of sidecars still needed to complete the batch.
func (cs *columnBatch) neededSidecarCount() int {
count := 0
for _, v := range cs.toDownload {
count += v.remaining.Count()
}
return count
}
// neededSidecarsByColumn counts how many sidecars are still needed for each column index the peer has.
func (cs *columnBatch) neededSidecarsByColumn(peerHas peerdas.ColumnIndices) map[uint64]int {
need := make(map[uint64]int, len(peerHas))
for _, v := range cs.toDownload {
for idx := range v.remaining {
if peerHas.Has(idx) {
need[idx]++
}
}
}
return need
}
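// columnSync couples a columnBatch with the availability store, current slot, and peer/bisector bookkeeping
// used to download, validate, and persist the batch's DataColumnSidecars.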
type columnSync struct {
*columnBatch
store *das.LazilyPersistentStoreColumn
current primitives.Slot
peer peer.ID
bisector *columnBisector
}
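// newColumnSync builds a columnSync for the given batch of verified blocks. It returns an empty columnSync
// when none of the blocks in the batch need data columns.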
func newColumnSync(ctx context.Context, b batch, blks verifiedROBlocks, current primitives.Slot, p p2p.P2P, cfg *workerCfg) (*columnSync, error) {
cgc, err := p.CustodyGroupCount(ctx)
if err != nil {
return nil, errors.Wrap(err, "custody group count")
}
cb, err := buildColumnBatch(ctx, b, blks, p, cfg.colStore, cfg.currentNeeds())
if err != nil {
return nil, err
}
if cb == nil {
return &columnSync{}, nil
}
shouldRetain := func(sl primitives.Slot) bool {
needs := cfg.currentNeeds()
return needs.Col.At(sl)
}
bisector := newColumnBisector(cfg.downscore)
return &columnSync{
columnBatch: cb,
current: current,
store: das.NewLazilyPersistentStoreColumn(cfg.colStore, cfg.newVC, p.NodeID(), cgc, bisector, shouldRetain),
bisector: bisector,
}, nil
}
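// blockColumns returns the download state for the given block root, or nil if the batch has no column work for it.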
func (cs *columnSync) blockColumns(root [32]byte) *toDownload {
if cs.columnBatch == nil {
return nil
}
return cs.columnBatch.toDownload[root]
}
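// columnsNeeded returns the column indices still needed across the batch, or an empty set when there is no column work.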
func (cs *columnSync) columnsNeeded() peerdas.ColumnIndices {
if cs.columnBatch == nil {
return peerdas.ColumnIndices{}
}
return cs.columnBatch.needed()
}
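// request builds a DataColumnSidecarsByRangeRequest for the given columns, trimming the column list when the
// number of outstanding sidecars would exceed the limit.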
func (cs *columnSync) request(reqCols []uint64, limit int) (*ethpb.DataColumnSidecarsByRangeRequest, error) {
if len(reqCols) == 0 {
return nil, nil
}
// Use cheaper check to avoid allocating map and counting sidecars if under limit.
if cs.neededSidecarCount() <= limit {
return sync.DataColumnSidecarsByRangeRequest(reqCols, cs.first, cs.last)
}
// Re-slice reqCols to keep the number of requested sidecars under the limit.
reqCount := 0
peerHas := peerdas.NewColumnIndicesFromSlice(reqCols)
needed := cs.neededSidecarsByColumn(peerHas)
for i := range reqCols {
addSidecars := needed[reqCols[i]]
if reqCount+addSidecars > columnRequestLimit {
reqCols = reqCols[:i]
break
}
reqCount += addSidecars
}
return sync.DataColumnSidecarsByRangeRequest(reqCols, cs.first, cs.last)
}
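// validatingColumnRequest pairs a DataColumnSidecarsByRange request with the columnSync and bisector used to
// validate and persist each sidecar in the response.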
type validatingColumnRequest struct {
req *ethpb.DataColumnSidecarsByRangeRequest
columnSync *columnSync
bisector *columnBisector
}
func (v *validatingColumnRequest) validate(cd blocks.RODataColumn) (err error) {
defer func(validity string, start time.Time) {
dataColumnSidecarVerifyMs.Observe(float64(time.Since(start).Milliseconds()))
if err != nil {
validity = "invalid"
}
dataColumnSidecarDownloadCount.WithLabelValues(fmt.Sprintf("%d", cd.Index), validity).Inc()
dataColumnSidecarDownloadBytes.Add(float64(cd.SizeSSZ()))
}("valid", time.Now())
return v.countedValidation(cd)
}
// countedValidation complements the verification checks provided by the availability store when Persist is
// called. It maintains state across response values to ensure each sidecar is valid in the context of the
// overall request, like making sure the block root is one of the ones we expect based on the blocks we used
// to construct the request. It also does cheap sanity checks on the DataColumnSidecar values, like ensuring
// that the commitments line up with the block.
func (v *validatingColumnRequest) countedValidation(cd blocks.RODataColumn) error {
root := cd.BlockRoot()
expected := v.columnSync.blockColumns(root)
if expected == nil {
return errors.Wrapf(errUnexpectedBlockRoot, "root=%#x, slot=%d", root, cd.Slot())
}
// We don't need this column, but we trust that the column state machine verified we asked for it as part of a range request,
// so we can just skip it rather than trying to persist it.
if !expected.remaining.Has(cd.Index) {
return nil
}
if len(cd.KzgCommitments) != len(expected.commitments) {
return errors.Wrapf(errCommitmentLengthMismatch, "root=%#x, slot=%d, index=%d", root, cd.Slot(), cd.Index)
}
for i, cmt := range cd.KzgCommitments {
if !bytes.Equal(cmt, expected.commitments[i]) {
return errors.Wrapf(errCommitmentValueMismatch, "root=%#x, slot=%d, index=%d", root, cd.Slot(), cd.Index)
}
}
if err := v.columnSync.store.Persist(v.columnSync.current, cd); err != nil {
return errors.Wrap(err, "persisting data column")
}
v.bisector.addPeerColumns(v.columnSync.peer, cd)
expected.remaining.Unset(cd.Index)
return nil
}
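// currentCustodiedColumns returns the set of column indices this node custodies, derived from its node ID and
// custody group count.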
func currentCustodiedColumns(ctx context.Context, p p2p.P2P) (peerdas.ColumnIndices, error) {
cgc, err := p.CustodyGroupCount(ctx)
if err != nil {
return nil, errors.Wrap(err, "custody group count")
}
peerInfo, _, err := peerdas.Info(p.NodeID(), cgc)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
return peerdas.NewColumnIndicesFromMap(peerInfo.CustodyColumns), nil
}
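// buildColumnBatch scans the verified blocks and returns a columnBatch describing the column sidecars that are
// needed but not yet in storage. It returns nil when the batch has no blocks or none of its slots need columns.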
func buildColumnBatch(ctx context.Context, b batch, blks verifiedROBlocks, p p2p.P2P, store *filesystem.DataColumnStorage, needs das.CurrentNeeds) (*columnBatch, error) {
if len(blks) == 0 {
return nil, nil
}
if !needs.Col.At(b.begin) && !needs.Col.At(b.end-1) {
return nil, nil
}
indices, err := currentCustodiedColumns(ctx, p)
if err != nil {
return nil, errors.Wrap(err, "current custodied columns")
}
summary := &columnBatch{
custodyGroups: indices,
toDownload: make(map[[32]byte]*toDownload, len(blks)),
}
for _, b := range blks {
slot := b.Block().Slot()
if !needs.Col.At(slot) {
continue
}
cmts, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "failed to get blob kzg commitments")
}
if len(cmts) == 0 {
continue
}
if len(summary.toDownload) == 0 {
// toDownload is only empty the first time through, so this is the first block with data columns.
summary.first = slot
}
// The last block with data columns that this loop sees will set the final value of summary.last.
summary.last = slot
summary.toDownload[b.Root()] = &toDownload{
remaining: das.IndicesNotStored(store.Summary(b.Root()), indices),
commitments: cmts,
slot: slot,
}
}
return summary, nil
}

File diff suppressed because it is too large


@@ -0,0 +1,9 @@
package backfill
import "github.com/pkg/errors"
var errUnrecoverable = errors.New("service in unrecoverable state")
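// isRetryable reports whether a backfill error can be retried; any error wrapping errUnrecoverable cannot.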
func isRetryable(err error) bool {
return !errors.Is(err, errUnrecoverable)
}


@@ -0,0 +1,130 @@
package backfill
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/pkg/errors"
)
var errMissingAvailabilityChecker = errors.Wrap(errUnrecoverable, "batch is missing required availability checker")
var errUnsafeRange = errors.Wrap(errUnrecoverable, "invalid slice indices")
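// checkMultiplexer routes data availability checks to the blob or column AvailabilityChecker based on which
// sidecar type each block's slot requires.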
type checkMultiplexer struct {
blobCheck das.AvailabilityChecker
colCheck das.AvailabilityChecker
currentNeeds das.CurrentNeeds
}
// Compile-time assertion that checkMultiplexer satisfies das.AvailabilityChecker.
var _ das.AvailabilityChecker = &checkMultiplexer{}
// newCheckMultiplexer initializes an AvailabilityChecker that multiplexes to the BlobSidecar and DataColumnSidecar
// AvailabilityCheckers present in the batch.
func newCheckMultiplexer(needs das.CurrentNeeds, b batch) *checkMultiplexer {
s := &checkMultiplexer{currentNeeds: needs}
if b.blobs != nil && b.blobs.store != nil {
s.blobCheck = b.blobs.store
}
if b.columns != nil && b.columns.store != nil {
s.colCheck = b.columns.store
}
return s
}
// IsDataAvailable implements the das.AvailabilityChecker interface.
func (m *checkMultiplexer) IsDataAvailable(ctx context.Context, current primitives.Slot, blks ...blocks.ROBlock) error {
needs, err := m.divideByChecker(blks)
if err != nil {
return errors.Wrap(errUnrecoverable, "failed to slice blocks by DA type")
}
if err := doAvailabilityCheck(ctx, m.blobCheck, current, needs.blobs); err != nil {
return errors.Wrap(err, "blob store availability check failed")
}
if err := doAvailabilityCheck(ctx, m.colCheck, current, needs.cols); err != nil {
return errors.Wrap(err, "column store availability check failed")
}
return nil
}
func doAvailabilityCheck(ctx context.Context, check das.AvailabilityChecker, current primitives.Slot, blks []blocks.ROBlock) error {
if len(blks) == 0 {
return nil
}
// Double check that the checker is non-nil.
if check == nil {
return errMissingAvailabilityChecker
}
return check.IsDataAvailable(ctx, current, blks...)
}
// daGroups is a helper type that groups blocks by their DA type.
type daGroups struct {
blobs []blocks.ROBlock
cols []blocks.ROBlock
}
// divideByChecker slices the given blocks into two slices: one for deneb blocks (BlobSidecar)
// and one for fulu blocks (DataColumnSidecar). Blocks that are pre-deneb or have no
// blob commitments are skipped.
func (m *checkMultiplexer) divideByChecker(blks []blocks.ROBlock) (daGroups, error) {
needs := daGroups{}
for _, blk := range blks {
slot := blk.Block().Slot()
if !m.currentNeeds.Blob.At(slot) && !m.currentNeeds.Col.At(slot) {
continue
}
cmts, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return needs, err
}
if len(cmts) == 0 {
continue
}
if m.currentNeeds.Col.At(slot) {
needs.cols = append(needs.cols, blk)
continue
}
if m.currentNeeds.Blob.At(slot) {
needs.blobs = append(needs.blobs, blk)
continue
}
}
return needs, nil
}
// safeRange is a helper type that enforces safe slicing.
type safeRange struct {
start uint
end uint
}
// isZero returns true if the range is zero-length.
func (r safeRange) isZero() bool {
return r.start == r.end
}
// subSlice returns the subslice of s defined by sub
// if it can be safely sliced, or an error if the range is invalid
// with respect to the slice.
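// For example, subSlice([]int{1, 2, 3, 4}, safeRange{start: 1, end: 3}) returns []int{2, 3}.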
func subSlice[T any](s []T, sub safeRange) ([]T, error) {
slen := uint(len(s))
if slen == 0 || sub.isZero() {
return nil, nil
}
// Check that the range is not inverted.
if sub.end < sub.start {
return nil, errUnsafeRange
}
// Check that both bounds fall within the slice.
if sub.start >= slen || sub.end > slen {
return nil, errUnsafeRange
}
return s[sub.start:sub.end], nil
}


@@ -0,0 +1,822 @@
package backfill
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
type mockChecker struct {
}
var mockAvailabilityFailure = errors.New("fake error from IsDataAvailable")
var mockColumnFailure = errors.Wrap(mockAvailabilityFailure, "column checker failure")
var mockBlobFailure = errors.Wrap(mockAvailabilityFailure, "blob checker failure")
// trackingAvailabilityChecker wraps a das.AvailabilityChecker and tracks calls
type trackingAvailabilityChecker struct {
checker das.AvailabilityChecker
callCount int
blocksSeenPerCall [][]blocks.ROBlock // Track blocks passed in each call
}
// NewTrackingAvailabilityChecker creates a wrapper that tracks calls to the underlying checker
func NewTrackingAvailabilityChecker(checker das.AvailabilityChecker) *trackingAvailabilityChecker {
return &trackingAvailabilityChecker{
checker: checker,
callCount: 0,
blocksSeenPerCall: [][]blocks.ROBlock{},
}
}
// IsDataAvailable implements das.AvailabilityChecker
func (t *trackingAvailabilityChecker) IsDataAvailable(ctx context.Context, current primitives.Slot, blks ...blocks.ROBlock) error {
t.callCount++
// Track a copy of the blocks passed in this call
blocksCopy := make([]blocks.ROBlock, len(blks))
copy(blocksCopy, blks)
t.blocksSeenPerCall = append(t.blocksSeenPerCall, blocksCopy)
// Delegate to the underlying checker
return t.checker.IsDataAvailable(ctx, current, blks...)
}
// GetCallCount returns how many times IsDataAvailable was called
func (t *trackingAvailabilityChecker) GetCallCount() int {
return t.callCount
}
// GetBlocksInCall returns the blocks passed in a specific call (0-indexed)
func (t *trackingAvailabilityChecker) GetBlocksInCall(callIndex int) []blocks.ROBlock {
if callIndex < 0 || callIndex >= len(t.blocksSeenPerCall) {
return nil
}
return t.blocksSeenPerCall[callIndex]
}
// GetTotalBlocksSeen returns total number of blocks seen across all calls
func (t *trackingAvailabilityChecker) GetTotalBlocksSeen() int {
total := 0
for _, blkSlice := range t.blocksSeenPerCall {
total += len(blkSlice)
}
return total
}
func TestNewCheckMultiplexer(t *testing.T) {
denebSlot, fuluSlot := testDenebAndFuluSlots(t)
cases := []struct {
name string
batch func() batch
setupChecker func(*checkMultiplexer)
current primitives.Slot
err error
}{
{
name: "no availability checkers, no blocks",
batch: func() batch { return batch{} },
},
{
name: "no blob availability checkers, deneb blocks",
batch: func() batch {
blks, _ := testBlobGen(t, denebSlot, 2)
return batch{
blocks: blks,
}
},
setupChecker: func(m *checkMultiplexer) {
// Provide a column checker which should be unused in this test.
m.colCheck = &das.MockAvailabilityStore{}
},
err: errMissingAvailabilityChecker,
},
{
name: "no column availability checker, fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot, 2)
return batch{
blocks: blks,
}
},
err: errMissingAvailabilityChecker,
setupChecker: func(m *checkMultiplexer) {
// Provide a blob checker which should be unused in this test.
m.blobCheck = &das.MockAvailabilityStore{}
},
},
{
name: "has column availability checker, fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot, 2)
return batch{
blocks: blks,
}
},
setupChecker: func(m *checkMultiplexer) {
// Provide the column checker that the fulu blocks require.
m.colCheck = &das.MockAvailabilityStore{}
},
},
{
name: "has blob availability checker, deneb blocks",
batch: func() batch {
blks, _ := testBlobGen(t, denebSlot, 2)
return batch{
blocks: blks,
}
},
setupChecker: func(m *checkMultiplexer) {
// Provide the blob checker that the deneb blocks require.
m.blobCheck = &das.MockAvailabilityStore{}
},
},
{
name: "has blob but not col availability checker, deneb and fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot-2, 4) // spans deneb and fulu
return batch{
blocks: blks,
}
},
err: errMissingAvailabilityChecker, // fails because column store is not present
setupChecker: func(m *checkMultiplexer) {
m.blobCheck = &das.MockAvailabilityStore{}
},
},
{
name: "has col but not blob availability checker, deneb and fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot-2, 4) // spans deneb and fulu
return batch{
blocks: blks,
}
},
err: errMissingAvailabilityChecker, // fails because blob store is not present
setupChecker: func(m *checkMultiplexer) {
m.colCheck = &das.MockAvailabilityStore{}
},
},
{
name: "both checkers, deneb and fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot-2, 4) // spans deneb and fulu
return batch{
blocks: blks,
}
},
setupChecker: func(m *checkMultiplexer) {
m.blobCheck = &das.MockAvailabilityStore{}
m.colCheck = &das.MockAvailabilityStore{}
},
},
{
name: "deneb checker fails, deneb and fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot-2, 4) // spans deneb and fulu
return batch{
blocks: blks,
}
},
err: mockBlobFailure,
setupChecker: func(m *checkMultiplexer) {
m.blobCheck = &das.MockAvailabilityStore{ErrIsDataAvailable: mockBlobFailure}
m.colCheck = &das.MockAvailabilityStore{}
},
},
{
name: "fulu checker fails, deneb and fulu blocks",
batch: func() batch {
blks, _ := testBlobGen(t, fuluSlot-2, 4) // spans deneb and fulu
return batch{
blocks: blks,
}
},
err: mockBlobFailure,
setupChecker: func(m *checkMultiplexer) {
m.blobCheck = &das.MockAvailabilityStore{}
m.colCheck = &das.MockAvailabilityStore{ErrIsDataAvailable: mockBlobFailure}
},
},
}
needs := mockCurrentSpecNeeds()
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
b := tc.batch()
checker := newCheckMultiplexer(needs, b)
if tc.setupChecker != nil {
tc.setupChecker(checker)
}
err := checker.IsDataAvailable(t.Context(), tc.current, b.blocks...)
if tc.err != nil {
require.ErrorIs(t, err, tc.err)
} else {
require.NoError(t, err)
}
})
}
}
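// testBlocksWithCommitments creates count test blocks starting at startSlot,
// each carrying a single blob commitment.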
func testBlocksWithCommitments(t *testing.T, startSlot primitives.Slot, count int) []blocks.ROBlock {
blks := make([]blocks.ROBlock, count)
for i := range count {
blk, _ := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, startSlot+primitives.Slot(i), 1)
blks[i] = blk
}
return blks
}
func TestDaNeeds(t *testing.T) {
denebSlot, fuluSlot := testDenebAndFuluSlots(t)
mux := &checkMultiplexer{currentNeeds: mockCurrentSpecNeeds()}
cases := []struct {
name string
setup func() (daGroups, []blocks.ROBlock)
expect daGroups
err error
}{
{
name: "empty range",
setup: func() (daGroups, []blocks.ROBlock) {
return daGroups{}, testBlocksWithCommitments(t, 10, 5)
},
},
{
name: "single deneb block",
setup: func() (daGroups, []blocks.ROBlock) {
blks := testBlocksWithCommitments(t, denebSlot, 1)
return daGroups{
blobs: []blocks.ROBlock{blks[0]},
}, blks
},
},
{
name: "single fulu block",
setup: func() (daGroups, []blocks.ROBlock) {
blks := testBlocksWithCommitments(t, fuluSlot, 1)
return daGroups{
cols: []blocks.ROBlock{blks[0]},
}, blks
},
},
{
name: "deneb range",
setup: func() (daGroups, []blocks.ROBlock) {
blks := testBlocksWithCommitments(t, denebSlot, 3)
return daGroups{
blobs: blks,
}, blks
},
},
{
name: "one deneb one fulu",
setup: func() (daGroups, []blocks.ROBlock) {
deneb := testBlocksWithCommitments(t, denebSlot, 1)
fulu := testBlocksWithCommitments(t, fuluSlot, 1)
return daGroups{
blobs: []blocks.ROBlock{deneb[0]},
cols: []blocks.ROBlock{fulu[0]},
}, append(deneb, fulu...)
},
},
{
name: "deneb and fulu range",
setup: func() (daGroups, []blocks.ROBlock) {
deneb := testBlocksWithCommitments(t, denebSlot, 3)
fulu := testBlocksWithCommitments(t, fuluSlot, 3)
return daGroups{
blobs: deneb,
cols: fulu,
}, append(deneb, fulu...)
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
expectNeeds, blks := tc.setup()
needs, err := mux.divideByChecker(blks)
if tc.err != nil {
require.ErrorIs(t, err, tc.err)
} else {
require.NoError(t, err)
}
expectBlob := make(map[[32]byte]struct{})
for _, blk := range expectNeeds.blobs {
expectBlob[blk.Root()] = struct{}{}
}
for _, blk := range needs.blobs {
_, ok := expectBlob[blk.Root()]
require.Equal(t, true, ok, "unexpected blob block root %#x", blk.Root())
delete(expectBlob, blk.Root())
}
require.Equal(t, 0, len(expectBlob), "missing blob blocks")
expectCol := make(map[[32]byte]struct{})
for _, blk := range expectNeeds.cols {
expectCol[blk.Root()] = struct{}{}
}
for _, blk := range needs.cols {
_, ok := expectCol[blk.Root()]
require.Equal(t, true, ok, "unexpected col block root %#x", blk.Root())
delete(expectCol, blk.Root())
}
require.Equal(t, 0, len(expectCol), "missing col blocks")
})
}
}
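// testDenebAndFuluSlots returns the first slot of the Deneb and Fulu fork
// epochs, assigning a finite FuluForkEpoch when the test config leaves it unset.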
func testDenebAndFuluSlots(t *testing.T) (primitives.Slot, primitives.Slot) {
params.SetupTestConfigCleanup(t)
denebEpoch := params.BeaconConfig().DenebForkEpoch
if params.BeaconConfig().FuluForkEpoch == params.BeaconConfig().FarFutureEpoch {
params.BeaconConfig().FuluForkEpoch = denebEpoch + 4096*2
}
fuluEpoch := params.BeaconConfig().FuluForkEpoch
fuluSlot, err := slots.EpochStart(fuluEpoch)
require.NoError(t, err)
denebSlot, err := slots.EpochStart(denebEpoch)
require.NoError(t, err)
return denebSlot, fuluSlot
}
// testBlocksWithoutCommitments creates test blocks carrying zero blob
// commitments, unlike testBlocksWithCommitments, which attaches one per block.
func testBlocksWithoutCommitments(t *testing.T, startSlot primitives.Slot, count int) []blocks.ROBlock {
blks := make([]blocks.ROBlock, count)
for i := range count {
blk, _ := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, startSlot+primitives.Slot(i), 0)
blks[i] = blk
}
return blks
}
// TestBlockDaNeedsWithoutCommitments verifies blocks without commitments are skipped
func TestBlockDaNeedsWithoutCommitments(t *testing.T) {
denebSlot, fuluSlot := testDenebAndFuluSlots(t)
mux := &checkMultiplexer{currentNeeds: mockCurrentSpecNeeds()}
cases := []struct {
name string
setup func() (daGroups, []blocks.ROBlock)
expect daGroups
err error
}{
{
name: "deneb blocks without commitments",
setup: func() (daGroups, []blocks.ROBlock) {
blks := testBlocksWithoutCommitments(t, denebSlot, 3)
return daGroups{}, blks // Expect empty daNeeds
},
},
{
name: "fulu blocks without commitments",
setup: func() (daGroups, []blocks.ROBlock) {
blks := testBlocksWithoutCommitments(t, fuluSlot, 3)
return daGroups{}, blks // Expect empty daNeeds
},
},
{
name: "mixed: some deneb with commitments, some without",
setup: func() (daGroups, []blocks.ROBlock) {
withCommit := testBlocksWithCommitments(t, denebSlot, 2)
withoutCommit := testBlocksWithoutCommitments(t, denebSlot+2, 2)
blks := append(withCommit, withoutCommit...)
return daGroups{
blobs: withCommit, // Only the ones with commitments
}, blks
},
},
{
name: "pre-deneb blocks are skipped",
setup: func() (daGroups, []blocks.ROBlock) {
blks := testBlocksWithCommitments(t, denebSlot-10, 5)
return daGroups{}, blks // All pre-deneb, expect empty
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
expectNeeds, blks := tc.setup()
needs, err := mux.divideByChecker(blks)
if tc.err != nil {
require.ErrorIs(t, err, tc.err)
} else {
require.NoError(t, err)
}
// Verify blob blocks
require.Equal(t, len(expectNeeds.blobs), len(needs.blobs),
"expected %d blob blocks, got %d", len(expectNeeds.blobs), len(needs.blobs))
// Verify col blocks
require.Equal(t, len(expectNeeds.cols), len(needs.cols),
"expected %d col blocks, got %d", len(expectNeeds.cols), len(needs.cols))
})
}
}
// TestBlockDaNeedsAcrossEras verifies blocks spanning pre-deneb/deneb/fulu boundaries
func TestBlockDaNeedsAcrossEras(t *testing.T) {
denebSlot, fuluSlot := testDenebAndFuluSlots(t)
mux := &checkMultiplexer{currentNeeds: mockCurrentSpecNeeds()}
cases := []struct {
name string
setup func() (daGroups, []blocks.ROBlock)
expectBlobCount int
expectColCount int
}{
{
name: "pre-deneb, deneb, fulu sequence",
setup: func() (daGroups, []blocks.ROBlock) {
preDeneb := testBlocksWithCommitments(t, denebSlot-1, 1)
deneb := testBlocksWithCommitments(t, denebSlot, 2)
fulu := testBlocksWithCommitments(t, fuluSlot, 2)
blks := append(preDeneb, append(deneb, fulu...)...)
return daGroups{}, blks
},
expectBlobCount: 2, // Only deneb blocks
expectColCount: 2, // Only fulu blocks
},
{
name: "blocks at exact deneb boundary",
setup: func() (daGroups, []blocks.ROBlock) {
atBoundary := testBlocksWithCommitments(t, denebSlot, 1)
return daGroups{
blobs: atBoundary,
}, atBoundary
},
expectBlobCount: 1,
expectColCount: 0,
},
{
name: "blocks at exact fulu boundary",
setup: func() (daGroups, []blocks.ROBlock) {
atBoundary := testBlocksWithCommitments(t, fuluSlot, 1)
return daGroups{
cols: atBoundary,
}, atBoundary
},
expectBlobCount: 0,
expectColCount: 1,
},
{
name: "many deneb blocks before fulu transition",
setup: func() (daGroups, []blocks.ROBlock) {
deneb := testBlocksWithCommitments(t, denebSlot, 10)
fulu := testBlocksWithCommitments(t, fuluSlot, 5)
blks := append(deneb, fulu...)
return daGroups{}, blks
},
expectBlobCount: 10,
expectColCount: 5,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
_, blks := tc.setup()
needs, err := mux.divideByChecker(blks)
require.NoError(t, err)
require.Equal(t, tc.expectBlobCount, len(needs.blobs),
"expected %d blob blocks, got %d", tc.expectBlobCount, len(needs.blobs))
require.Equal(t, tc.expectColCount, len(needs.cols),
"expected %d col blocks, got %d", tc.expectColCount, len(needs.cols))
})
}
}
// TestDoAvailabilityCheckEdgeCases verifies edge cases in doAvailabilityCheck
func TestDoAvailabilityCheckEdgeCases(t *testing.T) {
denebSlot, _ := testDenebAndFuluSlots(t)
checkerErr := errors.New("checker error")
cases := []struct {
name string
checker das.AvailabilityChecker
blocks []blocks.ROBlock
expectErr error
setupTestBlocks func() []blocks.ROBlock
}{
{
name: "nil checker with empty blocks",
checker: nil,
blocks: []blocks.ROBlock{},
expectErr: nil, // Should succeed with no blocks
},
{
name: "nil checker with blocks",
checker: nil,
expectErr: errMissingAvailabilityChecker,
setupTestBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, denebSlot, 1)
},
},
{
name: "valid checker with empty blocks",
checker: &das.MockAvailabilityStore{},
blocks: []blocks.ROBlock{},
expectErr: nil,
},
{
name: "valid checker with blocks succeeds",
checker: &das.MockAvailabilityStore{},
expectErr: nil,
setupTestBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, denebSlot, 3)
},
},
{
name: "valid checker error is propagated",
checker: &das.MockAvailabilityStore{ErrIsDataAvailable: checkerErr},
expectErr: checkerErr,
setupTestBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, denebSlot, 1)
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
blks := tc.blocks
if tc.setupTestBlocks != nil {
blks = tc.setupTestBlocks()
}
err := doAvailabilityCheck(t.Context(), tc.checker, denebSlot, blks)
if tc.expectErr != nil {
require.NotNil(t, err)
require.ErrorIs(t, err, tc.expectErr)
} else {
require.NoError(t, err)
}
})
}
}
// TestBlockDaNeedsErrorWrapping verifies error messages are properly wrapped
func TestBlockDaNeedsErrorWrapping(t *testing.T) {
denebSlot, _ := testDenebAndFuluSlots(t)
mux := &checkMultiplexer{currentNeeds: mockCurrentSpecNeeds()}
// Test with a block that has commitments but in deneb range
blks := testBlocksWithCommitments(t, denebSlot, 2)
// This should succeed without errors
needs, err := mux.divideByChecker(blks)
require.NoError(t, err)
require.Equal(t, 2, len(needs.blobs))
require.Equal(t, 0, len(needs.cols))
}
// TestIsDataAvailableCallRouting verifies that blocks are routed to the correct checker
// based on their era (pre-deneb, deneb, fulu) and tests various block combinations
func TestIsDataAvailableCallRouting(t *testing.T) {
denebSlot, fuluSlot := testDenebAndFuluSlots(t)
cases := []struct {
name string
buildBlocks func() []blocks.ROBlock
expectedBlobCalls int
expectedBlobBlocks int
expectedColCalls int
expectedColBlocks int
}{
{
name: "PreDenebOnly",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, denebSlot-10, 3)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "DenebOnly",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, denebSlot, 3)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 3,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "FuluOnly",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, fuluSlot, 3)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 1,
expectedColBlocks: 3,
},
{
name: "PreDeneb_Deneb_Mix",
buildBlocks: func() []blocks.ROBlock {
preDeneb := testBlocksWithCommitments(t, denebSlot-10, 3)
deneb := testBlocksWithCommitments(t, denebSlot, 3)
return append(preDeneb, deneb...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 3,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "PreDeneb_Fulu_Mix",
buildBlocks: func() []blocks.ROBlock {
preDeneb := testBlocksWithCommitments(t, denebSlot-10, 3)
fulu := testBlocksWithCommitments(t, fuluSlot, 3)
return append(preDeneb, fulu...)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 1,
expectedColBlocks: 3,
},
{
name: "Deneb_Fulu_Mix",
buildBlocks: func() []blocks.ROBlock {
deneb := testBlocksWithCommitments(t, denebSlot, 3)
fulu := testBlocksWithCommitments(t, fuluSlot, 3)
return append(deneb, fulu...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 3,
expectedColCalls: 1,
expectedColBlocks: 3,
},
{
name: "PreDeneb_Deneb_Fulu_Mix",
buildBlocks: func() []blocks.ROBlock {
preDeneb := testBlocksWithCommitments(t, denebSlot-10, 3)
deneb := testBlocksWithCommitments(t, denebSlot, 4)
fulu := testBlocksWithCommitments(t, fuluSlot, 3)
return append(preDeneb, append(deneb, fulu...)...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 4,
expectedColCalls: 1,
expectedColBlocks: 3,
},
{
name: "DenebNoCommitments",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithoutCommitments(t, denebSlot, 3)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "FuluNoCommitments",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithoutCommitments(t, fuluSlot, 3)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "MixedCommitments_Deneb",
buildBlocks: func() []blocks.ROBlock {
with := testBlocksWithCommitments(t, denebSlot, 3)
without := testBlocksWithoutCommitments(t, denebSlot+3, 3)
return append(with, without...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 3,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "MixedCommitments_Fulu",
buildBlocks: func() []blocks.ROBlock {
with := testBlocksWithCommitments(t, fuluSlot, 3)
without := testBlocksWithoutCommitments(t, fuluSlot+3, 3)
return append(with, without...)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 1,
expectedColBlocks: 3,
},
{
name: "MixedCommitments_All",
buildBlocks: func() []blocks.ROBlock {
denebWith := testBlocksWithCommitments(t, denebSlot, 3)
denebWithout := testBlocksWithoutCommitments(t, denebSlot+3, 2)
fuluWith := testBlocksWithCommitments(t, fuluSlot, 3)
fuluWithout := testBlocksWithoutCommitments(t, fuluSlot+3, 2)
return append(denebWith, append(denebWithout, append(fuluWith, fuluWithout...)...)...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 3,
expectedColCalls: 1,
expectedColBlocks: 3,
},
{
name: "EmptyBlocks",
buildBlocks: func() []blocks.ROBlock {
return []blocks.ROBlock{}
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "SingleDeneb",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, denebSlot, 1)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 1,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "SingleFulu",
buildBlocks: func() []blocks.ROBlock {
return testBlocksWithCommitments(t, fuluSlot, 1)
},
expectedBlobCalls: 0,
expectedBlobBlocks: 0,
expectedColCalls: 1,
expectedColBlocks: 1,
},
{
name: "DenebAtBoundary",
buildBlocks: func() []blocks.ROBlock {
preDeneb := testBlocksWithCommitments(t, denebSlot-1, 1)
atBoundary := testBlocksWithCommitments(t, denebSlot, 1)
return append(preDeneb, atBoundary...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 1,
expectedColCalls: 0,
expectedColBlocks: 0,
},
{
name: "FuluAtBoundary",
buildBlocks: func() []blocks.ROBlock {
deneb := testBlocksWithCommitments(t, fuluSlot-1, 1)
atBoundary := testBlocksWithCommitments(t, fuluSlot, 1)
return append(deneb, atBoundary...)
},
expectedBlobCalls: 1,
expectedBlobBlocks: 1,
expectedColCalls: 1,
expectedColBlocks: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Create tracking wrappers around mock checkers
blobTracker := NewTrackingAvailabilityChecker(&das.MockAvailabilityStore{})
colTracker := NewTrackingAvailabilityChecker(&das.MockAvailabilityStore{})
// Create multiplexer with tracked checkers
mux := &checkMultiplexer{
blobCheck: blobTracker,
colCheck: colTracker,
currentNeeds: mockCurrentSpecNeeds(),
}
// Build blocks and run availability check
blks := tc.buildBlocks() // avoid shadowing the blocks package
err := mux.IsDataAvailable(t.Context(), denebSlot, blks...)
require.NoError(t, err)
// Assert blob checker was called the expected number of times
require.Equal(t, tc.expectedBlobCalls, blobTracker.GetCallCount(),
"blob checker call count mismatch for test %s", tc.name)
// Assert blob checker saw the expected number of blocks
require.Equal(t, tc.expectedBlobBlocks, blobTracker.GetTotalBlocksSeen(),
"blob checker block count mismatch for test %s", tc.name)
// Assert column checker was called the expected number of times
require.Equal(t, tc.expectedColCalls, colTracker.GetCallCount(),
"column checker call count mismatch for test %s", tc.name)
// Assert column checker saw the expected number of blocks
require.Equal(t, tc.expectedColBlocks, colTracker.GetTotalBlocksSeen(),
"column checker block count mismatch for test %s", tc.name)
})
}
}
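A minimal wiring sketch of the contract pinned down by the tests above (illustrative only; the blobCheck/colCheck fields, divideByChecker, mockCurrentSpecNeeds, and the blks placeholder are taken from these tests, not from the production constructor):
mux := &checkMultiplexer{
blobCheck: &das.MockAvailabilityStore{}, // serves Deneb-era blocks with commitments
colCheck: &das.MockAvailabilityStore{}, // serves Fulu-era blocks with commitments
currentNeeds: mockCurrentSpecNeeds(),
}
groups, err := mux.divideByChecker(blks) // groups.blobs route to blobCheck, groups.cols to colCheck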

View File

@@ -1,5 +1,115 @@
package backfill
import "github.com/sirupsen/logrus"
import (
"sync"
"sync/atomic"
"time"
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "backfill")
// intervalLogger logs at most once per interval. It customizes a single
// entry/logger instance and should only be used to control the logging rate for
// *one specific line of code*.
type intervalLogger struct {
*logrus.Entry
base *logrus.Entry
mux sync.Mutex
seconds int64 // seconds is the number of seconds per logging interval
last *atomic.Int64 // last is the quantized representation of the last time a log was emitted
now func() time.Time
}
func newIntervalLogger(base *logrus.Entry, secondsBetweenLogs int64) *intervalLogger {
return &intervalLogger{
Entry: base,
base: base,
seconds: secondsBetweenLogs,
last: new(atomic.Int64),
now: time.Now,
}
}
// intervalNumber is a separate pure function so that tests can verify proper
// timestamp alignment.
func intervalNumber(t time.Time, seconds int64) int64 {
return t.Unix() / seconds
}
// intervalNumber returns the current unix timestamp integer-divided by the
// number of seconds per interval.
func (l *intervalLogger) intervalNumber() int64 {
return intervalNumber(l.now(), l.seconds)
}
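// For example (illustrative): with seconds = 10, unix times 1700000000 through
// 1700000009 all quantize to interval 170000000, while 1700000010 begins
// interval 170000001, so at most one log line is emitted per 10-second bucket.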
func (l *intervalLogger) copy() *intervalLogger {
return &intervalLogger{
Entry: l.Entry,
base: l.base,
seconds: l.seconds,
last: l.last,
now: l.now,
}
}
// Log shadows the Log() method of the embedded logrus.Entry, which is called under the hood
// when a log-level specific method (like Info(), Warn(), Error()) is invoked.
// By intercepting this call we can rate limit how often we log.
func (l *intervalLogger) Log(level logrus.Level, args ...any) {
n := l.intervalNumber()
// If Swap returns a different value than the current interval number, we haven't
// emitted a log yet this interval, so we can do so now.
if l.last.Swap(n) != n {
l.Entry.Log(level, args...)
}
// No reset of the Entry is needed here: WithField/WithError return copies, so
// fields added for one call never persist on the base logger.
}
func (l *intervalLogger) WithField(key string, value any) *intervalLogger {
cp := l.copy()
cp.Entry = cp.Entry.WithField(key, value)
return cp
}
func (l *intervalLogger) WithFields(fields logrus.Fields) *intervalLogger {
cp := l.copy()
cp.Entry = cp.Entry.WithFields(fields)
return cp
}
func (l *intervalLogger) WithError(err error) *intervalLogger {
cp := l.copy()
cp.Entry = cp.Entry.WithError(err)
return cp
}
func (l *intervalLogger) Trace(args ...any) {
l.Log(logrus.TraceLevel, args...)
}
func (l *intervalLogger) Debug(args ...any) {
l.Log(logrus.DebugLevel, args...)
}
func (l *intervalLogger) Print(args ...any) {
l.Info(args...)
}
func (l *intervalLogger) Info(args ...any) {
l.Log(logrus.InfoLevel, args...)
}
func (l *intervalLogger) Warn(args ...any) {
l.Log(logrus.WarnLevel, args...)
}
func (l *intervalLogger) Warning(args ...any) {
l.Warn(args...)
}
func (l *intervalLogger) Error(args ...any) {
l.Log(logrus.ErrorLevel, args...)
}
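A minimal usage sketch of the rate-limited logger above (illustrative only; the 30-second interval, the slot field, and the slotsToProcess loop are placeholders, and the logger must be constructed once and reused, as the worker pool later in this diff does with peerFailLogger):
progressLog := newIntervalLogger(log, 30)
for _, slot := range slotsToProcess {
// Only the first call in each 30-second interval emits a line; the rest are dropped.
progressLog.WithField("slot", slot).Info("Backfill progress")
}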

View File

@@ -0,0 +1,379 @@
package backfill
import (
"bytes"
"sync"
"testing"
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
)
// trackingHook is a logrus hook that records emitted entries so tests can count Log calls.
type trackingHook struct {
mu sync.RWMutex
entries []*logrus.Entry
}
func (h *trackingHook) Levels() []logrus.Level {
return logrus.AllLevels
}
func (h *trackingHook) Fire(entry *logrus.Entry) error {
h.mu.Lock()
defer h.mu.Unlock()
h.entries = append(h.entries, entry)
return nil
}
func (h *trackingHook) callCount() int {
h.mu.RLock()
defer h.mu.RUnlock()
return len(h.entries)
}
func (h *trackingHook) emitted(t *testing.T) []string {
h.mu.RLock()
defer h.mu.RUnlock()
e := make([]string, len(h.entries))
for i, entry := range h.entries {
entry.Buffer = new(bytes.Buffer)
serialized, err := entry.Logger.Formatter.Format(entry)
require.NoError(t, err)
e[i] = string(serialized)
}
return e
}
func entryWithHook() (*logrus.Entry, *trackingHook) {
logger := logrus.New()
logger.SetLevel(logrus.TraceLevel)
hook := &trackingHook{}
logger.AddHook(hook)
entry := logrus.NewEntry(logger)
return entry, hook
}
func intervalSecondsAndDuration(i int) (int64, time.Duration) {
return int64(i), time.Duration(i) * time.Second
}
// mockClock provides a controllable time source for testing.
// It allows tests to set the current time and advance it as needed.
type mockClock struct {
t time.Time
}
// now returns the current time.
func (c *mockClock) now() time.Time {
return c.t
}
func setupMockClock(il *intervalLogger) *mockClock {
// initialize now so that the time aligns with the start of the
// interval bucket. This ensures that adding less than an interval
// of time to the timestamp can never move into the next bucket.
interval := intervalNumber(time.Now(), il.seconds)
now := time.Unix(interval*il.seconds, 0)
clock := &mockClock{t: now}
il.now = clock.now
return clock
}
// TestNewIntervalLogger verifies logger is properly initialized
func TestNewIntervalLogger(t *testing.T) {
base := logrus.NewEntry(logrus.New())
intSec := int64(10)
il := newIntervalLogger(base, intSec)
require.NotNil(t, il)
require.Equal(t, intSec, il.seconds)
require.Equal(t, int64(0), il.last.Load())
require.Equal(t, base, il.Entry)
}
// TestLogOncePerInterval verifies that Log is called only once within an interval window
func TestLogOncePerInterval(t *testing.T) {
entry, hook := entryWithHook()
il := newIntervalLogger(entry, 10)
_ = setupMockClock(il) // use a fixed time to make sure no race is possible
// First log should call the underlying Log method
il.Log(logrus.InfoLevel, "test message 1")
require.Equal(t, 1, hook.callCount())
// Second log in same interval should not call Log
il.Log(logrus.InfoLevel, "test message 2")
require.Equal(t, 1, hook.callCount())
// Third log still in same interval should not call Log
il.Log(logrus.InfoLevel, "test message 3")
require.Equal(t, 1, hook.callCount())
// Verify last is set to current interval
require.Equal(t, il.intervalNumber(), il.last.Load())
}
// TestLogAcrossIntervalBoundary verifies logging at interval boundaries resets correctly
func TestLogAcrossIntervalBoundary(t *testing.T) {
iSec, iDur := intervalSecondsAndDuration(10)
entry, hook := entryWithHook()
il := newIntervalLogger(entry, iSec)
clock := setupMockClock(il)
il.Log(logrus.InfoLevel, "first interval")
require.Equal(t, 1, hook.callCount())
// Log in new interval should succeed
clock.t = clock.t.Add(2 * iDur)
il.Log(logrus.InfoLevel, "second interval")
require.Equal(t, 2, hook.callCount())
}
// TestWithFieldChaining verifies WithField returns logger and can be chained
func TestWithFieldChaining(t *testing.T) {
entry, hook := entryWithHook()
iSec, iDur := intervalSecondsAndDuration(10)
il := newIntervalLogger(entry, iSec)
clock := setupMockClock(il)
result := il.WithField("key1", "value1")
require.NotNil(t, result)
result.Info("test")
require.Equal(t, 1, hook.callCount())
// make sure there was no mutation of the base as a side effect
clock.t = clock.t.Add(iDur)
il.Info("another")
// Verify field is present in logged entry
emitted := hook.emitted(t)
require.Contains(t, emitted[0], "test")
require.Contains(t, emitted[0], "key1=value1")
require.Contains(t, emitted[1], "another")
require.NotContains(t, emitted[1], "key1=value1")
}
// TestWithFieldsChaining verifies WithFields properly adds multiple fields
func TestWithFieldsChaining(t *testing.T) {
entry, hook := entryWithHook()
iSec, iDur := intervalSecondsAndDuration(10)
il := newIntervalLogger(entry, iSec)
clock := setupMockClock(il)
fields := logrus.Fields{
"key1": "value1",
"key2": "value2",
}
result := il.WithFields(fields)
require.NotNil(t, result)
result.Info("test")
require.Equal(t, 1, hook.callCount())
// make sure there was no mutation of the base as a side effect
clock.t = clock.t.Add(iDur)
il.Info("another")
// Verify field is present in logged entry
emitted := hook.emitted(t)
require.Contains(t, emitted[0], "test")
require.Contains(t, emitted[0], "key1=value1")
require.Contains(t, emitted[0], "key2=value2")
require.Contains(t, emitted[1], "another")
require.NotContains(t, emitted[1], "key1=value1")
require.NotContains(t, emitted[1], "key2=value2")
}
// TestWithErrorChaining verifies WithError properly adds error field
func TestWithErrorChaining(t *testing.T) {
entry, hook := entryWithHook()
iSec, iDur := intervalSecondsAndDuration(10)
il := newIntervalLogger(entry, iSec)
clock := setupMockClock(il)
expected := errors.New("lowercase words")
result := il.WithError(expected)
require.NotNil(t, result)
result.Error("test")
require.Equal(t, 1, hook.callCount())
require.NotNil(t, result)
// make sure there was no mutation of the base as a side effect
clock.t = clock.t.Add(iDur)
il.Info("different")
// Verify field is present in logged entry
emitted := hook.emitted(t)
require.Contains(t, emitted[0], expected.Error())
require.Contains(t, emitted[0], "test")
require.Contains(t, emitted[1], "different")
require.NotContains(t, emitted[1], "test")
require.NotContains(t, emitted[1], "lowercase words")
}
// TestLogLevelMethods verifies all log level methods work and respect rate limiting
func TestLogLevelMethods(t *testing.T) {
entry, hook := entryWithHook()
il := newIntervalLogger(entry, 10)
_ = setupMockClock(il) // use a fixed time to make sure no race is possible
// First call from each level-specific method should succeed
il.Trace("trace message")
require.Equal(t, 1, hook.callCount())
// Subsequent calls in the same interval should be suppressed
il.Debug("debug message")
require.Equal(t, 1, hook.callCount())
il.Info("info message")
require.Equal(t, 1, hook.callCount())
il.Print("print message")
require.Equal(t, 1, hook.callCount())
il.Warn("warn message")
require.Equal(t, 1, hook.callCount())
il.Warning("warning message")
require.Equal(t, 1, hook.callCount())
il.Error("error message")
require.Equal(t, 1, hook.callCount())
}
// TestConcurrentLogging verifies multiple goroutines can safely call Log concurrently
func TestConcurrentLogging(t *testing.T) {
entry, hook := entryWithHook()
il := newIntervalLogger(entry, 10)
_ = setupMockClock(il) // use a fixed time to make sure no race is possible
var wg sync.WaitGroup
wait := make(chan struct{})
for range 10 {
wg.Add(1)
go func() {
<-wait
defer wg.Done()
il.Log(logrus.InfoLevel, "concurrent message")
}()
}
close(wait) // maximize raciness by unblocking goroutines together
wg.Wait()
// Only one Log call should succeed across all goroutines in the same interval
require.Equal(t, 1, hook.callCount())
}
// TestZeroInterval verifies behavior with small interval (logs every second)
func TestZeroInterval(t *testing.T) {
entry, hook := entryWithHook()
il := newIntervalLogger(entry, 1)
clock := setupMockClock(il)
il.Log(logrus.InfoLevel, "first")
require.Equal(t, 1, hook.callCount())
// Move to next second
clock.t = clock.t.Add(time.Second)
il.Log(logrus.InfoLevel, "second")
require.Equal(t, 2, hook.callCount())
}
// TestCompleteLoggingFlow tests a realistic scenario with repeated logging
func TestCompleteLoggingFlow(t *testing.T) {
entry, hook := entryWithHook()
iSec, iDur := intervalSecondsAndDuration(10)
il := newIntervalLogger(entry, iSec)
clock := setupMockClock(il)
// Add field
il = il.WithField("request_id", "12345")
// Log multiple times in same interval - only first succeeds
il.Info("message 1")
require.Equal(t, 1, hook.callCount())
il.Warn("message 2")
require.Equal(t, 1, hook.callCount())
// Move to next interval
clock.t = clock.t.Add(iDur)
// Should be able to log again in new interval
il.Error("message 3")
require.Equal(t, 2, hook.callCount())
require.NotNil(t, il)
}
// TestAtomicSwapCorrectness verifies atomic swap works correctly
func TestAtomicSwapCorrectness(t *testing.T) {
il := newIntervalLogger(logrus.NewEntry(logrus.New()), 10)
_ = setupMockClock(il) // use a fixed time to make sure no race is possible
// Swap operation should return different value on first call
current := il.intervalNumber()
old := il.last.Swap(current)
require.Equal(t, int64(0), old) // initial value is 0
require.Equal(t, current, il.last.Load())
// Swap with same value should return the same value
old = il.last.Swap(current)
require.Equal(t, current, old)
}
// TestLogMethodsWithClockAdvancement verifies that log methods respect rate limiting
// within an interval but emit again after the interval passes.
func TestLogMethodsWithClockAdvancement(t *testing.T) {
entry, hook := entryWithHook()
iSec, iDur := intervalSecondsAndDuration(10)
il := newIntervalLogger(entry, iSec)
clock := setupMockClock(il)
// First Error call should log
il.Error("error 1")
require.Equal(t, 1, hook.callCount())
// Warn call in same interval should be suppressed
il.Warn("warn 1")
require.Equal(t, 1, hook.callCount())
// Info call in same interval should be suppressed
il.Info("info 1")
require.Equal(t, 1, hook.callCount())
// Debug call in same interval should be suppressed
il.Debug("debug 1")
require.Equal(t, 1, hook.callCount())
// Move forward 5 seconds - still in same 10-second interval
require.Equal(t, 5*time.Second, iDur/2)
clock.t = clock.t.Add(iDur / 2)
il.Error("error 2")
require.Equal(t, 1, hook.callCount(), "should still be suppressed within same interval")
firstInterval := il.intervalNumber()
// Move forward to next interval (10 second interval boundary)
clock.t = clock.t.Add(iDur / 2)
nextInterval := il.intervalNumber()
require.NotEqual(t, firstInterval, nextInterval, "should be in new interval now")
il.Error("error 3")
require.Equal(t, 2, hook.callCount(), "should emit in new interval")
// Another call in the new interval should be suppressed
il.Warn("warn 2")
require.Equal(t, 2, hook.callCount())
// Move forward to yet another interval
clock.t = clock.t.Add(iDur)
il.Info("info 2")
require.Equal(t, 3, hook.callCount(), "should emit in third interval")
}

View File

@@ -21,86 +21,117 @@ var (
Help: "Number of batches that are ready to be imported once they can be connected to the existing chain.",
},
)
backfillRemainingBatches = promauto.NewGauge(
batchesRemaining = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "backfill_remaining_batches",
Help: "Backfill remaining batches.",
},
)
backfillBatchesImported = promauto.NewCounter(
batchesImported = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_batches_imported",
Help: "Number of backfill batches downloaded and imported.",
},
)
backfillBlocksApproximateBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blocks_bytes_downloaded",
Help: "BeaconBlock bytes downloaded from peers for backfill.",
backfillBatchTimeWaiting = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_waiting_ms",
Help: "Time batch waited for a suitable peer in ms.",
Buckets: []float64{50, 100, 300, 1000, 2000},
},
)
backfillBlobsApproximateBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blobs_bytes_downloaded",
Help: "BlobSidecar bytes downloaded from peers for backfill.",
backfillBatchTimeRoundtrip = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_roundtrip_ms",
Help: "Total time to import batch, from first scheduled to imported.",
Buckets: []float64{1000, 2000, 4000, 6000, 10000},
},
)
backfillBlobsDownloadCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blobs_download_count",
Help: "Number of BlobSidecar values downloaded from peers for backfill.",
},
)
backfillBlocksDownloadCount = promauto.NewCounter(
blockDownloadCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blocks_download_count",
Help: "Number of BeaconBlock values downloaded from peers for backfill.",
},
)
backfillBatchTimeRoundtrip = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_time_roundtrip",
Help: "Total time to import batch, from first scheduled to imported.",
Buckets: []float64{400, 800, 1600, 3200, 6400, 12800},
blockDownloadBytesApprox = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blocks_downloaded_bytes",
Help: "BeaconBlock bytes downloaded from peers for backfill.",
},
)
backfillBatchTimeWaiting = promauto.NewHistogram(
blockDownloadMs = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_time_waiting",
Help: "Time batch waited for a suitable peer.",
Buckets: []float64{50, 100, 300, 1000, 2000},
},
)
backfillBatchTimeDownloadingBlocks = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_blocks_time_download",
Help: "Time, in milliseconds, batch spent downloading blocks from peer.",
Name: "backfill_batch_blocks_download_ms",
Help: "BeaconBlock download time, in ms.",
Buckets: []float64{100, 300, 1000, 2000, 4000, 8000},
},
)
backfillBatchTimeDownloadingBlobs = promauto.NewHistogram(
blockVerifyMs = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_blobs_time_download",
Help: "Time, in milliseconds, batch spent downloading blobs from peer.",
Name: "backfill_batch_verify_ms",
Help: "BeaconBlock verification time, in ms.",
Buckets: []float64{100, 300, 1000, 2000, 4000, 8000},
},
)
backfillBatchTimeVerifying = promauto.NewHistogram(
blobSidecarDownloadCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blobs_download_count",
Help: "Number of BlobSidecar values downloaded from peers for backfill.",
},
)
blobSidecarDownloadBytesApprox = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_blobs_downloaded_bytes",
Help: "BlobSidecar bytes downloaded from peers for backfill.",
},
)
blobSidecarDownloadMs = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_time_verify",
Help: "Time batch spent downloading blocks from peer.",
Name: "backfill_batch_blobs_download_ms",
Help: "BlobSidecar download time, in ms.",
Buckets: []float64{100, 300, 1000, 2000, 4000, 8000},
},
)
dataColumnSidecarDownloadCount = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "backfill_data_column_sidecar_downloaded",
Help: "Number of DataColumnSidecar values downloaded from peers for backfill.",
},
[]string{"index", "validity"},
)
dataColumnSidecarDownloadBytes = promauto.NewCounter(
prometheus.CounterOpts{
Name: "backfill_data_column_sidecar_downloaded_bytes",
Help: "DataColumnSidecar bytes downloaded from peers for backfill.",
},
)
dataColumnSidecarDownloadMs = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_columns_download_ms",
Help: "DataColumnSidecars download time, in ms.",
Buckets: []float64{100, 300, 1000, 2000, 4000, 8000},
},
)
dataColumnSidecarVerifyMs = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "backfill_batch_columns_verify_ms",
Help: "DataColumnSidecars verification time, in ms.",
Buckets: []float64{3, 5, 10, 20, 100, 200},
},
)
)
func blobValidationMetrics(_ blocks.ROBlob) error {
backfillBlobsDownloadCount.Inc()
blobSidecarDownloadCount.Inc()
return nil
}
func blockValidationMetrics(interfaces.ReadOnlySignedBeaconBlock) error {
backfillBlocksDownloadCount.Inc()
blockDownloadCount.Inc()
return nil
}
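A minimal sketch of how the millisecond histograms above are typically fed (illustrative only; the download step and timer placement are placeholders, but the Observe pattern matches the worker pool code later in this diff):
start := time.Now()
// ... request and decode a batch of blocks from the assigned peer ...
blockDownloadMs.Observe(float64(time.Since(start).Milliseconds()))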

View File

@@ -2,22 +2,24 @@ package backfill
import (
"context"
"maps"
"math"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
type batchWorkerPool interface {
spawn(ctx context.Context, n int, clock *startup.Clock, a PeerAssigner, v *verifier, cm sync.ContextByteVersions, blobVerifier verification.NewBlobVerifier, bfs *filesystem.BlobStorage)
spawn(ctx context.Context, n int, a PeerAssigner, cfg *workerCfg)
todo(b batch)
complete() (batch, error)
}
@@ -26,47 +28,61 @@ type worker interface {
run(context.Context)
}
type newWorker func(id workerId, in, out chan batch, c *startup.Clock, v *verifier, cm sync.ContextByteVersions, nbv verification.NewBlobVerifier, bfs *filesystem.BlobStorage) worker
type newWorker func(id workerId, in, out chan batch, cfg *workerCfg) worker
func defaultNewWorker(p p2p.P2P) newWorker {
return func(id workerId, in, out chan batch, c *startup.Clock, v *verifier, cm sync.ContextByteVersions, nbv verification.NewBlobVerifier, bfs *filesystem.BlobStorage) worker {
return newP2pWorker(id, p, in, out, c, v, cm, nbv, bfs)
return func(id workerId, in, out chan batch, cfg *workerCfg) worker {
return newP2pWorker(id, p, in, out, cfg)
}
}
// minReqInterval is the minimum amount of time between requests,
// i.e. a value of 1s means we'll make ~1 req/sec per peer.
const minReqInterval = time.Second
type p2pBatchWorkerPool struct {
maxBatches int
newWorker newWorker
toWorkers chan batch
fromWorkers chan batch
toRouter chan batch
fromRouter chan batch
shutdownErr chan error
endSeq []batch
ctx context.Context
cancel func()
maxBatches int
newWorker newWorker
toWorkers chan batch
fromWorkers chan batch
toRouter chan batch
fromRouter chan batch
shutdownErr chan error
endSeq []batch
ctx context.Context
cancel func()
earliest primitives.Slot // earliest is the earliest slot a worker is processing
peerCache *sync.DASPeerCache
p2p p2p.P2P
peerFailLogger *intervalLogger
needs func() das.CurrentNeeds
}
var _ batchWorkerPool = &p2pBatchWorkerPool{}
func newP2PBatchWorkerPool(p p2p.P2P, maxBatches int) *p2pBatchWorkerPool {
func newP2PBatchWorkerPool(p p2p.P2P, maxBatches int, needs func() das.CurrentNeeds) *p2pBatchWorkerPool {
nw := defaultNewWorker(p)
return &p2pBatchWorkerPool{
newWorker: nw,
toRouter: make(chan batch, maxBatches),
fromRouter: make(chan batch, maxBatches),
toWorkers: make(chan batch),
fromWorkers: make(chan batch),
maxBatches: maxBatches,
shutdownErr: make(chan error),
newWorker: nw,
toRouter: make(chan batch, maxBatches),
fromRouter: make(chan batch, maxBatches),
toWorkers: make(chan batch),
fromWorkers: make(chan batch),
maxBatches: maxBatches,
shutdownErr: make(chan error),
peerCache: sync.NewDASPeerCache(p),
p2p: p,
peerFailLogger: newIntervalLogger(log, 5),
earliest: primitives.Slot(math.MaxUint64),
needs: needs,
}
}
func (p *p2pBatchWorkerPool) spawn(ctx context.Context, n int, c *startup.Clock, a PeerAssigner, v *verifier, cm sync.ContextByteVersions, nbv verification.NewBlobVerifier, bfs *filesystem.BlobStorage) {
func (p *p2pBatchWorkerPool) spawn(ctx context.Context, n int, a PeerAssigner, cfg *workerCfg) {
p.ctx, p.cancel = context.WithCancel(ctx)
go p.batchRouter(a)
for i := range n {
go p.newWorker(workerId(i), p.toWorkers, p.fromWorkers, c, v, cm, nbv, bfs).run(p.ctx)
go p.newWorker(workerId(i), p.toWorkers, p.fromWorkers, cfg).run(p.ctx)
}
}
@@ -103,7 +119,6 @@ func (p *p2pBatchWorkerPool) batchRouter(pa PeerAssigner) {
busy := make(map[peer.ID]bool)
todo := make([]batch, 0)
rt := time.NewTicker(time.Second)
earliest := primitives.Slot(math.MaxUint64)
for {
select {
case b := <-p.toRouter:
@@ -115,51 +130,129 @@ func (p *p2pBatchWorkerPool) batchRouter(pa PeerAssigner) {
// This ticker exists to periodically break out of the channel select
// to retry failed assignments.
case b := <-p.fromWorkers:
pid := b.busy
busy[pid] = false
if b.state == batchBlobSync {
todo = append(todo, b)
sortBatchDesc(todo)
} else {
p.fromRouter <- b
if b.state == batchErrFatal {
p.shutdown(b.err)
}
pid := b.assignedPeer
delete(busy, pid)
if b.workComplete() {
p.fromRouter <- b
break
}
todo = append(todo, b)
sortBatchDesc(todo)
case <-p.ctx.Done():
log.WithError(p.ctx.Err()).Info("p2pBatchWorkerPool context canceled, shutting down")
p.shutdown(p.ctx.Err())
return
}
if len(todo) == 0 {
continue
}
// Try to assign as many outstanding batches as possible to peers and feed the assigned batches to workers.
assigned, err := pa.Assign(busy, len(todo))
var err error
todo, err = p.processTodo(todo, pa, busy)
if err != nil {
if errors.Is(err, peers.ErrInsufficientSuitable) {
// Transient error resulting from insufficient number of connected peers. Leave batches in
// queue and get to them whenever the peer situation is resolved.
continue
}
p.shutdown(err)
return
}
for _, pid := range assigned {
if err := todo[0].waitUntilReady(p.ctx); err != nil {
log.WithError(p.ctx.Err()).Info("p2pBatchWorkerPool context canceled, shutting down")
p.shutdown(p.ctx.Err())
return
}
busy[pid] = true
todo[0].busy = pid
p.toWorkers <- todo[0].withPeer(pid)
if todo[0].begin < earliest {
earliest = todo[0].begin
oldestBatch.Set(float64(earliest))
}
todo = todo[1:]
}
}
}
func (p *p2pBatchWorkerPool) processTodo(todo []batch, pa PeerAssigner, busy map[peer.ID]bool) ([]batch, error) {
if len(todo) == 0 {
return todo, nil
}
notBusy, err := pa.Assign(peers.NotBusy(busy))
if err != nil {
if errors.Is(err, peers.ErrInsufficientSuitable) {
// Transient error resulting from insufficient number of connected peers. Leave batches in
// queue and get to them whenever the peer situation is resolved.
return todo, nil
}
return nil, err
}
if len(notBusy) == 0 {
log.Debug("No suitable peers available for batch assignment")
return todo, nil
}
custodied := peerdas.NewColumnIndices()
if highestEpoch(todo) >= params.BeaconConfig().FuluForkEpoch {
custodied, err = currentCustodiedColumns(p.ctx, p.p2p)
if err != nil {
return nil, errors.Wrap(err, "current custodied columns")
}
}
picker, err := p.peerCache.NewPicker(notBusy, custodied, minReqInterval)
if err != nil {
log.WithError(err).Error("Failed to compute column-weighted peer scores")
return todo, nil
}
for i, b := range todo {
needs := p.needs()
if b.expired(needs) {
p.endSeq = append(p.endSeq, b.withState(batchEndSequence))
continue
}
excludePeers := busy
if b.state == batchErrFatal {
// Fatal error detected in batch, shut down the pool.
return nil, b.err
}
if b.state == batchErrRetryable {
// Columns can fail in a partial fashion, so we need to reset
// components that track peer interactions for multiple columns
// to enable partial retries.
b = resetToRetryColumns(b, needs)
if b.state == batchSequenced {
// Transitioning to batchSequenced means we need to download a new block batch because there was
// a problem making or verifying the last block request, so we should try to pick a different peer this time.
excludePeers = busyCopy(busy)
excludePeers[b.blockPeer] = true
b.blockPeer = "" // reset block peer so we can fail back to it next time if there is an issue with assignment.
}
}
pid, cols, err := b.selectPeer(picker, excludePeers)
if err != nil {
p.peerFailLogger.WithField("notBusy", len(notBusy)).WithError(err).WithFields(b.logFields()).Debug("Failed to select peer for batch")
// Return the remaining todo items and allow the outer loop to control when we try again.
return todo[i:], nil
}
busy[pid] = true
b.assignedPeer = pid
b.nextReqCols = cols
backfillBatchTimeWaiting.Observe(float64(time.Since(b.scheduled).Milliseconds()))
p.toWorkers <- b
p.updateEarliest(b.begin)
}
return []batch{}, nil
}
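// busyCopy returns a shallow copy of the busy map so that temporary exclusions
// (such as skipping the previously failing block peer) do not mutate the shared map.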
func busyCopy(busy map[peer.ID]bool) map[peer.ID]bool {
busyCp := make(map[peer.ID]bool, len(busy))
maps.Copy(busyCp, busy)
return busyCp
}
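// highestEpoch returns the highest epoch covered by the given batches, derived
// from each batch's end slot (end-1 is the last slot in the batch).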
func highestEpoch(batches []batch) primitives.Epoch {
highest := primitives.Epoch(0)
for _, b := range batches {
epoch := slots.ToEpoch(b.end - 1)
if epoch > highest {
highest = epoch
}
}
return highest
}
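// updateEarliest records the lowest batch start slot handed to a worker and
// publishes it via the oldestBatch gauge.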
func (p *p2pBatchWorkerPool) updateEarliest(current primitives.Slot) {
if current >= p.earliest {
return
}
p.earliest = current
oldestBatch.Set(float64(p.earliest))
}
func (p *p2pBatchWorkerPool) shutdown(err error) {
p.cancel()
p.shutdownErr <- err

View File

@@ -5,12 +5,15 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
p2ptest "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
@@ -24,7 +27,7 @@ type mockAssigner struct {
// Assign satisfies the PeerAssigner interface so that mockAssigner can be used in tests
// in place of the concrete p2p implementation of PeerAssigner.
func (m mockAssigner) Assign(busy map[peer.ID]bool, n int) ([]peer.ID, error) {
func (m mockAssigner) Assign(filter peers.AssignmentFilter) ([]peer.ID, error) {
if m.err != nil {
return nil, m.err
}
@@ -42,7 +45,8 @@ func TestPoolDetectAllEnded(t *testing.T) {
p2p := p2ptest.NewTestP2P(t)
ctx := t.Context()
ma := &mockAssigner{}
pool := newP2PBatchWorkerPool(p2p, nw)
needs := func() das.CurrentNeeds { return das.CurrentNeeds{Block: das.NeedSpan{Begin: 10, End: 10}} }
pool := newP2PBatchWorkerPool(p2p, nw, needs)
st, err := util.NewBeaconState()
require.NoError(t, err)
keys, err := st.PublicKeys()
@@ -53,8 +57,9 @@ func TestPoolDetectAllEnded(t *testing.T) {
ctxMap, err := sync.ContextByteVersionsForValRoot(bytesutil.ToBytes32(st.GenesisValidatorsRoot()))
require.NoError(t, err)
bfs := filesystem.NewEphemeralBlobStorage(t)
pool.spawn(ctx, nw, startup.NewClock(time.Now(), [32]byte{}), ma, v, ctxMap, mockNewBlobVerifier, bfs)
br := batcher{min: 10, size: 10}
wcfg := &workerCfg{clock: startup.NewClock(time.Now(), [32]byte{}), newVB: mockNewBlobVerifier, verifier: v, ctxMap: ctxMap, blobStore: bfs}
pool.spawn(ctx, nw, ma, wcfg)
br := batcher{size: 10, currentNeeds: needs}
endSeq := br.before(0)
require.Equal(t, batchEndSequence, endSeq.state)
for range nw {
@@ -72,7 +77,7 @@ type mockPool struct {
todoChan chan batch
}
func (m *mockPool) spawn(_ context.Context, _ int, _ *startup.Clock, _ PeerAssigner, _ *verifier, _ sync.ContextByteVersions, _ verification.NewBlobVerifier, _ *filesystem.BlobStorage) {
func (m *mockPool) spawn(_ context.Context, _ int, _ PeerAssigner, _ *workerCfg) {
}
func (m *mockPool) todo(b batch) {
@@ -89,3 +94,443 @@ func (m *mockPool) complete() (batch, error) {
}
var _ batchWorkerPool = &mockPool{}
// TestProcessTodoExpiresOlderBatches tests that processTodo correctly identifies and converts expired batches
func TestProcessTodoExpiresOlderBatches(t *testing.T) {
testCases := []struct {
name string
seqLen int
min primitives.Slot
max primitives.Slot
size primitives.Slot
updateMin primitives.Slot // what we'll set the block need's Begin to
expectedEndSeq int // how many batches should be converted to endSeq
expectedProcessed int // how many batches should be processed (assigned to peers)
}{
{
name: "NoBatchesExpired",
seqLen: 3,
min: 100,
max: 1000,
size: 50,
updateMin: 120, // doesn't expire any batches
expectedEndSeq: 0,
expectedProcessed: 3,
},
{
name: "SomeBatchesExpired",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
updateMin: 175, // expires batches with end <= 175
expectedEndSeq: 1, // [100-150] will be expired
expectedProcessed: 3,
},
{
name: "AllBatchesExpired",
seqLen: 3,
min: 100,
max: 300,
size: 50,
updateMin: 300, // expires all batches
expectedEndSeq: 3,
expectedProcessed: 0,
},
{
name: "MultipleBatchesExpired",
seqLen: 8,
min: 100,
max: 500,
size: 50,
updateMin: 320, // expires multiple batches
expectedEndSeq: 4, // [250-300], [200-250], [150-200], [100-150] expire; [300-350] survives since end=350 > 320
expectedProcessed: 4,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a pool to collect expired batches
pool := &p2pBatchWorkerPool{
endSeq: make([]batch, 0),
}
needs := das.CurrentNeeds{Block: das.NeedSpan{Begin: tc.updateMin, End: tc.max + 1}}
// Create batches with valid slot ranges (descending order)
todo := make([]batch, tc.seqLen)
for i := 0; i < tc.seqLen; i++ {
end := tc.min + primitives.Slot((tc.seqLen-i)*int(tc.size))
begin := end - tc.size
todo[i] = batch{
begin: begin,
end: end,
state: batchInit,
}
}
// Process todo using processTodo logic (simulate without actual peer assignment)
endSeqCount := 0
processedCount := 0
for _, b := range todo {
if b.expired(needs) {
pool.endSeq = append(pool.endSeq, b.withState(batchEndSequence))
endSeqCount++
} else {
processedCount++
}
}
// Verify counts
if endSeqCount != tc.expectedEndSeq {
t.Fatalf("expected %d batches to expire, got %d", tc.expectedEndSeq, endSeqCount)
}
if processedCount != tc.expectedProcessed {
t.Fatalf("expected %d batches to be processed, got %d", tc.expectedProcessed, processedCount)
}
// Verify all expired batches are in batchEndSequence state
for _, b := range pool.endSeq {
if b.state != batchEndSequence {
t.Fatalf("expired batch should be batchEndSequence, got %s", b.state.String())
}
if b.end > tc.updateMin {
t.Fatalf("batch with end=%d should not be in endSeq when min=%d", b.end, tc.updateMin)
}
}
})
}
}
// TestExpirationAfterMoveMinimum tests that batches expire correctly after minimum is increased
func TestExpirationAfterMoveMinimum(t *testing.T) {
testCases := []struct {
name string
seqLen int
min primitives.Slot
max primitives.Slot
size primitives.Slot
firstMin primitives.Slot
secondMin primitives.Slot
expectedAfter1 int // expected expired after first processTodo
expectedAfter2 int // expected expired after second processTodo
}{
{
name: "IncrementalMinimumIncrease",
seqLen: 4,
min: 100,
max: 1000,
size: 50,
firstMin: 150, // batches with end <= 150 expire
secondMin: 200, // additional batches with end <= 200 expire
expectedAfter1: 1, // [100-150] expires
expectedAfter2: 1, // [150-200] also expires on second check (end=200 <= 200)
},
{
name: "LargeMinimumJump",
seqLen: 3,
min: 100,
max: 300,
size: 50,
firstMin: 120, // no expiration
secondMin: 300, // all expire
expectedAfter1: 0,
expectedAfter2: 3,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
pool := &p2pBatchWorkerPool{
endSeq: make([]batch, 0),
}
// Create batches
todo := make([]batch, tc.seqLen)
for i := 0; i < tc.seqLen; i++ {
end := tc.min + primitives.Slot((tc.seqLen-i)*int(tc.size))
begin := end - tc.size
todo[i] = batch{
begin: begin,
end: end,
state: batchInit,
}
}
needs := das.CurrentNeeds{Block: das.NeedSpan{Begin: tc.firstMin, End: tc.max + 1}}
// First processTodo with firstMin
endSeq1 := 0
remaining1 := make([]batch, 0)
for _, b := range todo {
if b.expired(needs) {
pool.endSeq = append(pool.endSeq, b.withState(batchEndSequence))
endSeq1++
} else {
remaining1 = append(remaining1, b)
}
}
if endSeq1 != tc.expectedAfter1 {
t.Fatalf("after first update: expected %d expired, got %d", tc.expectedAfter1, endSeq1)
}
// Second processTodo with secondMin on remaining batches
needs.Block.Begin = tc.secondMin
endSeq2 := 0
for _, b := range remaining1 {
if b.expired(needs) {
pool.endSeq = append(pool.endSeq, b.withState(batchEndSequence))
endSeq2++
}
}
if endSeq2 != tc.expectedAfter2 {
t.Fatalf("after second update: expected %d expired, got %d", tc.expectedAfter2, endSeq2)
}
// Verify total endSeq count
totalExpected := tc.expectedAfter1 + tc.expectedAfter2
if len(pool.endSeq) != totalExpected {
t.Fatalf("expected total %d expired batches, got %d", totalExpected, len(pool.endSeq))
}
})
}
}
// TestTodoInterceptsBatchEndSequence tests that todo() correctly intercepts batchEndSequence batches
func TestTodoInterceptsBatchEndSequence(t *testing.T) {
testCases := []struct {
name string
batches []batch
expectedEndSeq int
expectedToRouter int
}{
{
name: "AllRegularBatches",
batches: []batch{
{state: batchInit},
{state: batchInit},
{state: batchErrRetryable},
},
expectedEndSeq: 0,
expectedToRouter: 3,
},
{
name: "MixedBatches",
batches: []batch{
{state: batchInit},
{state: batchEndSequence},
{state: batchInit},
{state: batchEndSequence},
},
expectedEndSeq: 2,
expectedToRouter: 2,
},
{
name: "AllEndSequence",
batches: []batch{
{state: batchEndSequence},
{state: batchEndSequence},
{state: batchEndSequence},
},
expectedEndSeq: 3,
expectedToRouter: 0,
},
{
name: "EmptyBatches",
batches: []batch{},
expectedEndSeq: 0,
expectedToRouter: 0,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
pool := &p2pBatchWorkerPool{
endSeq: make([]batch, 0),
}
endSeqCount := 0
routerCount := 0
for _, b := range tc.batches {
if b.state == batchEndSequence {
pool.endSeq = append(pool.endSeq, b)
endSeqCount++
} else {
routerCount++
}
}
if endSeqCount != tc.expectedEndSeq {
t.Fatalf("expected %d batchEndSequence, got %d", tc.expectedEndSeq, endSeqCount)
}
if routerCount != tc.expectedToRouter {
t.Fatalf("expected %d batches to router, got %d", tc.expectedToRouter, routerCount)
}
if len(pool.endSeq) != tc.expectedEndSeq {
t.Fatalf("endSeq slice should have %d batches, got %d", tc.expectedEndSeq, len(pool.endSeq))
}
})
}
}
// TestCompleteShutdownCondition tests the complete() method shutdown behavior
func TestCompleteShutdownCondition(t *testing.T) {
testCases := []struct {
name string
maxBatches int
endSeqCount int
shouldShutdown bool
expectedMin primitives.Slot
}{
{
name: "AllEndSeq_Shutdown",
maxBatches: 3,
endSeqCount: 3,
shouldShutdown: true,
expectedMin: 200,
},
{
name: "PartialEndSeq_NoShutdown",
maxBatches: 3,
endSeqCount: 2,
shouldShutdown: false,
expectedMin: 200,
},
{
name: "NoEndSeq_NoShutdown",
maxBatches: 5,
endSeqCount: 0,
shouldShutdown: false,
expectedMin: 150,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
pool := &p2pBatchWorkerPool{
maxBatches: tc.maxBatches,
endSeq: make([]batch, 0),
needs: func() das.CurrentNeeds {
return das.CurrentNeeds{Block: das.NeedSpan{Begin: tc.expectedMin}}
},
}
// Add endSeq batches
for i := 0; i < tc.endSeqCount; i++ {
pool.endSeq = append(pool.endSeq, batch{state: batchEndSequence})
}
// Check shutdown condition (this is what complete() checks)
shouldShutdown := len(pool.endSeq) == pool.maxBatches
if shouldShutdown != tc.shouldShutdown {
t.Fatalf("expected shouldShutdown=%v, got %v", tc.shouldShutdown, shouldShutdown)
}
pool.needs = func() das.CurrentNeeds {
return das.CurrentNeeds{Block: das.NeedSpan{Begin: tc.expectedMin}}
}
if pool.needs().Block.Begin != tc.expectedMin {
t.Fatalf("expected minimum %d, got %d", tc.expectedMin, pool.needs().Block.Begin)
}
})
}
}
// TestExpirationFlowEndToEnd tests the complete flow of batches from batcher through pool
func TestExpirationFlowEndToEnd(t *testing.T) {
testCases := []struct {
name string
seqLen int
min primitives.Slot
max primitives.Slot
size primitives.Slot
moveMinTo primitives.Slot
expired int
description string
}{
{
name: "SingleBatchExpires",
seqLen: 2,
min: 100,
max: 300,
size: 50,
moveMinTo: 150,
expired: 1,
description: "Initial [150-200] and [100-150]; moveMinimum(150) expires [100-150]",
},
/*
{
name: "ProgressiveExpiration",
seqLen: 4,
min: 100,
max: 500,
size: 50,
moveMinTo: 250,
description: "4 batches; moveMinimum(250) expires 2 of them",
},
*/
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Simulate the flow: batcher creates batches → sequence() → pool.todo() → pool.processTodo()
// Step 1: Create sequencer (simulating batcher)
seq := newBatchSequencer(tc.seqLen, tc.max, tc.size, mockCurrentNeedsFunc(tc.min, tc.max+1))
initializeBatchWithSlots(seq.seq, tc.min, tc.size)
for i := range seq.seq {
seq.seq[i].state = batchInit
}
// Step 2: Create pool
pool := &p2pBatchWorkerPool{
endSeq: make([]batch, 0),
}
// Step 3: Initial sequence() call - all batches should be returned (none expired yet)
batches1, err := seq.sequence()
if err != nil {
t.Fatalf("initial sequence() failed: %v", err)
}
if len(batches1) != tc.seqLen {
t.Fatalf("expected %d batches from initial sequence(), got %d", tc.seqLen, len(batches1))
}
// Step 4: Move minimum (simulating epoch advancement)
seq.currentNeeds = mockCurrentNeedsFunc(tc.moveMinTo, tc.max+1)
seq.batcher.currentNeeds = seq.currentNeeds
pool.needs = seq.currentNeeds
for i := range batches1 {
seq.update(batches1[i])
}
// Step 5: Process batches through pool (second sequence call would happen here in real code)
batches2, err := seq.sequence()
if err != nil && err != errMaxBatches {
t.Fatalf("second sequence() failed: %v", err)
}
require.Equal(t, tc.seqLen-tc.expired, len(batches2))
// Step 6: Simulate pool.processTodo() checking for expiration
processedCount := 0
for _, b := range batches2 {
if b.expired(pool.needs()) {
pool.endSeq = append(pool.endSeq, b.withState(batchEndSequence))
} else {
processedCount++
}
}
// Verify: All returned non-endSeq batches should have end > moveMinTo
for _, b := range batches2 {
if b.state != batchEndSequence && b.end <= tc.moveMinTo {
t.Fatalf("batch [%d-%d] should not be returned when min=%d", b.begin, b.end, tc.moveMinTo)
}
}
})
}
}
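The expiration tests rely on a mockCurrentNeedsFunc helper that is not shown in this hunk. Inferred from its call sites, a minimal version could look like the sketch below; the End field and the exact shape of das.NeedSpan beyond Begin are assumptions.

// mockCurrentNeedsSketch builds a needs function over a [begin, end) slot span,
// mirroring how the tests above drive expiration by moving Begin forward.
func mockCurrentNeedsSketch(begin, end primitives.Slot) func() das.CurrentNeeds {
    return func() das.CurrentNeeds {
        return das.CurrentNeeds{Block: das.NeedSpan{Begin: begin, End: end}}
    }
}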


@@ -3,10 +3,11 @@ package backfill
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -25,47 +26,39 @@ type Service struct {
enabled bool // service is disabled by default
clock *startup.Clock
store *Store
syncNeeds das.SyncNeeds
syncNeedsWaiter func() (das.SyncNeeds, error)
ms minimumSlotter
cw startup.ClockWaiter
verifierWaiter InitializerWaiter
newBlobVerifier verification.NewBlobVerifier
nWorkers int
batchSeq *batchSequencer
batchSize uint64
pool batchWorkerPool
verifier *verifier
ctxMap sync.ContextByteVersions
p2p p2p.P2P
pa PeerAssigner
batchImporter batchImporter
blobStore *filesystem.BlobStorage
dcStore *filesystem.DataColumnStorage
initSyncWaiter func() error
complete chan struct{}
workerCfg *workerCfg
fuluStart primitives.Slot
denebStart primitives.Slot
}
var _ runtime.Service = (*Service)(nil)
// PeerAssigner describes a type that provides an Assign method, which can assign the best peer
// to service an RPC blockRequest. The Assign method takes a map of peers that should be excluded,
// to service an RPC blockRequest. The Assign method takes a callback used to filter out peers,
// allowing the caller to avoid making multiple concurrent requests to the same peer.
type PeerAssigner interface {
Assign(busy map[peer.ID]bool, n int) ([]peer.ID, error)
Assign(filter peers.AssignmentFilter) ([]peer.ID, error)
}
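For tests, the simplest implementation of the new signature is a stub that ignores the filter entirely. A minimal sketch in the spirit of the mockAssigner used by the service tests (the type name is hypothetical; a production assigner must honor the filter to avoid assigning busy peers twice):

// staticAssigner returns a fixed set of peers and ignores the assignment filter.
type staticAssigner struct {
    ids []peer.ID
}

func (a staticAssigner) Assign(_ peers.AssignmentFilter) ([]peer.ID, error) {
    return a.ids, nil
}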
type minimumSlotter func(primitives.Slot) primitives.Slot
type batchImporter func(ctx context.Context, current primitives.Slot, b batch, su *Store) (*dbval.BackfillStatus, error)
func defaultBatchImporter(ctx context.Context, current primitives.Slot, b batch, su *Store) (*dbval.BackfillStatus, error) {
status := su.status()
if err := b.ensureParent(bytesutil.ToBytes32(status.LowParentRoot)); err != nil {
return status, err
}
// Import blocks to db and update db state to reflect the newly imported blocks.
// Other parts of the beacon node may use the same StatusUpdater instance
// via the coverage.AvailableBlocker interface to safely determine if a given slot has been backfilled.
return su.fillBack(ctx, current, b.results, b.availabilityStore())
}
// ServiceOption represents a functional option for the backfill service constructor.
type ServiceOption func(*Service) error
@@ -120,66 +113,41 @@ func WithVerifierWaiter(viw InitializerWaiter) ServiceOption {
}
}
// WithMinimumSlot allows the user to specify a different backfill minimum slot than the spec default of current - MIN_EPOCHS_FOR_BLOCK_REQUESTS.
// If this value is greater than current - MIN_EPOCHS_FOR_BLOCK_REQUESTS, it will be ignored with a warning log.
func WithMinimumSlot(s primitives.Slot) ServiceOption {
ms := func(current primitives.Slot) primitives.Slot {
specMin := minimumBackfillSlot(current)
if s < specMin {
return s
}
log.WithField("userSlot", s).WithField("specMinSlot", specMin).
Warn("Ignoring user-specified slot > MIN_EPOCHS_FOR_BLOCK_REQUESTS.")
return specMin
}
func WithSyncNeedsWaiter(f func() (das.SyncNeeds, error)) ServiceOption {
return func(s *Service) error {
s.ms = ms
if f != nil {
s.syncNeedsWaiter = f
}
return nil
}
}
// NewService initializes the backfill Service. Like all implementations of the Service interface,
// the service won't begin its runloop until Start() is called.
func NewService(ctx context.Context, su *Store, bStore *filesystem.BlobStorage, cw startup.ClockWaiter, p p2p.P2P, pa PeerAssigner, opts ...ServiceOption) (*Service, error) {
func NewService(ctx context.Context, su *Store, bStore *filesystem.BlobStorage, dcStore *filesystem.DataColumnStorage, cw startup.ClockWaiter, p p2p.P2P, pa PeerAssigner, opts ...ServiceOption) (*Service, error) {
s := &Service{
ctx: ctx,
store: su,
blobStore: bStore,
cw: cw,
ms: minimumBackfillSlot,
p2p: p,
pa: pa,
batchImporter: defaultBatchImporter,
complete: make(chan struct{}),
ctx: ctx,
store: su,
blobStore: bStore,
dcStore: dcStore,
cw: cw,
p2p: p,
pa: pa,
complete: make(chan struct{}),
fuluStart: slots.SafeEpochStartOrMax(params.BeaconConfig().FuluForkEpoch),
denebStart: slots.SafeEpochStartOrMax(params.BeaconConfig().DenebForkEpoch),
}
s.batchImporter = s.defaultBatchImporter
for _, o := range opts {
if err := o(s); err != nil {
return nil, err
}
}
s.pool = newP2PBatchWorkerPool(p, s.nWorkers)
return s, nil
}
func (s *Service) initVerifier(ctx context.Context) (*verifier, sync.ContextByteVersions, error) {
cps, err := s.store.originState(ctx)
if err != nil {
return nil, nil, err
}
keys, err := cps.PublicKeys()
if err != nil {
return nil, nil, errors.Wrap(err, "unable to retrieve public keys for all validators in the origin state")
}
vr := cps.GenesisValidatorsRoot()
ctxMap, err := sync.ContextByteVersionsForValRoot(bytesutil.ToBytes32(vr))
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to initialize context version map using genesis validator root %#x", vr)
}
v, err := newBackfillVerifier(vr, keys)
return v, ctxMap, err
}
func (s *Service) updateComplete() bool {
b, err := s.pool.complete()
if err != nil {
@@ -187,7 +155,7 @@ func (s *Service) updateComplete() bool {
log.WithField("backfillSlot", b.begin).Info("Backfill is complete")
return true
}
log.WithError(err).Error("Backfill service received unhandled error from worker pool")
log.WithError(err).Error("Service received unhandled error from worker pool")
return true
}
s.batchSeq.update(b)
@@ -195,39 +163,47 @@ func (s *Service) updateComplete() bool {
}
func (s *Service) importBatches(ctx context.Context) {
importable := s.batchSeq.importable()
imported := 0
defer func() {
if imported == 0 {
return
}
backfillBatchesImported.Add(float64(imported))
}()
current := s.clock.CurrentSlot()
for i := range importable {
ib := importable[i]
if len(ib.results) == 0 {
imported := 0
importable := s.batchSeq.importable()
for _, ib := range importable {
if len(ib.blocks) == 0 {
log.WithFields(ib.logFields()).Error("Batch with no results, skipping importer")
s.batchSeq.update(ib.withError(errors.New("batch has no blocks")))
// This batch needs to be retried before we can continue importing subsequent batches.
break
}
_, err := s.batchImporter(ctx, current, ib, s.store)
if err != nil {
log.WithError(err).WithFields(ib.logFields()).Debug("Backfill batch failed to import")
s.downscorePeer(ib.blockPid, "backfillBatchImportError")
s.batchSeq.update(ib.withState(batchErrRetryable))
s.batchSeq.update(ib.withError(err))
// If a batch fails, the subsequent batches are no longer considered importable.
break
}
// Calling update with state=batchImportComplete will advance the batch list.
s.batchSeq.update(ib.withState(batchImportComplete))
imported += 1
// Calling update with state=batchImportComplete will advance the batch list.
log.WithFields(ib.logFields()).WithField("batchesRemaining", s.batchSeq.numTodo()).Debug("Imported batch")
}
nt := s.batchSeq.numTodo()
log.WithField("imported", imported).WithField("importable", len(importable)).
WithField("batchesRemaining", nt).
Info("Backfill batches processed")
batchesRemaining.Set(float64(nt))
if imported > 0 {
batchesImported.Add(float64(imported))
}
}
backfillRemainingBatches.Set(float64(nt))
func (s *Service) defaultBatchImporter(ctx context.Context, current primitives.Slot, b batch, su *Store) (*dbval.BackfillStatus, error) {
status := su.status()
if err := b.ensureParent(bytesutil.ToBytes32(status.LowParentRoot)); err != nil {
return status, err
}
// Import blocks to db and update db state to reflect the newly imported blocks.
// Other parts of the beacon node may use the same StatusUpdater instance
// via the coverage.AvailableBlocker interface to safely determine if a given slot has been backfilled.
checker := newCheckMultiplexer(s.syncNeeds.Currently(), b)
return su.fillBack(ctx, current, b.blocks, checker)
}
func (s *Service) scheduleTodos() {
@@ -240,7 +216,7 @@ func (s *Service) scheduleTodos() {
// and then we'll have the parent_root expected by 90 to ensure it matches the root for 89,
// at which point we know we can process [80..90).
if errors.Is(err, errMaxBatches) {
log.Debug("Backfill batches waiting for descendent batch to complete")
log.Debug("Waiting for descendent batch to complete")
return
}
}
@@ -249,80 +225,83 @@ func (s *Service) scheduleTodos() {
}
}
// fuluOrigin checks whether the origin block (i.e. the checkpoint sync block from which backfill
// syncs backwards) is in an unsupported fork, enabling the backfill service to shut down rather than
// run with buggy behavior.
// This will be removed once DataColumnSidecar support is released.
func fuluOrigin(cfg *params.BeaconChainConfig, status *dbval.BackfillStatus) bool {
originEpoch := slots.ToEpoch(primitives.Slot(status.OriginSlot))
if originEpoch < cfg.FuluForkEpoch {
return false
}
return true
}
// Start begins the runloop of backfill.Service in the current goroutine.
func (s *Service) Start() {
if !s.enabled {
log.Info("Backfill service not enabled")
log.Info("Service not enabled")
s.markComplete()
return
}
ctx, cancel := context.WithCancel(s.ctx)
defer func() {
log.Info("Backfill service is shutting down")
log.Info("Service is shutting down")
cancel()
}()
if s.store.isGenesisSync() {
log.Info("Node synced from genesis, shutting down backfill")
s.markComplete()
return
}
clock, err := s.cw.WaitForClock(ctx)
if err != nil {
log.WithError(err).Error("Backfill service failed to start while waiting for genesis data")
log.WithError(err).Error("Service failed to start while waiting for genesis data")
return
}
s.clock = clock
v, err := s.verifierWaiter.WaitForInitializer(ctx)
s.newBlobVerifier = newBlobVerifierFromInitializer(v)
if s.syncNeedsWaiter == nil {
log.Error("Service missing sync needs waiter; cannot start")
return
}
syncNeeds, err := s.syncNeedsWaiter()
if err != nil {
log.WithError(err).Error("Could not initialize blob verifier in backfill service")
log.WithError(err).Error("Service failed to start while waiting for sync needs")
return
}
s.syncNeeds = syncNeeds
if s.store.isGenesisSync() {
log.Info("Backfill short-circuit; node synced from genesis")
s.markComplete()
return
}
status := s.store.status()
if fuluOrigin(params.BeaconConfig(), status) {
log.WithField("originSlot", s.store.status().OriginSlot).
Warn("backfill disabled; DataColumnSidecar currently unsupported, for updates follow https://github.com/OffchainLabs/prysm/issues/15982")
s.markComplete()
return
}
needs := s.syncNeeds.Currently()
// Exit early if there aren't going to be any batches to backfill.
if primitives.Slot(status.LowSlot) <= s.ms(s.clock.CurrentSlot()) {
log.WithField("minimumRequiredSlot", s.ms(s.clock.CurrentSlot())).
if !needs.Block.At(primitives.Slot(status.LowSlot)) {
log.WithField("minimumSlot", needs.Block.Begin).
WithField("backfillLowestSlot", status.LowSlot).
Info("Exiting backfill service; minimum block retention slot > lowest backfilled block")
s.markComplete()
return
}
s.verifier, s.ctxMap, err = s.initVerifier(ctx)
if err != nil {
log.WithError(err).Error("Unable to initialize backfill verifier")
return
}
if s.initSyncWaiter != nil {
log.Info("Backfill service waiting for initial-sync to reach head before starting")
log.Info("Service waiting for initial-sync to reach head before starting")
if err := s.initSyncWaiter(); err != nil {
log.WithError(err).Error("Error waiting for init-sync to complete")
return
}
}
s.pool.spawn(ctx, s.nWorkers, clock, s.pa, s.verifier, s.ctxMap, s.newBlobVerifier, s.blobStore)
s.batchSeq = newBatchSequencer(s.nWorkers, s.ms(s.clock.CurrentSlot()), primitives.Slot(status.LowSlot), primitives.Slot(s.batchSize))
if s.workerCfg == nil {
s.workerCfg = &workerCfg{
clock: s.clock,
blobStore: s.blobStore,
colStore: s.dcStore,
downscore: s.downscorePeer,
currentNeeds: s.syncNeeds.Currently,
}
if err = initWorkerCfg(ctx, s.workerCfg, s.verifierWaiter, s.store); err != nil {
log.WithError(err).Error("Could not initialize blob verifier in backfill service")
return
}
}
// Allow tests to inject a mock pool.
if s.pool == nil {
s.pool = newP2PBatchWorkerPool(s.p2p, s.nWorkers, s.syncNeeds.Currently)
}
s.pool.spawn(ctx, s.nWorkers, s.pa, s.workerCfg)
s.batchSeq = newBatchSequencer(s.nWorkers, primitives.Slot(status.LowSlot), primitives.Slot(s.batchSize), s.syncNeeds.Currently)
if err = s.initBatches(); err != nil {
log.WithError(err).Error("Non-recoverable error in backfill service")
return
@@ -338,9 +317,6 @@ func (s *Service) Start() {
}
s.importBatches(ctx)
batchesWaiting.Set(float64(s.batchSeq.countWithState(batchImportable)))
if err := s.batchSeq.moveMinimum(s.ms(s.clock.CurrentSlot())); err != nil {
log.WithError(err).Error("Non-recoverable error while adjusting backfill minimum slot")
}
s.scheduleTodos()
}
}
@@ -364,14 +340,16 @@ func (*Service) Status() error {
return nil
}
// minimumBackfillSlot determines the lowest slot that backfill needs to download based on looking back
// MIN_EPOCHS_FOR_BLOCK_REQUESTS from the current slot.
func minimumBackfillSlot(current primitives.Slot) primitives.Slot {
oe := min(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests), slots.MaxSafeEpoch())
offset := slots.UnsafeEpochStart(oe)
// syncEpochOffset subtracts a number of epochs, expressed as slots, from the current slot, with underflow checks.
// It returns slot 1 if the result would be 0 or would underflow. It doesn't return slot 0 because the
// genesis block needs to be specially synced (it doesn't have a valid signature).
func syncEpochOffset(current primitives.Slot, subtract primitives.Epoch) primitives.Slot {
minEpoch := min(subtract, slots.MaxSafeEpoch())
// compute slot offset - offset is a number of slots to go back from current (not an absolute slot).
offset := slots.UnsafeEpochStart(minEpoch)
// Underflow protection: slot 0 is the genesis block, therefore the signature in it is invalid.
// To prevent us from rejecting a batch, we restrict the minimum backfill batch to slot 1.
if offset >= current {
// Slot 0 is the genesis block, therefore the signature in it is invalid.
// To prevent us from rejecting a batch, we restrict the minimum backfill batch to slot 1.
return 1
}
return current - offset
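A quick worked example of the clamping described above, written as a throwaway test; it assumes the mainnet preset where SlotsPerEpoch is 32, so subtracting 5 epochs corresponds to an offset of 160 slots.

func TestSyncEpochOffsetSketch(t *testing.T) {
    // 1000 - 5*32 = 840: the offset fits below the current slot.
    require.Equal(t, primitives.Slot(840), syncEpochOffset(1000, 5))
    // 5*32 >= 100, so the result clamps to slot 1 instead of underflowing
    // toward slot 0 (the genesis block is handled separately).
    require.Equal(t, primitives.Slot(1), syncEpochOffset(100, 5))
}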
@@ -383,9 +361,15 @@ func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.
}
}
func newDataColumnVerifierFromInitializer(ini *verification.Initializer) verification.NewDataColumnsVerifier {
return func(cols []blocks.RODataColumn, reqs []verification.Requirement) verification.DataColumnsVerifier {
return ini.NewDataColumnsVerifier(cols, reqs)
}
}
func (s *Service) markComplete() {
close(s.complete)
log.Info("Backfill service marked as complete")
log.Info("Marked as complete")
}
func (s *Service) WaitForCompletion() error {
@@ -397,7 +381,11 @@ func (s *Service) WaitForCompletion() error {
}
}
func (s *Service) downscorePeer(peerID peer.ID, reason string) {
func (s *Service) downscorePeer(peerID peer.ID, reason string, err error) {
newScore := s.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
logArgs := log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore})
if err != nil {
logArgs = logArgs.WithError(err)
}
logArgs.Debug("Downscore peer")
}
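Given the extra error parameter, a failed-import call site would be expected to pass the error through. A hypothetical shape of the updated call in importBatches, reusing the reason string from the existing two-argument call earlier in this diff:

if _, err := s.batchImporter(ctx, current, ib, s.store); err != nil {
    s.downscorePeer(ib.blockPid, "backfillBatchImportError", err)
}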


@@ -5,17 +5,16 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
p2ptest "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/proto/dbval"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
type mockMinimumSlotter struct {
@@ -40,9 +39,9 @@ func TestServiceInit(t *testing.T) {
su, err := NewUpdater(ctx, db)
require.NoError(t, err)
nWorkers := 5
var batchSize uint64 = 100
var batchSize uint64 = 4
nBatches := nWorkers * 2
var high uint64 = 11235
var high uint64 = 1 + batchSize*uint64(nBatches) // extra 1 because upper bound is exclusive
originRoot := [32]byte{}
origin, err := util.NewBeaconState()
require.NoError(t, err)
@@ -53,14 +52,24 @@ func TestServiceInit(t *testing.T) {
}
remaining := nBatches
cw := startup.NewClockSynchronizer()
require.NoError(t, cw.SetClock(startup.NewClock(time.Now(), [32]byte{})))
clock := startup.NewClock(time.Now(), [32]byte{}, startup.WithSlotAsNow(primitives.Slot(high)+1))
require.NoError(t, cw.SetClock(clock))
pool := &mockPool{todoChan: make(chan batch, nWorkers), finishedChan: make(chan batch, nWorkers)}
p2pt := p2ptest.NewTestP2P(t)
bfs := filesystem.NewEphemeralBlobStorage(t)
srv, err := NewService(ctx, su, bfs, cw, p2pt, &mockAssigner{},
WithBatchSize(batchSize), WithWorkerCount(nWorkers), WithEnableBackfill(true), WithVerifierWaiter(&mockInitalizerWaiter{}))
dcs := filesystem.NewEphemeralDataColumnStorage(t)
snw := func() (das.SyncNeeds, error) {
return das.NewSyncNeeds(
clock.CurrentSlot,
nil,
primitives.Epoch(0),
)
}
srv, err := NewService(ctx, su, bfs, dcs, cw, p2pt, &mockAssigner{},
WithBatchSize(batchSize), WithWorkerCount(nWorkers), WithEnableBackfill(true), WithVerifierWaiter(&mockInitalizerWaiter{}),
WithSyncNeedsWaiter(snw))
require.NoError(t, err)
srv.ms = mockMinimumSlotter{min: primitives.Slot(high - batchSize*uint64(nBatches))}.minimumSlot
srv.pool = pool
srv.batchImporter = func(context.Context, primitives.Slot, batch, *Store) (*dbval.BackfillStatus, error) {
return &dbval.BackfillStatus{}, nil
@@ -74,6 +83,11 @@ func TestServiceInit(t *testing.T) {
if b.state == batchSequenced {
b.state = batchImportable
}
for i := b.begin; i < b.end; i++ {
blk, _ := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, primitives.Slot(i), 0)
b.blocks = append(b.blocks, blk)
}
require.Equal(t, int(batchSize), len(b.blocks))
pool.finishedChan <- b
todo = testReadN(ctx, t, pool.todoChan, 1, todo)
}
@@ -83,18 +97,6 @@ func TestServiceInit(t *testing.T) {
}
}
func TestMinimumBackfillSlot(t *testing.T) {
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currSlot := (oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
minSlot := minimumBackfillSlot(primitives.Slot(currSlot))
require.Equal(t, 100*params.BeaconConfig().SlotsPerEpoch, minSlot)
currSlot = oe.Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
minSlot = minimumBackfillSlot(primitives.Slot(currSlot))
require.Equal(t, primitives.Slot(1), minSlot)
}
func testReadN(ctx context.Context, t *testing.T, c chan batch, n int, into []batch) []batch {
for range n {
select {
@@ -107,66 +109,3 @@ func testReadN(ctx context.Context, t *testing.T, c chan batch, n int, into []ba
}
return into
}
func TestBackfillMinSlotDefault(t *testing.T) {
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
current := primitives.Slot((oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
s := &Service{}
specMin := minimumBackfillSlot(current)
t.Run("equal to specMin", func(t *testing.T) {
opt := WithMinimumSlot(specMin)
require.NoError(t, opt(s))
require.Equal(t, specMin, s.ms(current))
})
t.Run("older than specMin", func(t *testing.T) {
opt := WithMinimumSlot(specMin - 1)
require.NoError(t, opt(s))
// if WithMinimumSlot is older than the spec minimum, we should use it.
require.Equal(t, specMin-1, s.ms(current))
})
t.Run("newer than specMin", func(t *testing.T) {
opt := WithMinimumSlot(specMin + 1)
require.NoError(t, opt(s))
// if WithMinimumSlot is newer than the spec minimum, we should use the spec minimum
require.Equal(t, specMin, s.ms(current))
})
}
func TestFuluOrigin(t *testing.T) {
cfg := params.BeaconConfig()
fuluEpoch := cfg.FuluForkEpoch
fuluSlot, err := slots.EpochStart(fuluEpoch)
require.NoError(t, err)
cases := []struct {
name string
origin primitives.Slot
isFulu bool
}{
{
name: "before fulu",
origin: fuluSlot - 1,
isFulu: false,
},
{
name: "at fulu",
origin: fuluSlot,
isFulu: true,
},
{
name: "after fulu",
origin: fuluSlot + 1,
isFulu: true,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
status := &dbval.BackfillStatus{
OriginSlot: uint64(tc.origin),
}
result := fuluOrigin(cfg, status)
require.Equal(t, tc.isFulu, result)
})
}
}

Some files were not shown because too many files have changed in this diff.