Compare commits

...

15 Commits

Author SHA1 Message Date
james-prysm
edcf9e86c5 changelog and ci complaints 2026-01-08 13:52:38 -06:00
james-prysm
2e8d579d31 adding cache for attestation data so we don't call it multiple times 2026-01-08 13:38:06 -06:00
Potuz
158c09ca8c Add --low-valcount-sweep feature flag for withdrawal sweep bound (#16231)
Gate the withdrawal sweep optimization (using min of validator count and
MaxValidatorsPerWithdrawalsSweep) behind a hidden feature flag that
defaults to false. Enable the flag for spectests to match consensus
spec.

The changes were backported from consensus-specs
[PR 4788](https://github.com/ethereum/consensus-specs/pull/4788).

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 16:14:03 +00:00
Bastin
17245f4fac Ephemeral debug logfile (#16108)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR introduces an ephemeral log file that captures debug logs for 24
hours.

- it captures debug logs regardless of the user-provided `--verbosity` flag
(or its absence).
- it allows a maximum of 250MB per log file.
- it keeps one backup log file in case of size-based rotation (as opposed
to time-based).
- it is enabled by default for beacon and validator nodes.
- the log files live in the `datadir/logs/` directory under the names
`beacon-chain.log` and `validator.log`; backups carry a timestamp in
their names as well.
- the feature can be disabled using the `--disable-ephemeral-log-file`
flag.
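
For illustration, here is a minimal sketch of how such a size-rotated debug log file could be wired, assuming logrus for logging and lumberjack for rotation; the names and wiring are illustrative, not the PR's actual code.

```go
// Illustrative sketch only, assuming logrus + lumberjack; not the PR's code.
package main

import (
	"io"
	"path/filepath"

	"github.com/sirupsen/logrus"
	"gopkg.in/natefinch/lumberjack.v2"
)

// debugFileHook writes every entry it receives to w. Note that logrus only
// delivers entries at or above the logger's baseline level to hooks, which is
// why capturing DEBUG regardless of --verbosity also relies on the per-hook
// level work from the companion PR below.
type debugFileHook struct {
	w   io.Writer
	fmt logrus.Formatter
}

func (h *debugFileHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *debugFileHook) Fire(e *logrus.Entry) error {
	line, err := h.fmt.Format(e)
	if err != nil {
		return err
	}
	_, err = h.w.Write(line)
	return err
}

func addEphemeralDebugLog(datadir, name string) {
	logrus.AddHook(&debugFileHook{
		w: &lumberjack.Logger{
			Filename:   filepath.Join(datadir, "logs", name), // e.g. beacon-chain.log
			MaxSize:    250, // megabytes per file, per the description above
			MaxBackups: 1,   // one backup kept on size-based rotation
			MaxAge:     1,   // days; approximates the 24-hour retention
		},
		fmt: &logrus.TextFormatter{},
	})
}

func main() {
	// The baseline must be at least DebugLevel for hooks to see debug entries;
	// a SetLoggingLevel-style helper (see the next PR) would manage this.
	logrus.SetLevel(logrus.DebugLevel)
	addEphemeralDebugLog("/tmp/datadir", "beacon-chain.log")
	logrus.Debug("captured in the ephemeral log file")
}
```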
2026-01-08 14:16:22 +00:00
Bastin
53b0a574ab Set logging verbosity per writer hook instead of globally (#16106)
**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR sets the logging verbosity level per writer hook (per output:
terminal, log file, etc.) rather than setting a global logrus level, which
limits customizing each output.

It sets the terminal and log file outputs to the same level as the user-provided
`--verbosity` flag, so nothing changes in practice.

It also introduces a `SetLoggingLevel()` to be used instead of
`logrus.SetLevel()`, so that we can set a different baseline level later on
if needed (my next PR will use this).

I'm only making this change in the `beacon-chain` and `validator` apps,
skipping tools like `bootnode` and `client-stats`.
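
A minimal sketch of the per-output idea, assuming logrus; the hook type and names are illustrative, not the PR's actual code.

```go
// Illustrative sketch only, assuming logrus; not the PR's actual code.
package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// levelledWriterHook enforces its own verbosity threshold for one output,
// independent of the thresholds of other outputs.
type levelledWriterHook struct {
	w     io.Writer
	fmt   logrus.Formatter
	level logrus.Level
}

// Levels returns all levels; filtering happens in Fire so each hook can apply
// its own threshold. The logger's baseline (what a SetLoggingLevel-style
// helper would manage) must be at least as verbose as the most verbose hook.
func (h *levelledWriterHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *levelledWriterHook) Fire(e *logrus.Entry) error {
	if e.Level > h.level { // in logrus, a larger Level value is more verbose
		return nil
	}
	line, err := h.fmt.Format(e)
	if err != nil {
		return err
	}
	_, err = h.w.Write(line)
	return err
}

func main() {
	logrus.SetLevel(logrus.DebugLevel) // baseline: most verbose hook wins
	logrus.SetOutput(io.Discard)       // all output goes through hooks
	logrus.AddHook(&levelledWriterHook{w: os.Stdout, fmt: &logrus.TextFormatter{}, level: logrus.InfoLevel})
	logrus.Info("shown by the terminal hook")
	logrus.Debug("filtered out by the terminal hook's own level")
}
```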
2026-01-08 12:19:16 +00:00
terence
c96d188468 gloas: add builders registry and update state fields (#16164)
This PR implements the Gloas builder registry and related beacon state
fields per the spec, including proto/SSZ updates and state-native wiring
for builders, payload availability, pending payments/withdrawals, and
expected withdrawals. This aligns BeaconState with the Gloas container
changes and adds supporting hashing/copy helpers.

Spec ref:
https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/beacon-chain.md
2026-01-07 21:51:56 +00:00
Preston Van Loon
0fcb922702 Changelog for v7.1.2 (#16225)
**What type of PR is this?**

Documentation

2026-01-07 20:06:51 +00:00
satushh
3646a77bfb Use copy() instead of byte-by-byte loop (#16222)

**What type of PR is this?**

Optimisation

**What does this PR do? Why is it needed?**

Use `copy()` instead of a byte-by-byte loop, which isn't required.
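
For context, a minimal before/after sketch of the pattern (illustrative, not the exact code the PR touches):

```go
// Illustrative before/after of the pattern this PR applies.
package main

import "fmt"

// copyLoop is the pattern being replaced: a manual byte-by-byte copy.
func copyLoop(src []byte) []byte {
	dst := make([]byte, len(src))
	for i := 0; i < len(src); i++ {
		dst[i] = src[i]
	}
	return dst
}

// copyBuiltin is the replacement: the built-in copy() is equivalent and lets
// the runtime use an optimized memmove.
func copyBuiltin(src []byte) []byte {
	dst := make([]byte, len(src))
	copy(dst, src)
	return dst
}

func main() {
	src := []byte("hello")
	fmt.Println(string(copyLoop(src)), string(copyBuiltin(src))) // hello hello
}
```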

2026-01-07 17:15:52 +00:00
Potuz
1541558261 update spectests (#16219) 2026-01-07 16:48:50 +00:00
james-prysm
1a6252ade4 changing isHealthy to isReady (#16167)

**What type of PR is this?**

 Bug fix

**What does this PR do? Why is it needed?**

Validator fallbacks shouldn't use nodes that are still syncing, since many of
the tasks validators perform require the node to be fully synced.

- A 206 (or any other status code) is interpreted as "not ready"
- A 200 is interpreted as "ready"
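
A minimal sketch of the readiness semantics above, assuming the standard beacon API `/eth/v1/node/health` endpoint (which returns 206 while syncing); the function and wiring are illustrative, not the PR's actual code.

```go
// Illustrative sketch only; not the PR's actual code.
package main

import (
	"context"
	"fmt"
	"net/http"
)

// isReady treats the node as usable for validator duties only when the
// health endpoint returns 200. A 206 (syncing) or any other status code,
// as well as any transport error, counts as "not ready".
func isReady(ctx context.Context, client *http.Client, nodeURL string) bool {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, nodeURL+"/eth/v1/node/health", nil)
	if err != nil {
		return false
	}
	resp, err := client.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(isReady(context.Background(), http.DefaultClient, "http://localhost:3500"))
}
```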

**Which issue(s) does this PR fix?**

Continuation of https://github.com/OffchainLabs/prysm/pull/15401

2026-01-06 18:58:12 +00:00
Preston Van Loon
27c009e7ff Tests: Add require.Eventually and fix a few test flakes (#16217)
**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

`require.Eventually` is a better way to wait for a test condition than
`time.Sleep`.
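
The pattern, sketched with testify's `require.Eventually` (same shape as the helper this PR adds to Prysm's `require` package); the async work here is a stand-in for things like pool saves:

```go
// Illustrative sketch of the pattern; the real call sites are in the diffs below.
package example

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestEventuallyFills(t *testing.T) {
	var count atomic.Int64
	go func() {
		time.Sleep(30 * time.Millisecond) // simulate async work (e.g. a pool save)
		count.Store(1)
	}()
	// Before: time.Sleep(100 * time.Millisecond) followed by a one-shot assert.
	// After: poll the condition every tick until it holds or the timeout hits.
	require.Eventually(t, func() bool {
		return count.Load() == 1
	}, 2*time.Second, 10*time.Millisecond, "expected the async work to complete")
}
```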

2026-01-06 18:20:27 +00:00
Jonny Rhea
ffad861e2c WithMaxExportBatchSize is specified twice (#16211)

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

It's a simple fix. I was looking at how Prysm uses OpenTelemetry and
noticed it.

2026-01-06 16:22:20 +00:00
Manu NALEPA
792fa22099 Add the --disable-get-blobs-v2 flag and fixes #16171 (#16155)
**What type of PR is this?**
Feature + Bugfix

**What does this PR do? Why is it needed?**
Starting with Fusaka, the beacon node can pull blobs from the execution layer
with the `engine_getBlobsV2` API.
This greatly reduces the burden on the beacon node. However, the beacon
node should be able to work 100% correctly without this execution layer
help.

This PR introduces the `--disable-get-blobs-v2` flag to simulate a 0%
success rate of this engine API.

This PR also fixes:
- https://github.com/OffchainLabs/prysm/issues/16171

Please review commit by commit, reading the commit messages.

**How to test it:**
For the `--disable-get-blobs-v2` part:

Run the beacon node with the `--disable-get-blobs-v2` flag in DEBUG
mode.
For every block with commitments, the following log should be displayed:
```
[2025-12-19 15:36:25.49] DEBUG sync: No data column sidecars constructed from the execution client ...
```

And the following log should **never** be displayed:
```
[2026-01-05 10:19:00.55] DEBUG sync: Constructed data column sidecars from the execution client count=...
```

For the #16171 part:
- None of the ERROR logs shown in the linked issue should be displayed.

2026-01-05 22:29:15 +00:00
Preston Van Loon
c5b3d3531c Added changelog for v7.1.1 (#16161)

**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

The v7.1.1 release is coming today.

2026-01-05 22:26:13 +00:00
Aarsh Shah
cc4510bb77 p2p: batch publish data column sidecars (#16183)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

This PR takes @MarcoPolo 's PR at
https://github.com/OffchainLabs/prysm/pull/16130 to completion with
tests.

The description from his PR:

> A relatively small change to optimize network send order.
>
> Without this, network writes tend to prioritize sending data for one
column to all peers before sending data for later columns (e.g., for two
columns and 4 peers per column it would send A,A,A,A,B,B,B,B). With
batch publishing we can change the write order to round-robin across
columns (e.g., A,B,A,B,A,B,A,B).
>
> In cases where the process is sending at a rate over the network limit,
this approach allows at least some copies of the column to propagate
through the network. In early simulations with a 50 Mbps bandwidth limit
for the publisher, this improved dissemination by ~20-30%.

See the issue for some more context.
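
For intuition, here is a small, self-contained Go sketch (illustrative only, not the libp2p or Prysm implementation) of the two send orders described above:

```go
// Illustrative sketch of the two send orders: not the libp2p or Prysm code.
package main

import "fmt"

// sequentialOrder sends every copy of one column before moving to the next:
// for columns A,B and 4 peers each -> A,A,A,A,B,B,B,B.
func sequentialOrder(columns []string, peersPerColumn int) []string {
	var order []string
	for _, c := range columns {
		for i := 0; i < peersPerColumn; i++ {
			order = append(order, c)
		}
	}
	return order
}

// roundRobinOrder interleaves columns so each pass sends one copy of every
// column: for columns A,B and 4 peers each -> A,B,A,B,A,B,A,B.
func roundRobinOrder(columns []string, peersPerColumn int) []string {
	var order []string
	for i := 0; i < peersPerColumn; i++ {
		order = append(order, columns...)
	}
	return order
}

func main() {
	fmt.Println(sequentialOrder([]string{"A", "B"}, 4)) // [A A A A B B B B]
	fmt.Println(roundRobinOrder([]string{"A", "B"}, 4)) // [A B A B A B A B]
}
```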

**Which issue(s) does this PR fix?**

Fixes https://github.com/OffchainLabs/prysm/issues/16129


---------

Co-authored-by: Marco Munizaga <git@marcopolo.io>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-01-05 22:02:06 +00:00
122 changed files with 2982 additions and 1417 deletions

.gitignore vendored
View File

@@ -44,3 +44,6 @@ tmp
# spectest coverage reports
report.txt
# execution client data
execution/

View File

@@ -4,6 +4,66 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.2](https://github.com/prysmaticlabs/prysm/compare/v7.1.1...v7.1.2) - 2026-01-07
Happy new year! This patch release is very small. The main improvement is better management of pending attestation aggregation via [PR 16153](https://github.com/OffchainLabs/prysm/pull/16153).
### Added
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16169)
### Changed
- The `/eth/v2/beacon/pool/attestations` and `/eth/v1/beacon/pool/sync_committees` endpoints now return a 503 error if the node is still syncing. The REST API also now broadcasts immediately, matching the gRPC behavior. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16152)
- `validateDataColumn`: Remove error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16157)
- Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16153)
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16151)
- Do not process slots and copy states for next epoch proposers after Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16168)
## [v7.1.1](https://github.com/prysmaticlabs/prysm/compare/v7.1.0...v7.1.1) - 2025-12-18
Release highlights:
- Fixed potential deadlock scenario in data column batch verification
- Improved processing and metrics for cells and proofs
We are aware of [an issue](https://github.com/OffchainLabs/prysm/issues/16160) where Prysm struggles to sync from an out-of-sync state. We will have another release before the end of the year to address this issue.
Our postmortem document for the December 4th mainnet issue has been published on our [documentation site](https://prysm.offchainlabs.com/docs/misc/mainnet-postmortems/).
### Added
- Track the dependent root of the latest finalized checkpoint in forkchoice. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16103)
- Proposal design document for implementing default graffiti. Currently graffiti is empty by default; the idea is to have it take a form like GE168dPR63af. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15983)
- Add support for detecting and logging per address reachability via libp2p AutoNAT v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16100)
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16134)
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
### Changed
- Optimise migratetocold by avoiding a brute-force for loop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16101)
- The e2e sync committee evaluator now skips the first slot after startup. We already skip the fork epoch for these checks; this additional skip applies only at startup, because Altair always starts from epoch 0 and validators need time to warm up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16145)
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
### Removed
- Unnecessary copy is removed from Eth1DataHasEnoughSupport. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16118)
### Fixed
- Incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16084)
- Fixed possible race when validating two attestations at the same time. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16105)
- Fix missing return after version header check in SubmitAttesterSlashingsV2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16126)
- Fix deadlock in data column gossip KZG batch verification when a caller times out preventing result delivery. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16141)
- Fixed replay state issue in rest api caused by attester and sync committee duties endpoints. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16136)
- Do not error when committee has been computed correctly but updating the cache failed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16142)
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16144)
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.

View File

@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0"
consensus_spec_version = "v1.7.0-alpha.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-54hTaUNF9nLg+hRr3oHoq0yjZpW3MNiiUUuCQu6Rajk=",
"minimal": "sha256-1JHIGg3gVMjvcGYRHR5cwdDgOvX47oR/MWp6gyAeZfA=",
"mainnet": "sha256-292h3W2Ffts0YExgDTyxYe9Os7R0bZIXuAaMO8P6kl4=",
"general": "sha256-b+rJOuVqq+Dy53quPcNYcQwPFoMU7Wp7tdUVe7n0g8w=",
"minimal": "sha256-qxRIxtjPxVsVCY90WsBJKhk0027XDSmhjnRvRN14V1c=",
"mainnet": "sha256-NsuOQG3LzeiEE1TrWuvQ6vu6BboHv7h7f/RTS0pWkCs=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-VzBgrEokvYSMIIXVnSA5XS9I3m9oxpvToQGxC1N5lzw=",
integrity = "sha256-hwNdUBgdBrkk6pWIpNYbzbwswUuOu6AMD2exN8uv+QQ=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -17,6 +17,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpbv1 "github.com/OffchainLabs/prysm/v7/proto/eth/v1"
@@ -130,12 +131,10 @@ func TestService_ReceiveBlock(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
// Hacky sleep, should use a better way to be able to resolve the race
// between event being sent out and processed.
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
},
},
{
@@ -222,10 +221,10 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
})
wg.Wait()
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
// Verify fork choice has processed the block. (Genesis block and the new block)
assert.Equal(t, 2, s.cfg.ForkChoiceStore.NodeCount())
}
@@ -265,10 +264,10 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
},
},
}
@@ -512,8 +511,9 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
@@ -552,8 +552,9 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
@@ -596,13 +597,13 @@ func TestProcessLightClientBootstrap(t *testing.T) {
s.executePostFinalizationTasks(s.ctx, l.AttestedState)
// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)
// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
require.NoError(t, err)
require.NotNil(t, b)
// Wait for the light client bootstrap to be saved (runs in goroutine)
var b interfaces.LightClientBootstrap
require.Eventually(t, func() bool {
var err error
b, err = s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
return err == nil && b != nil
}, 5*time.Second, 50*time.Millisecond, "Light client bootstrap was not saved within timeout")
btst, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.FinalizedState.Slot(), l.FinalizedState, l.FinalizedBlock)
require.NoError(t, err)

View File

@@ -26,6 +26,7 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -212,7 +213,12 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
if features.Get().LowValcountSweep {
bound := min(uint64(st.NumValidators()), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex += primitives.ValidatorIndex(bound)
} else {
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
}
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1

View File

@@ -40,6 +40,7 @@ go_library(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -538,6 +539,10 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}
if flags.Get().DisableGetBlobsV2 {
return []*pb.BlobAndProofV2{}, nil
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)

View File

@@ -75,7 +75,6 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))
timeForGoroutinesToFinish := 20 * time.Microsecond
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
@@ -87,8 +86,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -102,7 +102,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -117,8 +117,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
@@ -132,7 +133,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
@@ -146,8 +147,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
@@ -161,7 +163,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -175,7 +177,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}

View File

@@ -22,6 +22,7 @@ import (
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"
@@ -357,58 +358,67 @@ func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []bl
return nil
}
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network, after ensuring
// there is at least one peer in each needed subnet. If not, it will attempt to find one before broadcasting.
// It returns when all broadcasts are complete, or the context is cancelled (whichever comes first).
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network.
// For sidecars with available peers, it uses batch publishing.
// For sidecars without peers, it finds peers first and then publishes individually.
// Both paths run in parallel. It returns when all broadcasts are complete, or the context is cancelled.
func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [fieldparams.VersionLength]byte, sidecars []blocks.VerifiedRODataColumn) {
type rootAndIndex struct {
root [fieldparams.RootLength]byte
index uint64
}
var (
wg sync.WaitGroup
timings sync.Map
)
var timings sync.Map
logLevel := logrus.GetLevel()
slotPerRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, 1)
topicFunc := func(sidecar blocks.VerifiedRODataColumn) (topic string, wrappedSubIdx uint64, subnet uint64) {
subnet = peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
topic = dataColumnSubnetToTopic(subnet, forkDigest)
wrappedSubIdx = subnet + dataColumnSubnetVal
return
}
sidecarsWithPeers := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
var sidecarsWithoutPeers []blocks.VerifiedRODataColumn
// Categorize sidecars by peer availability.
for _, sidecar := range sidecars {
slotPerRoot[sidecar.BlockRoot()] = sidecar.Slot()
wg.Go(func() {
// Add tracing to the function.
ctx, span := trace.StartSpan(s.ctx, "p2p.broadcastDataColumnSidecars")
topic, wrappedSubIdx, _ := topicFunc(sidecar)
// Check if we have a peer for this subnet (use RLock for read-only check).
mu := s.subnetLocker(wrappedSubIdx)
mu.RLock()
hasPeer := s.hasPeerWithSubnet(topic)
mu.RUnlock()
if hasPeer {
sidecarsWithPeers = append(sidecarsWithPeers, sidecar)
continue
}
sidecarsWithoutPeers = append(sidecarsWithoutPeers, sidecar)
}
var batchWg, individualWg sync.WaitGroup
// Batch publish sidecars that already have peers
var messageBatch pubsub.MessageBatch
for _, sidecar := range sidecarsWithPeers {
batchWg.Go(func() {
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
ctx := trace.NewContext(s.ctx, span)
defer span.End()
// Compute the subnet for this data column sidecar.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
topic, _, _ := topicFunc(sidecar)
// Build the topic corresponding to subnet column subnet and this fork digest.
topic := dataColumnSubnetToTopic(subnet, forkDigest)
// Compute the wrapped subnet index.
wrappedSubIdx := subnet + dataColumnSubnetVal
// Find peers if needed.
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
if err := s.batchObject(ctx, &messageBatch, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot find peers if needed")
log.WithError(err).Error("Cannot batch data column sidecar")
return
}
// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot broadcast data column sidecar")
return
}
// Increase the number of successful broadcasts.
dataColumnSidecarBroadcasts.Inc()
// Record the timing for log purposes.
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
@@ -416,8 +426,50 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
})
}
// Wait for all broadcasts to finish.
wg.Wait()
// For sidecars without peers, find peers and publish individually (no batching).
for _, sidecar := range sidecarsWithoutPeers {
individualWg.Go(func() {
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
ctx := trace.NewContext(s.ctx, span)
defer span.End()
topic, wrappedSubIdx, subnet := topicFunc(sidecar)
// Find peers for this sidecar's subnet.
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot find peers if needed")
return
}
// Publish individually (not batched) since we just found peers.
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot broadcast data column sidecar")
return
}
dataColumnSidecarBroadcasts.Inc()
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
}
})
}
// Wait for batch to be populated, then publish.
batchWg.Wait()
if len(sidecarsWithPeers) > 0 {
if err := s.pubsub.PublishBatch(&messageBatch); err != nil {
log.WithError(err).Error("Cannot publish batch for data column sidecars")
} else {
dataColumnSidecarBroadcasts.Add(float64(len(sidecarsWithPeers)))
}
}
// Wait for all individual publishes to complete.
individualWg.Wait()
// The rest of this function is only for debug logging purposes.
if logLevel < logrus.DebugLevel {
@@ -504,28 +556,68 @@ func (s *Service) findPeersIfNeeded(
return nil
}
// method to broadcast messages to other peers in our gossip mesh.
// encodeGossipMessage encodes an object for gossip transmission.
// It returns the encoded bytes and the full topic with protocol suffix.
func (s *Service) encodeGossipMessage(obj ssz.Marshaler, topic string) ([]byte, string, error) {
buf := new(bytes.Buffer)
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
return nil, "", fmt.Errorf("could not encode message: %w", err)
}
return buf.Bytes(), topic + s.Encoding().ProtocolSuffix(), nil
}
// broadcastObject broadcasts a message to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
defer span.End()
span.SetAttributes(trace.StringAttribute("topic", topic))
buf := new(bytes.Buffer)
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
err := errors.Wrap(err, "could not encode message")
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
if span.IsRecording() {
id := hash.FastSum64(buf.Bytes())
messageLen := int64(buf.Len())
id := hash.FastSum64(data)
messageLen := int64(len(data))
// lint:ignore uintcast -- It's safe to do this for tracing.
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}
if err := s.PublishToTopic(ctx, topic+s.Encoding().ProtocolSuffix(), buf.Bytes()); err != nil {
if err := s.PublishToTopic(ctx, fullTopic, data); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
return nil
}
// batchObject adds an object to a message batch for a future broadcast.
// The caller MUST publish the batch after all messages have been added.
func (s *Service) batchObject(ctx context.Context, batch *pubsub.MessageBatch, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.batchObject")
defer span.End()
span.SetAttributes(trace.StringAttribute("topic", topic))
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
if span.IsRecording() {
id := hash.FastSum64(data)
messageLen := int64(len(data))
// lint:ignore uintcast -- It's safe to do this for tracing.
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}
if err := s.addToBatch(ctx, batch, fullTopic, data); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err

View File

@@ -32,6 +32,8 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"google.golang.org/protobuf/proto"
)
@@ -70,7 +72,10 @@ func TestService_Broadcast(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -184,7 +189,10 @@ func TestService_BroadcastAttestation(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -373,7 +381,15 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
_, err = tpHandle.Subscribe()
require.NoError(t, err)
time.Sleep(500 * time.Millisecond) // libp2p fails without this delay...
// This test specifically tests discovery-based peer finding, which requires
// time for nodes to discover each other. Using a fixed sleep here is intentional
// as we're testing the discovery timing behavior.
time.Sleep(500 * time.Millisecond)
// Verify mesh establishment after discovery
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0 && len(p2.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
nodePeers := p.pubsub.ListPeers(topic)
nodePeers2 := p2.pubsub.ListPeers(topic)
@@ -442,7 +458,10 @@ func TestService_BroadcastSyncCommittee(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -519,7 +538,10 @@ func TestService_BroadcastBlob(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -582,7 +604,10 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -658,7 +683,10 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -769,8 +797,10 @@ func TestService_BroadcastDataColumn(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// libp2p fails without this delay
time.Sleep(50 * time.Millisecond)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(service.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// Broadcast to peers and wait.
err = service.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{verifiedRoSidecar})
@@ -787,3 +817,190 @@ func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, service.Encoding().DecodeGossip(msg.Data, &result))
require.DeepEqual(t, &result, verifiedRoSidecar)
}
type topicInvoked struct {
topic string
pid peer.ID
}
// rpcOrderTracer is a RawTracer implementation that captures the order of SendRPC calls.
// It records the topics of messages sent via pubsub to verify round-robin ordering.
type rpcOrderTracer struct {
mu sync.Mutex
invoked []*topicInvoked
byTopic map[string][]peer.ID
}
func (t *rpcOrderTracer) SendRPC(rpc *pubsub.RPC, pid peer.ID) {
t.mu.Lock()
defer t.mu.Unlock()
for _, msg := range rpc.GetPublish() {
invoked := &topicInvoked{topic: msg.GetTopic(), pid: pid}
t.invoked = append(t.invoked, invoked)
t.byTopic[invoked.topic] = append(t.byTopic[invoked.topic], invoked.pid)
}
}
func newRpcOrderTracer() *rpcOrderTracer {
return &rpcOrderTracer{byTopic: make(map[string][]peer.ID)}
}
func (t *rpcOrderTracer) getTopics() []string {
t.mu.Lock()
defer t.mu.Unlock()
result := make([]string, len(t.invoked))
for i := range t.invoked {
result[i] = t.invoked[i].topic
}
return result
}
// No-op implementations for other RawTracer methods.
func (*rpcOrderTracer) AddPeer(peer.ID, protocol.ID) {}
func (*rpcOrderTracer) RemovePeer(peer.ID) {}
func (*rpcOrderTracer) Join(string) {}
func (*rpcOrderTracer) Leave(string) {}
func (*rpcOrderTracer) Graft(peer.ID, string) {}
func (*rpcOrderTracer) Prune(peer.ID, string) {}
func (*rpcOrderTracer) ValidateMessage(*pubsub.Message) {}
func (*rpcOrderTracer) DeliverMessage(*pubsub.Message) {}
func (*rpcOrderTracer) RejectMessage(*pubsub.Message, string) {}
func (*rpcOrderTracer) DuplicateMessage(*pubsub.Message) {}
func (*rpcOrderTracer) ThrottlePeer(peer.ID) {}
func (*rpcOrderTracer) RecvRPC(*pubsub.RPC) {}
func (*rpcOrderTracer) DropRPC(*pubsub.RPC, peer.ID) {}
func (*rpcOrderTracer) UndeliverableMessage(*pubsub.Message) {}
// TestService_BroadcastDataColumnRoundRobin verifies that when broadcasting multiple
// data column sidecars, messages are interleaved in round-robin order by column index
// rather than sending all copies of one column before the next.
//
// Without batch publishing: A,A,A,A,B,B,B,B (all peers for column A, then all for column B)
// With batch publishing: A,B,A,B,A,B,A,B (interleaved by message ID)
func TestService_BroadcastDataColumnRoundRobin(t *testing.T) {
const (
port = 2100
topicFormat = DataColumnSubnetTopicFormat
)
ctx := t.Context()
// Load the KZG trust setup.
err := kzg.Start()
require.NoError(t, err)
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
// Create a tracer to capture the order of SendRPC calls.
tracer := newRpcOrderTracer()
// Create the publisher node with the tracer injected.
p1 := p2ptest.NewTestP2PWithPubsubOptions(t, []pubsub.Option{pubsub.WithRawTracer(tracer)})
// Create subscriber peers.
expectedPeers := []*p2ptest.TestP2P{
p2ptest.NewTestP2P(t),
p2ptest.NewTestP2P(t),
}
// Connect peers.
for _, p := range expectedPeers {
p1.Connect(p)
}
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")
// Create a host for discovery.
_, pkey, ipAddr := createHost(t, port)
// Create a shared DB for the service.
db := testDB.SetupDB(t)
// Create and close the custody info channel immediately since custodyInfo is already set.
custodyInfoSet := make(chan struct{})
close(custodyInfoSet)
service := &Service{
ctx: ctx,
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{DB: db},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(ctx, &peers.StatusConfig{ScorerParams: &scorers.Config{}}),
custodyInfo: &custodyInfo{},
custodyInfoSet: custodyInfoSet,
}
// Create a listener for discovery.
listener, err := service.startDiscoveryV5(ipAddr, pkey)
require.NoError(t, err)
service.dv5Listener = listener
digest, err := service.currentForkDigest()
require.NoError(t, err)
// Create multiple data column sidecars with different column indices.
// Use indices that map to different subnets: 0, 32, 64 (assuming 128 columns and 64 subnets).
columnIndices := []uint64{0, 32, 64}
params := make([]util.DataColumnParam, len(columnIndices))
for i, idx := range columnIndices {
params[i] = util.DataColumnParam{Index: idx}
}
_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
expectedTopics := make(map[string]bool)
// Subscribe peers to the relevant topics.
for _, idx := range columnIndices {
subnet := peerdas.ComputeSubnetForDataColumnSidecar(idx)
topic := fmt.Sprintf(topicFormat, digest, subnet) + service.Encoding().ProtocolSuffix()
for _, p := range expectedPeers {
_, err = p.SubscribeToTopic(topic)
require.NoError(t, err)
}
expectedTopics[topic] = true
}
// libp2p needs some time to establish mesh connections.
time.Sleep(100 * time.Millisecond)
// Broadcast all sidecars.
err = service.BroadcastDataColumnSidecars(ctx, verifiedRoSidecars)
require.NoError(t, err)
// Give some time for messages to be sent.
time.Sleep(100 * time.Millisecond)
topics := tracer.getTopics()
if len(topics) == 0 {
t.Fatal("Expected at least one message for each topic to be sent to each peer")
}
unseen := make(map[string]bool)
for k := range expectedTopics {
unseen[k] = true
}
// Verify round-robin invariant: before all message IDs are seen, no message ID may be repeated.
// In round-robin order, we should see each topic once before any topic repeats.
for _, topic := range topics {
if !expectedTopics[topic] {
continue
}
if !unseen[topic] {
t.Errorf("Topic %s repeated before all topics were seen once. This violates round-robin ordering.", topic)
}
delete(unseen, topic)
if len(unseen) == 0 {
break // all have been seen
}
}
require.Equal(t, 0, len(unseen))
// Verify that we actually saw all expected topics.
for topic := range expectedTopics {
require.Equal(t, len(expectedPeers), len(tracer.byTopic[topic]))
}
}

View File

@@ -482,12 +482,12 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
s.Start()
<-exitRoutine
}()
time.Sleep(50 * time.Millisecond)
time.Sleep(50 * time.Millisecond) // Wait for service initialization
var vr [32]byte
require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
time.Sleep(4 * time.Second)
ps := s.host.Network().Peers()
assert.Equal(t, 5, len(ps), "Not all peers added to peerstore")
require.Eventually(t, func() bool {
return len(s.host.Network().Peers()) == 5
}, 10*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
require.NoError(t, s.Stop())
exitRoutine <- true
}

View File

@@ -99,6 +99,27 @@ func (s *Service) PublishToTopic(ctx context.Context, topic string, data []byte,
}
}
// addToBatch joins (if necessary) a topic and adds the message to a message batch.
func (s *Service) addToBatch(ctx context.Context, batch *pubsub.MessageBatch, topic string, data []byte, opts ...pubsub.PubOpt) error {
topicHandle, err := s.JoinTopic(topic)
if err != nil {
return fmt.Errorf("joining topic: %w", err)
}
// Wait for at least 1 peer to be available to receive the published message.
for {
if flags.Get().MinimumSyncPeers == 0 || len(topicHandle.ListPeers()) > 0 {
return topicHandle.AddToBatch(ctx, batch, data, opts...)
}
select {
case <-ctx.Done():
return errors.Wrapf(ctx.Err(), "unable to find requisite number of peers for topic %s, 0 peers found to publish to", topic)
case <-time.After(100 * time.Millisecond):
// reenter the for loop after 100ms
}
}
}
// SubscribeToTopic joins (if necessary) and subscribes to PubSub topic.
func (s *Service) SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub.Subscription, error) {
s.awaitStateInitialized() // Genesis time and genesis validators root are required to subscribe.

View File

@@ -80,8 +80,9 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
}()
var vr [32]byte
require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
time.Sleep(time.Second * 2)
assert.Equal(t, true, s.started, "Expected service to be started")
require.Eventually(t, func() bool {
return s.started
}, 5*time.Second, 100*time.Millisecond, "Expected service to be started")
s.Start()
require.LogsContain(t, hook, "Attempted to start p2p service when it was already started")
require.NoError(t, s.Stop())
@@ -260,17 +261,9 @@ func TestListenForNewNodes(t *testing.T) {
err = cs.SetClock(startup.NewClock(genesisTime, gvr))
require.NoError(t, err, "Could not set clock in service")
actualPeerCount := len(s.host.Network().Peers())
for range 40 {
if actualPeerCount == peerCount {
break
}
time.Sleep(100 * time.Millisecond)
actualPeerCount = len(s.host.Network().Peers())
}
assert.Equal(t, peerCount, actualPeerCount, "Not all peers added to peerstore")
require.Eventually(t, func() bool {
return len(s.host.Network().Peers()) == peerCount
}, 5*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
err = s.Stop()
require.NoError(t, err, "Failed to stop service")

View File

@@ -70,6 +70,11 @@ type TestP2P struct {
// NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
return NewTestP2PWithPubsubOptions(t, nil, userOptions...)
}
// NewTestP2PWithPubsubOptions initializes a new p2p test service with custom pubsub options.
func NewTestP2PWithPubsubOptions(t *testing.T, pubsubOpts []pubsub.Option, userOptions ...config.Option) *TestP2P {
ctx := context.Background()
options := []config.Option{
libp2p.ResourceManager(&network.NullResourceManager{}),
@@ -84,10 +89,14 @@ func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
h, err := libp2p.New(options...)
require.NoError(t, err)
ps, err := pubsub.NewFloodSub(ctx, h,
defaultPubsubOpts := []pubsub.Option{
pubsub.WithMessageSigning(false),
pubsub.WithStrictSignatureVerification(false),
)
}
allPubsubOpts := append(defaultPubsubOpts, pubsubOpts...)
ps, err := pubsub.NewGossipSub(ctx, h, allPubsubOpts...)
if err != nil {
t.Fatal(err)
}

View File

@@ -657,8 +657,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 1
}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
@@ -677,8 +678,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 2
}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
})
t.Run("phase0 att post electra", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
@@ -798,8 +800,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 1
}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
@@ -818,8 +821,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 2
}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
@@ -1375,9 +1379,9 @@ func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.SubmitBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
time.Sleep(100 * time.Millisecond) // Delay to let the routine start
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages))
require.Eventually(t, func() bool {
return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages) == numValidators
}, time.Second, 10*time.Millisecond, "Broadcast should be called with all messages")
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
require.Equal(t, len(poolChanges), len(signedChanges))
@@ -1591,10 +1595,10 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
s.SubmitBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
require.StringContains(t, "One or more messages failed validation", writer.Body.String())
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)
require.Eventually(t, func() bool {
return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages)+1 == numValidators
}, time.Second, 10*time.Millisecond, "Broadcast should be called with expected messages")
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
require.Equal(t, len(poolChanges)+1, len(signedChanges))

View File

@@ -102,6 +102,7 @@ go_test(
"getters_test.go",
"getters_validator_test.go",
"getters_withdrawal_test.go",
"gloas_test.go",
"hasher_test.go",
"mvslice_fuzz_test.go",
"proofs_test.go",
@@ -156,6 +157,7 @@ go_test(
"@com_github_google_go_cmp//cmp:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_google_protobuf//testing/protocmp:go_default_library",
],

View File

@@ -72,11 +72,13 @@ type BeaconState struct {
// Gloas fields
latestExecutionPayloadBid *ethpb.ExecutionPayloadBid
builders []*ethpb.Builder
nextWithdrawalBuilderIndex primitives.BuilderIndex
executionPayloadAvailability []byte
builderPendingPayments []*ethpb.BuilderPendingPayment
builderPendingWithdrawals []*ethpb.BuilderPendingWithdrawal
latestBlockHash []byte
latestWithdrawalsRoot []byte
payloadExpectedWithdrawals []*enginev1.Withdrawal
id uint64
lock sync.RWMutex
@@ -134,11 +136,13 @@ type beaconStateMarshalable struct {
PendingConsolidations []*ethpb.PendingConsolidation `json:"pending_consolidations" yaml:"pending_consolidations"`
ProposerLookahead []primitives.ValidatorIndex `json:"proposer_look_ahead" yaml:"proposer_look_ahead"`
LatestExecutionPayloadBid *ethpb.ExecutionPayloadBid `json:"latest_execution_payload_bid" yaml:"latest_execution_payload_bid"`
Builders []*ethpb.Builder `json:"builders" yaml:"builders"`
NextWithdrawalBuilderIndex primitives.BuilderIndex `json:"next_withdrawal_builder_index" yaml:"next_withdrawal_builder_index"`
ExecutionPayloadAvailability []byte `json:"execution_payload_availability" yaml:"execution_payload_availability"`
BuilderPendingPayments []*ethpb.BuilderPendingPayment `json:"builder_pending_payments" yaml:"builder_pending_payments"`
BuilderPendingWithdrawals []*ethpb.BuilderPendingWithdrawal `json:"builder_pending_withdrawals" yaml:"builder_pending_withdrawals"`
LatestBlockHash []byte `json:"latest_block_hash" yaml:"latest_block_hash"`
LatestWithdrawalsRoot []byte `json:"latest_withdrawals_root" yaml:"latest_withdrawals_root"`
PayloadExpectedWithdrawals []*enginev1.Withdrawal `json:"payload_expected_withdrawals" yaml:"payload_expected_withdrawals"`
}
func (b *BeaconState) MarshalJSON() ([]byte, error) {
@@ -194,11 +198,13 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
PendingConsolidations: b.pendingConsolidations,
ProposerLookahead: b.proposerLookahead,
LatestExecutionPayloadBid: b.latestExecutionPayloadBid,
Builders: b.builders,
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
ExecutionPayloadAvailability: b.executionPayloadAvailability,
BuilderPendingPayments: b.builderPendingPayments,
BuilderPendingWithdrawals: b.builderPendingWithdrawals,
LatestBlockHash: b.latestBlockHash,
LatestWithdrawalsRoot: b.latestWithdrawalsRoot,
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawals,
}
return json.Marshal(marshalable)
}

View File

@@ -56,9 +56,7 @@ func (r StateRoots) MarshalSSZTo(dst []byte) ([]byte, error) {
func (r StateRoots) MarshalSSZ() ([]byte, error) {
marshalled := make([]byte, fieldparams.StateRootsLength*32)
for i, r32 := range r {
for j, rr := range r32 {
marshalled[i*32+j] = rr
}
copy(marshalled[i*32:(i+1)*32], r32[:])
}
return marshalled, nil
}
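
The rewrite this hunk lands is easy to sanity-check with a micro-benchmark sketch (illustrative, not from the repo); note the compiler may already lower simple range-copy loops to an optimized move, so `copy` is mainly the clearer, guaranteed-fast form.

```go
package example

import "testing"

var src [32]byte

// BenchmarkByteLoop copies a 32-byte root one byte at a time,
// mirroring the removed loop.
func BenchmarkByteLoop(b *testing.B) {
	dst := make([]byte, 32)
	for i := 0; i < b.N; i++ {
		for j, rr := range src {
			dst[j] = rr
		}
	}
}

// BenchmarkCopy uses the builtin copy, as the new code does.
func BenchmarkCopy(b *testing.B) {
	dst := make([]byte, 32)
	for i := 0; i < b.N; i++ {
		copy(dst, src[:])
	}
}
```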

View File

@@ -305,10 +305,12 @@ func (b *BeaconState) ToProtoUnsafe() any {
PendingConsolidations: b.pendingConsolidations,
ProposerLookahead: lookahead,
ExecutionPayloadAvailability: b.executionPayloadAvailability,
Builders: b.builders,
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
BuilderPendingPayments: b.builderPendingPayments,
BuilderPendingWithdrawals: b.builderPendingWithdrawals,
LatestBlockHash: b.latestBlockHash,
LatestWithdrawalsRoot: b.latestWithdrawalsRoot,
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawals,
}
default:
return nil
@@ -607,10 +609,12 @@ func (b *BeaconState) ToProto() any {
PendingConsolidations: b.pendingConsolidationsVal(),
ProposerLookahead: lookahead,
ExecutionPayloadAvailability: b.executionPayloadAvailabilityVal(),
Builders: b.buildersVal(),
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
BuilderPendingPayments: b.builderPendingPaymentsVal(),
BuilderPendingWithdrawals: b.builderPendingWithdrawalsVal(),
LatestBlockHash: b.latestBlockHashVal(),
LatestWithdrawalsRoot: b.latestWithdrawalsRootVal(),
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawalsVal(),
}
default:
return nil

View File

@@ -1,6 +1,7 @@
package state_native
import (
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
@@ -47,6 +48,22 @@ func (b *BeaconState) builderPendingWithdrawalsVal() []*ethpb.BuilderPendingWith
return withdrawals
}
// buildersVal returns a copy of the builders registry.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) buildersVal() []*ethpb.Builder {
if b.builders == nil {
return nil
}
builders := make([]*ethpb.Builder, len(b.builders))
for i := range builders {
builder := b.builders[i]
builders[i] = ethpb.CopyBuilder(builder)
}
return builders
}
// latestBlockHashVal returns a copy of the latest block hash.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) latestBlockHashVal() []byte {
@@ -60,15 +77,17 @@ func (b *BeaconState) latestBlockHashVal() []byte {
return hash
}
// latestWithdrawalsRootVal returns a copy of the latest withdrawals root.
// payloadExpectedWithdrawalsVal returns a copy of the payload expected withdrawals.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) latestWithdrawalsRootVal() []byte {
if b.latestWithdrawalsRoot == nil {
func (b *BeaconState) payloadExpectedWithdrawalsVal() []*enginev1.Withdrawal {
if b.payloadExpectedWithdrawals == nil {
return nil
}
root := make([]byte, len(b.latestWithdrawalsRoot))
copy(root, b.latestWithdrawalsRoot)
withdrawals := make([]*enginev1.Withdrawal, len(b.payloadExpectedWithdrawals))
for i, withdrawal := range b.payloadExpectedWithdrawals {
withdrawals[i] = withdrawal.Copy()
}
return root
return withdrawals
}

View File

@@ -0,0 +1,43 @@
package state_native
import (
"testing"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/stretchr/testify/require"
)
func TestBuildersVal(t *testing.T) {
st := &BeaconState{}
require.Nil(t, st.buildersVal())
st.builders = []*ethpb.Builder{
{Pubkey: []byte{0x01}, ExecutionAddress: []byte{0x02}, Balance: 3},
nil,
}
got := st.buildersVal()
require.Len(t, got, 2)
require.Nil(t, got[1])
require.Equal(t, st.builders[0], got[0])
require.NotSame(t, st.builders[0], got[0])
}
func TestPayloadExpectedWithdrawalsVal(t *testing.T) {
st := &BeaconState{}
require.Nil(t, st.payloadExpectedWithdrawalsVal())
st.payloadExpectedWithdrawals = []*enginev1.Withdrawal{
{Index: 1, ValidatorIndex: 2, Address: []byte{0x03}, Amount: 4},
nil,
}
got := st.payloadExpectedWithdrawalsVal()
require.Len(t, got, 2)
require.Nil(t, got[1])
require.Equal(t, st.payloadExpectedWithdrawals[0], got[0])
require.NotSame(t, st.payloadExpectedWithdrawals[0], got[0])
}

View File

@@ -342,6 +342,15 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
}
if state.version >= version.Gloas {
buildersRoot, err := stateutil.BuildersRoot(state.builders)
if err != nil {
return nil, errors.Wrap(err, "could not compute builders merkleization")
}
fieldRoots[types.Builders.RealPosition()] = buildersRoot[:]
nextWithdrawalBuilderIndexRoot := ssz.Uint64Root(uint64(state.nextWithdrawalBuilderIndex))
fieldRoots[types.NextWithdrawalBuilderIndex.RealPosition()] = nextWithdrawalBuilderIndexRoot[:]
epaRoot, err := stateutil.ExecutionPayloadAvailabilityRoot(state.executionPayloadAvailability)
if err != nil {
return nil, errors.Wrap(err, "could not compute execution payload availability merkleization")
@@ -366,8 +375,12 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
lbhRoot := bytesutil.ToBytes32(state.latestBlockHash)
fieldRoots[types.LatestBlockHash.RealPosition()] = lbhRoot[:]
lwrRoot := bytesutil.ToBytes32(state.latestWithdrawalsRoot)
fieldRoots[types.LatestWithdrawalsRoot.RealPosition()] = lwrRoot[:]
expectedWithdrawalsRoot, err := ssz.WithdrawalSliceRoot(state.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not compute payload expected withdrawals root")
}
fieldRoots[types.PayloadExpectedWithdrawals.RealPosition()] = expectedWithdrawalsRoot[:]
}
return fieldRoots, nil
}

View File

@@ -120,11 +120,13 @@ var (
)
gloasAdditionalFields = []types.FieldIndex{
types.Builders,
types.NextWithdrawalBuilderIndex,
types.ExecutionPayloadAvailability,
types.BuilderPendingPayments,
types.BuilderPendingWithdrawals,
types.LatestBlockHash,
types.LatestWithdrawalsRoot,
types.PayloadExpectedWithdrawals,
}
gloasFields = slices.Concat(
@@ -145,7 +147,7 @@ const (
denebSharedFieldRefCount = 7
electraSharedFieldRefCount = 10
fuluSharedFieldRefCount = 11
gloasSharedFieldRefCount = 12 // Adds PendingBuilderWithdrawal to the shared-ref set and LatestExecutionPayloadHeader is removed
gloasSharedFieldRefCount = 13 // Adds Builders + BuilderPendingWithdrawals to the shared-ref set and LatestExecutionPayloadHeader is removed
)
// InitializeFromProtoPhase0 the beacon state from a protobuf representation.
@@ -817,11 +819,13 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
pendingConsolidations: st.PendingConsolidations,
proposerLookahead: proposerLookahead,
latestExecutionPayloadBid: st.LatestExecutionPayloadBid,
builders: st.Builders,
nextWithdrawalBuilderIndex: st.NextWithdrawalBuilderIndex,
executionPayloadAvailability: st.ExecutionPayloadAvailability,
builderPendingPayments: st.BuilderPendingPayments,
builderPendingWithdrawals: st.BuilderPendingWithdrawals,
latestBlockHash: st.LatestBlockHash,
latestWithdrawalsRoot: st.LatestWithdrawalsRoot,
payloadExpectedWithdrawals: st.PayloadExpectedWithdrawals,
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
@@ -861,6 +865,7 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
b.sharedFieldReferences[types.Builders] = stateutil.NewRef(1) // New in Gloas.
b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1) // New in Gloas.
state.Count.Inc()
@@ -932,6 +937,7 @@ func (b *BeaconState) Copy() state.BeaconState {
pendingDeposits: b.pendingDeposits,
pendingPartialWithdrawals: b.pendingPartialWithdrawals,
pendingConsolidations: b.pendingConsolidations,
builders: b.builders,
// Everything else, too small to be concerned about, constant size.
genesisValidatorsRoot: b.genesisValidatorsRoot,
@@ -948,11 +954,12 @@ func (b *BeaconState) Copy() state.BeaconState {
latestExecutionPayloadHeaderCapella: b.latestExecutionPayloadHeaderCapella.Copy(),
latestExecutionPayloadHeaderDeneb: b.latestExecutionPayloadHeaderDeneb.Copy(),
latestExecutionPayloadBid: b.latestExecutionPayloadBid.Copy(),
nextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
executionPayloadAvailability: b.executionPayloadAvailabilityVal(),
builderPendingPayments: b.builderPendingPaymentsVal(),
builderPendingWithdrawals: b.builderPendingWithdrawalsVal(),
latestBlockHash: b.latestBlockHashVal(),
latestWithdrawalsRoot: b.latestWithdrawalsRootVal(),
payloadExpectedWithdrawals: b.payloadExpectedWithdrawalsVal(),
id: types.Enumerator.Inc(),
@@ -1328,6 +1335,10 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return stateutil.ProposerLookaheadRoot(b.proposerLookahead)
case types.LatestExecutionPayloadBid:
return b.latestExecutionPayloadBid.HashTreeRoot()
case types.Builders:
return stateutil.BuildersRoot(b.builders)
case types.NextWithdrawalBuilderIndex:
return ssz.Uint64Root(uint64(b.nextWithdrawalBuilderIndex)), nil
case types.ExecutionPayloadAvailability:
return stateutil.ExecutionPayloadAvailabilityRoot(b.executionPayloadAvailability)
@@ -1337,8 +1348,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return stateutil.BuilderPendingWithdrawalsRoot(b.builderPendingWithdrawals)
case types.LatestBlockHash:
return bytesutil.ToBytes32(b.latestBlockHash), nil
case types.LatestWithdrawalsRoot:
return bytesutil.ToBytes32(b.latestWithdrawalsRoot), nil
case types.PayloadExpectedWithdrawals:
return ssz.WithdrawalSliceRoot(b.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
}
return [32]byte{}, errors.New("invalid field index provided")
}

View File

@@ -116,6 +116,10 @@ func (f FieldIndex) String() string {
return "pendingConsolidations"
case ProposerLookahead:
return "proposerLookahead"
case Builders:
return "builders"
case NextWithdrawalBuilderIndex:
return "nextWithdrawalBuilderIndex"
case ExecutionPayloadAvailability:
return "executionPayloadAvailability"
case BuilderPendingPayments:
@@ -124,8 +128,8 @@ func (f FieldIndex) String() string {
return "builderPendingWithdrawals"
case LatestBlockHash:
return "latestBlockHash"
case LatestWithdrawalsRoot:
return "latestWithdrawalsRoot"
case PayloadExpectedWithdrawals:
return "payloadExpectedWithdrawals"
default:
return fmt.Sprintf("unknown field index number: %d", f)
}
@@ -211,16 +215,20 @@ func (f FieldIndex) RealPosition() int {
return 36
case ProposerLookahead:
return 37
case ExecutionPayloadAvailability:
case Builders:
return 38
case BuilderPendingPayments:
case NextWithdrawalBuilderIndex:
return 39
case BuilderPendingWithdrawals:
case ExecutionPayloadAvailability:
return 40
case LatestBlockHash:
case BuilderPendingPayments:
return 41
case LatestWithdrawalsRoot:
case BuilderPendingWithdrawals:
return 42
case LatestBlockHash:
return 43
case PayloadExpectedWithdrawals:
return 44
default:
return -1
}
@@ -287,11 +295,13 @@ const (
PendingPartialWithdrawals // Electra: EIP-7251
PendingConsolidations // Electra: EIP-7251
ProposerLookahead // Fulu: EIP-7917
Builders // Gloas: EIP-7732
NextWithdrawalBuilderIndex // Gloas: EIP-7732
ExecutionPayloadAvailability // Gloas: EIP-7732
BuilderPendingPayments // Gloas: EIP-7732
BuilderPendingWithdrawals // Gloas: EIP-7732
LatestBlockHash // Gloas: EIP-7732
LatestWithdrawalsRoot // Gloas: EIP-7732
PayloadExpectedWithdrawals // Gloas: EIP-7732
)
// Enumerator keeps track of the number of states created since the node's start.

View File

@@ -6,6 +6,7 @@ go_library(
"block_header_root.go",
"builder_pending_payments_root.go",
"builder_pending_withdrawals_root.go",
"builders_root.go",
"eth1_root.go",
"execution_payload_availability_root.go",
"field_root_attestation.go",

View File

@@ -0,0 +1,12 @@
package stateutil
import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
// BuildersRoot computes the SSZ root of a slice of Builder.
func BuildersRoot(slice []*ethpb.Builder) ([32]byte, error) {
return ssz.SliceRoot(slice, uint64(fieldparams.BuilderRegistryLimit))
}

View File

@@ -70,7 +70,6 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
topic := "/eth2/%x/beacon_block"
go r.startDiscoveryAndSubscriptions()
time.Sleep(100 * time.Millisecond)
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
@@ -83,9 +82,11 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
msg.Block.ParentRoot = util.Random32Bytes(t)
msg.Signature = sk.Sign([]byte("data")).Marshal()
p2p.ReceivePubSub(topic, msg)
// wait for chainstart to be sent
time.Sleep(400 * time.Millisecond)
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
// Wait for chainstart event to be processed
require.Eventually(t, func() bool {
return r.chainStarted.IsSet()
}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event.")
}
func TestSyncHandlers_WaitForChainStart(t *testing.T) {
@@ -217,20 +218,18 @@ func TestSyncService_StopCleanly(t *testing.T) {
p2p.Digest, err = r.currentForkDigest()
require.NoError(t, err)
// wait for chainstart to be sent
time.Sleep(2 * time.Second)
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
require.NotEqual(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.NotEqual(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
// Wait for chainstart and topics to be registered
require.Eventually(t, func() bool {
return r.chainStarted.IsSet() && len(r.cfg.p2p.PubSub().GetTopics()) > 0 && len(r.cfg.p2p.Host().Mux().Protocols()) > 0
}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event or topics not registered.")
// Both pubsub and rpc topics should be unsubscribed.
require.NoError(t, r.Stop())
// Sleep to allow pubsub topics to be deregistered.
time.Sleep(1 * time.Second)
require.Equal(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.Equal(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
// Wait for pubsub topics to be deregistered.
require.Eventually(t, func() bool {
return len(r.cfg.p2p.PubSub().GetTopics()) == 0 && len(r.cfg.p2p.Host().Mux().Protocols()) == 0
}, 5*time.Second, 50*time.Millisecond, "Pubsub topics were not deregistered")
}
func TestService_Stop_SendsGoodbyeMessages(t *testing.T) {

View File

@@ -48,7 +48,14 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
return errors.Wrap(err, "new ro block with root")
}
go s.processSidecarsFromExecutionFromBlock(ctx, roBlock)
go func() {
if err := s.processSidecarsFromExecutionFromBlock(ctx, roBlock); err != nil {
log.WithError(err).WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"slot": block.Slot(),
}).Error("Failed to process sidecars from execution from block")
}
}()
if err := s.cfg.chain.ReceiveBlock(ctx, signed, root, nil); err != nil {
if blockchain.IsInvalidBlock(err) {
@@ -69,28 +76,37 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
}
return err
}
if err := s.processPendingAttsForBlock(ctx, root); err != nil {
return errors.Wrap(err, "process pending atts for block")
}
return nil
}
// processSidecarsFromExecutionFromBlock retrieves (if available) sidecar data from the execution client,
// builds the corresponding sidecars, saves them to storage, and broadcasts them over P2P if necessary.
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) {
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) error {
if roBlock.Version() >= version.Fulu {
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromBlock(roBlock)); err != nil {
log.WithError(err).Error("Failed to process data column sidecars from execution")
return
// Do not log if the context was cancelled on purpose.
// (Still log other context errors such as deadlines exceeded).
if errors.Is(err, context.Canceled) {
return nil
}
return errors.Wrap(err, "process data column sidecars from execution")
}
return
return nil
}
if roBlock.Version() >= version.Deneb {
s.processBlobSidecarsFromExecution(ctx, roBlock)
return
return nil
}
return nil
}
// processBlobSidecarsFromExecution retrieves (if available) blob sidecar data from the execution client,
@@ -168,7 +184,6 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
key := fmt.Sprintf("%#x", source.Root())
if _, err, _ := s.columnSidecarsExecSingleFlight.Do(key, func() (any, error) {
const delay = 250 * time.Millisecond
secondsPerHalfSlot := time.Duration(params.BeaconConfig().SecondsPerSlot/2) * time.Second
commitments, err := source.Commitments()
if err != nil {
@@ -186,9 +201,6 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, errors.Wrap(err, "column indices to sample")
}
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
defer cancel()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
@@ -209,6 +221,11 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, nil
}
// Return if the context is done.
if ctx.Err() != nil {
return nil, ctx.Err()
}
if iteration == 0 {
dataColumnsRecoveredFromELAttempts.Inc()
}
@@ -220,20 +237,10 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
}
// No sidecars are retrieved from the EL, retry later
constructedSidecarCount = uint64(len(constructedSidecars))
if constructedSidecarCount == 0 {
if ctx.Err() != nil {
return nil, ctx.Err()
}
time.Sleep(delay)
continue
}
dataColumnsRecoveredFromELTotal.Inc()
constructedCount := uint64(len(constructedSidecars))
// Boundary check.
if constructedSidecarCount != fieldparams.NumberOfColumns {
if constructedSidecarCount > 0 && constructedSidecarCount != fieldparams.NumberOfColumns {
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
}
@@ -242,14 +249,24 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
}
log.WithFields(logrus.Fields{
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
if constructedCount > 0 {
dataColumnsRecoveredFromELTotal.Inc()
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
"proposerIndex": source.ProposerIndex(),
"iteration": iteration,
"type": source.Type(),
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
return nil, nil
return nil, nil
}
// Wait before retrying.
time.Sleep(delay)
}
}); err != nil {
return err
@@ -284,6 +301,11 @@ func (s *Service) broadcastAndReceiveUnseenDataColumnSidecars(
unseenIndices[sidecar.Index] = true
}
// Exit early if there is nothing to broadcast or receive.
if len(unseenSidecars) == 0 {
return nil, nil
}
// Broadcast all the data column sidecars we reconstructed but did not see via gossip (non blocking).
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, unseenSidecars); err != nil {
return nil, errors.Wrap(err, "broadcast data column sidecars")

View File

@@ -194,7 +194,8 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
},
seenBlobCache: lruwrpr.New(1),
}
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
err := s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
require.NoError(t, err)
require.Equal(t, tt.expectedBlobCount, len(chainService.Blobs))
})
}
@@ -293,7 +294,8 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
roBlock, err := blocks.NewROBlock(sb)
require.NoError(t, err)
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
err = s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
require.NoError(t, err)
require.Equal(t, tt.expectedDataColumnCount, len(chainService.DataColumns))
})
}

View File

@@ -25,12 +25,12 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
}
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
return errors.Wrap(err, "receive data column sidecar")
return wrapDataColumnError(sidecar, "receive data column sidecar", err)
}
wg.Go(func() error {
if err := s.processDataColumnSidecarsFromReconstruction(ctx, sidecar); err != nil {
return errors.Wrap(err, "process data column sidecars from reconstruction")
return wrapDataColumnError(sidecar, "process data column sidecars from reconstruction", err)
}
return nil
@@ -38,7 +38,13 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
wg.Go(func() error {
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromSidecar(sidecar)); err != nil {
return errors.Wrap(err, "process data column sidecars from execution")
if errors.Is(err, context.Canceled) {
// Do not log if the context was cancelled on purpose.
// (Still log other context errors such as deadlines exceeded).
return nil
}
return wrapDataColumnError(sidecar, "process data column sidecars from execution", err)
}
return nil
@@ -110,3 +116,7 @@ func (s *Service) allDataColumnSubnets(_ primitives.Slot) map[uint64]bool {
return allSubnets
}
func wrapDataColumnError(sidecar blocks.VerifiedRODataColumn, message string, err error) error {
return fmt.Errorf("%s - slot %d, root %s: %w", message, sidecar.SignedBlockHeader.Header.Slot, fmt.Sprintf("%#x", sidecar.BlockRoot()), err)
}

View File

@@ -614,11 +614,10 @@ func TestVerifyIndexInCommittee_SeenAggregatorEpoch(t *testing.T) {
},
}
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers.
if res, err := r.validateAggregateAndProof(t.Context(), "", msg); res == pubsub.ValidationAccept {
_ = err
t.Fatal("Validated status is true")
}
require.Eventually(t, func() bool {
res, _ := r.validateAggregateAndProof(t.Context(), "", msg)
return res != pubsub.ValidationAccept
}, time.Second, 10*time.Millisecond, "Expected validation to reject duplicate aggregate")
}
func TestValidateAggregateAndProof_BadBlock(t *testing.T) {

View File

@@ -992,7 +992,6 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
// Mark the proposer/slot as seen
r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers
// Prepare and validate the second message (clone)
buf := new(bytes.Buffer)
@@ -1010,9 +1009,11 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
}
// Since this is not an equivocation (same signature), it should be ignored
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
assert.NoError(t, err)
assert.Equal(t, pubsub.ValidationIgnore, res, "block with same signature should be ignored")
// Wait for the cached value to propagate through buffers
require.Eventually(t, func() bool {
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
return err == nil && res == pubsub.ValidationIgnore
}, time.Second, 10*time.Millisecond, "block with same signature should be ignored")
// Verify no slashings were created
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")

View File

@@ -1,3 +0,0 @@
## Fixed
- Fix missing return after version header check in SubmitAttesterSlashingsV2.

View File

@@ -1,3 +0,0 @@
## Fixed
- incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)

View File

@@ -1,2 +0,0 @@
### Ignored
- Reverts AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100 as the libp2p upgrade fails inter-op testing.

View File

@@ -1,3 +0,0 @@
### Fixed
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.

View File

@@ -0,0 +1,5 @@
### Added
- Added an ephemeral debug logfile for beacon and validator nodes that captures debug-level logs for 24 hours. It
also keeps 1 backup in case of size-based rotation. The logfiles are stored in `datadir/logs/`. This feature is
enabled by default and can be disabled by setting the `--disable-ephemeral-log-file` flag.

View File

@@ -1,3 +0,0 @@
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p.

View File

@@ -0,0 +1,4 @@
### Changed
- Moved verbosity settings to be configurable per hook, rather than just globally. This allows us to control the
verbosity of each output independently.
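
As a rough sketch of the mechanism (hypothetical type name; the PR's real type is `WriterHook` with an `AllowedLevels` field), a per-output level-filtering logrus hook looks like this:

```go
package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// levelWriterHook formats entries and writes them to a single writer, but
// only for the levels it was configured with, so every output can have its
// own verbosity. Sketch only; not the PR's actual implementation.
type levelWriterHook struct {
	formatter     logrus.Formatter
	writer        io.Writer
	allowedLevels []logrus.Level
}

func (h *levelWriterHook) Levels() []logrus.Level { return h.allowedLevels }

func (h *levelWriterHook) Fire(entry *logrus.Entry) error {
	line, err := h.formatter.Format(entry)
	if err != nil {
		return err
	}
	_, err = h.writer.Write(line)
	return err
}

func main() {
	logrus.SetOutput(io.Discard)         // hooks own the outputs
	logrus.SetLevel(logrus.TraceLevel)   // keep the global level permissive
	// logrus.AllLevels runs PanicLevel(0)..TraceLevel(6), so this slice
	// keeps panic..info for the terminal while another hook (e.g. a debug
	// log file) could still receive debug entries.
	logrus.AddHook(&levelWriterHook{
		formatter:     &logrus.TextFormatter{},
		writer:        os.Stderr,
		allowedLevels: logrus.AllLevels[:logrus.InfoLevel+1],
	})
	logrus.Info("goes to stderr")
	logrus.Debug("dropped by this hook")
}
```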

View File

@@ -1,3 +0,0 @@
### Added
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices.

View File

@@ -1,3 +0,0 @@
### Fixed
- Fix deadlock in data column gossip KZG batch verification when a caller times out preventing result delivery.

View File

@@ -1,3 +0,0 @@
### Changed
- The /eth/v2/beacon/pool/attestations and /eth/v1/beacon/pool/sync_committees endpoints now return a 503 error if the node is still syncing; the REST API also now broadcasts immediately, mirroring the gRPC behavior.

View File

@@ -0,0 +1,3 @@
### Changed
- Post-Electra, attestation data is now requested once per slot and served from a cache for subsequent requests.
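
A minimal sketch of that caching pattern, with all names hypothetical rather than Prysm's actual implementation:

```go
package example

import "sync"

// AttestationData is a stand-in for the real response type.
type AttestationData struct {
	Slot uint64
	// other fields elided
}

// attDataCache remembers the attestation data fetched for a slot and serves
// later requests for the same slot from the cache. Holding the lock across
// fetch also means concurrent callers trigger at most one upstream request.
type attDataCache struct {
	mu   sync.Mutex
	slot uint64
	data *AttestationData
}

// Get returns cached data for slot, or invokes fetch once and caches it.
func (c *attDataCache) Get(slot uint64, fetch func(uint64) (*AttestationData, error)) (*AttestationData, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.data != nil && c.slot == slot {
		return c.data, nil
	}
	data, err := fetch(slot)
	if err != nil {
		return nil, err
	}
	c.slot, c.data = slot, data
	return data, nil
}
```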

View File

@@ -1,3 +0,0 @@
### Fixed
- Fixed a state replay issue in the REST API caused by the attester and sync committee duties endpoints.

View File

@@ -0,0 +1,3 @@
### Changed
- Changed the validator client's interpretation of /eth/v1/node/health from an IsHealthy check to IsReady; a 206 response now returns false because the node is still syncing.
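
A hedged sketch of the new interpretation, assuming the standard beacon API health semantics (200 ready, 206 syncing) and an assumed local REST endpoint:

```go
package main

import (
	"fmt"
	"net/http"
)

// isReady sketches the changed reading of GET /eth/v1/node/health
// (illustrative, not the validator client's actual code): 200 means ready,
// 206 (StatusPartialContent) means syncing and now counts as not ready,
// and anything else is treated as unhealthy.
func isReady(client *http.Client, baseURL string) (bool, error) {
	resp, err := client.Get(baseURL + "/eth/v1/node/health")
	if err != nil {
		return false, fmt.Errorf("health request: %w", err)
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ready, err := isReady(http.DefaultClient, "http://localhost:3500")
	fmt.Println(ready, err)
}
```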

View File

@@ -1,3 +0,0 @@
### Changed
- The e2e sync committee evaluator now skips the first slot after startup. We already skip the fork epoch for these checks; this extra skip applies only at startup, because Altair is always active from genesis and validators need to warm up.

View File

@@ -0,0 +1,3 @@
### Fixed
- Don't call trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize) twice.

View File

@@ -1,3 +0,0 @@
### Changed
- Pending aggregates: when multiple aggregated attestations differing only by the aggregator index are in the pending queue, only process one of them.

View File

@@ -1,2 +0,0 @@
### Changed
- `validateDataColumn`: Remove error logs.

View File

@@ -1,2 +0,0 @@
### Ignored
- Added test requirement to `PULL_REQUEST_TEMPLATE.md`

View File

@@ -0,0 +1,2 @@
### Added
- `--disable-get-blobs-v2` flag.

View File

@@ -1,7 +0,0 @@
### Added
- prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs.
- prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer.
### Changed
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs.
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs.

View File

@@ -0,0 +1,3 @@
### Added
- Batch publish data columns for faster data propagation.

View File

@@ -1,3 +0,0 @@
### Fixed
- Fixed possible race when validating two attestations at the same time.

View File

@@ -1,3 +0,0 @@
### Added
- Track the dependent root of the latest finalized checkpoint in forkchoice.

View File

@@ -0,0 +1,2 @@
### Added
- Add a feature flag to pass spectests with low validator count.
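
A sketch of the gated bound under stated assumptions (illustrative names, not Prysm's exact code):

```go
package example

// sweepBound sketches the gated optimization: with the flag enabled, the
// withdrawals sweep is bounded by min(validatorCount,
// MaxValidatorsPerWithdrawalsSweep) instead of the constant alone, which is
// what spectests with very low validator counts expect.
func sweepBound(validatorCount, maxPerSweep uint64, lowValcountSweep bool) uint64 {
	if lowValcountSweep && validatorCount < maxPerSweep {
		return validatorCount
	}
	return maxPerSweep
}
```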

View File

@@ -1,3 +0,0 @@
### Fixed
- Do not process slots and copy states for next epoch proposers after Fulu

View File

@@ -1,3 +0,0 @@
### Fixed
- Do not error when committee has been computed correctly but updating the cache failed.

View File

@@ -0,0 +1,2 @@
### Added
- Update spectests to v1.7.0-alpha.0

View File

@@ -0,0 +1,3 @@
### Ignored
- Updated changelog for v7.1.2

View File

@@ -1,3 +0,0 @@
### Ignored
- Updated CHANGELOG.md for v7.0.1 patch release

View File

@@ -1,3 +0,0 @@
### Ignored
- Changelog for v7.1.0

changelog/pvl-v7.1.1.md Normal file
View File

@@ -0,0 +1,3 @@
### Ignored
- Added changelog for v7.1.1

View File

@@ -0,0 +1,3 @@
### Changed
- Replaced `time.Sleep` with `require.Eventually` polling in tests to fix flaky behavior caused by race conditions between goroutines and assertions.
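
The pattern in miniature, as a sketch using testify's `require.Eventually` rather than actual repo code: poll the condition until it holds or the deadline passes, instead of sleeping a fixed duration and asserting once.

```go
package example

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// TestEventuallyPattern demonstrates the replacement pattern. Not repo code.
func TestEventuallyPattern(t *testing.T) {
	var started atomic.Bool
	go func() {
		time.Sleep(50 * time.Millisecond) // simulated async startup
		started.Store(true)
	}()
	require.Eventually(t, func() bool {
		return started.Load()
	}, 5*time.Second, 10*time.Millisecond, "expected service to start")
}
```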

View File

@@ -1,3 +0,0 @@
### Added
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement.

View File

@@ -1,3 +0,0 @@
### Ignored
- Use `WriteStateFetchError` in API handlers whenever possible.

View File

@@ -1,3 +0,0 @@
### Removed
- Unnecessary copy is removed from Eth1DataHasEnoughSupport

View File

@@ -1,3 +0,0 @@
### Added
- Proposal design document to implement graffiti. Currently it is empty by default, and the idea is to have it take the form GE168dPR63af.

View File

@@ -1,3 +0,0 @@
### Changed
- Optimise migratetocold by avoiding a brute-force for loop.

changelog/satushh-opt.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Performance improvement in state (MarshalSSZTo): use copy() instead of an unnecessary byte-by-byte loop.

View File

@@ -0,0 +1,3 @@
### Added
- Added basic Gloas builder support (`Builder` message and `BeaconStateGloas` `builders`/`next_withdrawal_builder_index` fields)

View File

@@ -356,4 +356,15 @@ var (
Usage: "A comma-separated list of exponents (of 2) in decreasing order, defining the state diff hierarchy levels. The last exponent must be greater than or equal to 5.",
Value: cli.NewIntSlice(21, 18, 16, 13, 11, 9, 5),
}
// DisableEphemeralLogFile disables the 24 hour debug log file.
DisableEphemeralLogFile = &cli.BoolFlag{
Name: "disable-ephemeral-log-file",
Usage: "Disables the creation of a debug log file that keeps 24 hours of logs.",
Value: false,
}
// DisableGetBlobsV2 disables the engine_getBlobsV2 usage.
DisableGetBlobsV2 = &cli.BoolFlag{
Name: "disable-get-blobs-v2",
Usage: "Disables the engine_getBlobsV2 usage.",
}
)

View File

@@ -17,6 +17,7 @@ type GlobalFlags struct {
SubscribeToAllSubnets bool
Supernode bool
SemiSupernode bool
DisableGetBlobsV2 bool
MinimumSyncPeers int
MinimumPeersPerSubnet int
MaxConcurrentDials int
@@ -72,6 +73,11 @@ func ConfigureGlobalFlags(ctx *cli.Context) error {
cfg.SemiSupernode = true
}
if ctx.Bool(DisableGetBlobsV2.Name) {
log.Warning("Disabling `engine_getBlobsV2` API")
cfg.DisableGetBlobsV2 = true
}
// State-diff-exponents
cfg.StateDiffExponents = ctx.IntSlice(StateDiffExponents.Name)
if features.Get().EnableStateDiff {

View File

@@ -148,6 +148,7 @@ var appFlags = []cli.Flag{
flags.SlasherDirFlag,
flags.SlasherFlag,
flags.JwtId,
flags.DisableGetBlobsV2,
storage.BlobStoragePathFlag,
storage.DataColumnStoragePathFlag,
storage.BlobStorageLayout,
@@ -157,6 +158,7 @@ var appFlags = []cli.Flag{
dasFlags.BackfillOldestSlot,
dasFlags.BlobRetentionEpochFlag,
flags.BatchVerifierLimit,
flags.DisableEphemeralLogFile,
}
func init() {
@@ -169,8 +171,15 @@ func before(ctx *cli.Context) error {
return errors.Wrap(err, "failed to load flags from config file")
}
format := ctx.String(cmd.LogFormat.Name)
// determine log verbosity
verbosity := ctx.String(cmd.VerbosityFlag.Name)
verbosityLevel, err := logrus.ParseLevel(verbosity)
if err != nil {
return errors.Wrap(err, "failed to parse log verbosity")
}
logs.SetLoggingLevel(verbosityLevel)
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
// disabling logrus default output so we can control it via different hooks
@@ -184,8 +193,9 @@ func before(ctx *cli.Context) error {
formatter.ForceColors = true
logrus.AddHook(&logs.WriterHook{
Formatter: formatter,
Writer: os.Stderr,
Formatter: formatter,
Writer: os.Stderr,
AllowedLevels: logrus.AllLevels[:verbosityLevel+1],
})
case "fluentd":
f := joonix.NewFormatter()
@@ -209,11 +219,17 @@ func before(ctx *cli.Context) error {
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}
if !ctx.Bool(flags.DisableEphemeralLogFile.Name) {
if err := logs.ConfigureEphemeralLogFile(ctx.String(cmd.DataDirFlag.Name), ctx.App.Name); err != nil {
log.WithError(err).Error("Failed to configure debug log file")
}
}
if err := cmd.ExpandSingleEndpointIfFile(ctx, flags.ExecutionEngineEndpoint); err != nil {
return errors.Wrap(err, "failed to expand single endpoint")
}
@@ -290,7 +306,7 @@ func startNode(ctx *cli.Context, cancel context.CancelFunc) error {
if err != nil {
return err
}
logrus.SetLevel(level)
// Set libp2p logger to only panic logs for the info level.
golog.SetAllLoggers(golog.LevelPanic)

View File

@@ -169,6 +169,7 @@ var appHelpFlagGroups = []flagGroup{
flags.ExecutionJWTSecretFlag,
flags.JwtId,
flags.InteropMockEth1DataVotesFlag,
flags.DisableGetBlobsV2,
},
},
{ // Flags relevant to configuring beacon chain monitoring.
@@ -199,6 +200,7 @@ var appHelpFlagGroups = []flagGroup{
cmd.LogFormat,
cmd.LogFileName,
cmd.VerbosityFlag,
flags.DisableEphemeralLogFile,
},
},
{ // Feature flags.

View File

@@ -86,7 +86,7 @@ func main() {
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format, level); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}

View File

@@ -410,6 +410,12 @@ var (
Usage: "Maximum number of health checks to perform before exiting if not healthy. Set to 0 or a negative number for indefinite checks.",
Value: DefaultMaxHealthChecks,
}
// DisableEphemeralLogFile disables the 24 hour debug log file.
DisableEphemeralLogFile = &cli.BoolFlag{
Name: "disable-ephemeral-log-file",
Usage: "Disables the creation of a debug log file that keeps 24 hours of logs.",
Value: false,
}
)
// DefaultValidatorDir returns OS-specific default validator directory.

View File

@@ -115,6 +115,7 @@ var appFlags = []cli.Flag{
debug.BlockProfileRateFlag,
debug.MutexProfileFractionFlag,
cmd.AcceptTosFlag,
flags.DisableEphemeralLogFile,
}
func init() {
@@ -147,6 +148,14 @@ func main() {
return err
}
// determine log verbosity
verbosity := ctx.String(cmd.VerbosityFlag.Name)
verbosityLevel, err := logrus.ParseLevel(verbosity)
if err != nil {
return errors.Wrap(err, "failed to parse log verbosity")
}
logs.SetLoggingLevel(verbosityLevel)
logFileName := ctx.String(cmd.LogFileName.Name)
format := ctx.String(cmd.LogFormat.Name)
@@ -163,8 +172,9 @@ func main() {
formatter.ForceColors = true
logrus.AddHook(&logs.WriterHook{
Formatter: formatter,
Writer: os.Stderr,
Formatter: formatter,
Writer: os.Stderr,
AllowedLevels: logrus.AllLevels[:verbosityLevel+1],
})
case "fluentd":
f := joonix.NewFormatter()
@@ -185,11 +195,17 @@ func main() {
}
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}
if !ctx.Bool(flags.DisableEphemeralLogFile.Name) {
if err := logs.ConfigureEphemeralLogFile(ctx.String(cmd.DataDirFlag.Name), ctx.App.Name); err != nil {
log.WithError(err).Error("Failed to configure debug log file")
}
}
// Fix data dir for Windows users.
outdatedDataDir := filepath.Join(file.HomeDir(), "AppData", "Roaming", "Eth2Validators")
currentDataDir := flags.DefaultValidatorDir()

View File

@@ -75,6 +75,7 @@ var appHelpFlagGroups = []flagGroup{
cmd.GrpcMaxCallRecvMsgSizeFlag,
cmd.AcceptTosFlag,
cmd.ApiTimeoutFlag,
flags.DisableEphemeralLogFile,
},
},
{

View File

@@ -80,6 +80,7 @@ type Flags struct {
SaveInvalidBlob bool // SaveInvalidBlob saves invalid blob to temp.
EnableDiscoveryReboot bool // EnableDiscoveryReboot allows the node to have its local listener to be rebooted in the event of discovery issues.
LowValcountSweep bool // LowValcountSweep bounds withdrawal sweep by validator count.
// KeystoreImportDebounceInterval specifies the time duration the validator waits to reload new keys if they have
// changed on disk. This feature is for advanced use cases only.
@@ -284,6 +285,10 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(ignoreUnviableAttestations)
cfg.IgnoreUnviableAttestations = true
}
if ctx.Bool(lowValcountSweep.Name) {
logEnabled(lowValcountSweep)
cfg.LowValcountSweep = true
}
if ctx.IsSet(EnableStateDiff.Name) {
logEnabled(EnableStateDiff)

View File

@@ -211,6 +211,13 @@ var (
Name: "ignore-unviable-attestations",
Usage: "Ignores attestations whose target state is not viable with respect to the current head (avoid expensive state replay from lagging attesters).",
}
// lowValcountSweep bounds withdrawal sweep by validator count.
lowValcountSweep = &cli.BoolFlag{
Name: "low-valcount-sweep",
Usage: "Uses validator count bound for withdrawal sweep when validator count is less than MaxValidatorsPerWithdrawalsSweep.",
Value: false,
Hidden: true,
}
)
// devModeFlags holds list of flags that are set when development mode is on.
@@ -272,6 +279,7 @@ var BeaconChainFlags = combinedFlags([]cli.Flag{
enableExperimentalAttestationPool,
forceHeadFlag,
blacklistRoots,
lowValcountSweep,
}, deprecatedBeaconFlags, deprecatedFlags, upcomingDeprecation)
func combinedFlags(flags ...[]cli.Flag) []cli.Flag {

View File

@@ -9,6 +9,7 @@ const (
RandaoMixesLength = 65536 // EPOCHS_PER_HISTORICAL_VECTOR
HistoricalRootsLength = 16777216 // HISTORICAL_ROOTS_LIMIT
ValidatorRegistryLimit = 1099511627776 // VALIDATOR_REGISTRY_LIMIT
BuilderRegistryLimit = 1099511627776 // BUILDER_REGISTRY_LIMIT
Eth1DataVotesLength = 2048 // SLOTS_PER_ETH1_VOTING_PERIOD
PreviousEpochAttestationsLength = 4096 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
CurrentEpochAttestationsLength = 4096 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH

View File

@@ -9,6 +9,7 @@ const (
RandaoMixesLength = 64 // EPOCHS_PER_HISTORICAL_VECTOR
HistoricalRootsLength = 16777216 // HISTORICAL_ROOTS_LIMIT
ValidatorRegistryLimit = 1099511627776 // VALIDATOR_REGISTRY_LIMIT
BuilderRegistryLimit = 1099511627776 // BUILDER_REGISTRY_LIMIT
Eth1DataVotesLength = 32 // SLOTS_PER_ETH1_VOTING_PERIOD
PreviousEpochAttestationsLength = 1024 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
CurrentEpochAttestationsLength = 1024 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH

View File

@@ -55,7 +55,8 @@ var placeholderFields = []string{
"MAX_REQUEST_BLOB_SIDECARS_FULU",
"MAX_REQUEST_INCLUSION_LIST",
"MAX_REQUEST_PAYLOADS", // Compile time constant on BeaconBlockBody.ExecutionRequests
"NUMBER_OF_COLUMNS", // Configured as a constant in config/fieldparams/mainnet.go
"MIN_BUILDER_WITHDRAWABILITY_DELAY",
"NUMBER_OF_COLUMNS", // Configured as a constant in config/fieldparams/mainnet.go
"PAYLOAD_ATTESTATION_DUE_BPS",
"PROPOSER_INCLUSION_LIST_CUTOFF",
"PROPOSER_INCLUSION_LIST_CUTOFF_BPS",

View File

@@ -206,7 +206,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
BeaconStateDenebFieldCount: 28,
BeaconStateElectraFieldCount: 37,
BeaconStateFuluFieldCount: 38,
BeaconStateGloasFieldCount: 43,
BeaconStateGloasFieldCount: 45,
// Slasher related values.
WeakSubjectivityPeriod: 54000,

View File

@@ -669,6 +669,7 @@ func hydrateBeaconBlockBodyGloas() *eth.BeaconBlockBodyGloas {
ParentBlockHash: make([]byte, fieldparams.RootLength),
ParentBlockRoot: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, 20),
BlobKzgCommitmentsRoot: make([]byte, fieldparams.RootLength),
},

go.mod
View File

@@ -96,6 +96,7 @@ require (
google.golang.org/grpc v1.71.0
google.golang.org/protobuf v1.36.5
gopkg.in/d4l3k/messagediff.v1 v1.2.1
gopkg.in/natefinch/lumberjack.v2 v2.2.1
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
honnef.co/go/tools v0.6.0
@@ -273,7 +274,6 @@ require (
golang.org/x/tools/go/expect v0.1.1-deprecated // indirect
gopkg.in/cenkalti/backoff.v1 v1.1.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
lukechampine.com/blake3 v1.3.0 // indirect
rsc.io/tmplfunc v0.0.3 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect

View File

@@ -17,7 +17,9 @@ go_library(
"//io/file:go_default_library",
"//runtime/logging/logrus-prefixed-formatter:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@in_gopkg_natefinch_lumberjack_v2//:go_default_library",
],
)
@@ -28,5 +30,8 @@ go_test(
"stream_test.go",
],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
deps = [
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -12,16 +12,25 @@ import (
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/io/file"
prefixed "github.com/OffchainLabs/prysm/v7/runtime/logging/logrus-prefixed-formatter"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"gopkg.in/natefinch/lumberjack.v2"
)
var ephemeralLogFileVerbosity = logrus.DebugLevel
// SetLoggingLevel sets the base logging level for logrus.
func SetLoggingLevel(lvl logrus.Level) {
logrus.SetLevel(max(lvl, ephemeralLogFileVerbosity))
}
func addLogWriter(w io.Writer) {
mw := io.MultiWriter(logrus.StandardLogger().Out, w)
logrus.SetOutput(mw)
}
// ConfigurePersistentLogging adds a log-to-file writer. File content is identical to stdout.
func ConfigurePersistentLogging(logFileName string, format string) error {
func ConfigurePersistentLogging(logFileName string, format string, lvl logrus.Level) error {
logrus.WithField("logFileName", logFileName).Info("Logs will be made persistent")
if err := file.MkdirAll(filepath.Dir(logFileName)); err != nil {
return err
@@ -47,14 +56,48 @@ func ConfigurePersistentLogging(logFileName string, format string) error {
formatter.DisableColors = true
logrus.AddHook(&WriterHook{
Formatter: formatter,
Writer: f,
Formatter: formatter,
Writer: f,
AllowedLevels: logrus.AllLevels[:lvl+1],
})
logrus.Info("File logging initialized")
return nil
}
// ConfigureEphemeralLogFile adds a log file that keeps 24 hours of logs at debug verbosity and above.
func ConfigureEphemeralLogFile(datadirPath string, app string) error {
logFilePath := filepath.Join(datadirPath, "logs", app+".log")
if err := file.MkdirAll(filepath.Dir(logFilePath)); err != nil {
return errors.Wrap(err, "failed to create directory")
}
// Create formatter and writer hook
formatter := new(prefixed.TextFormatter)
formatter.TimestampFormat = "2006-01-02 15:04:05.00"
formatter.FullTimestamp = true
// Disable log message coloring for file output because the ANSI color
// codes show up as gibberish in the log files.
formatter.DisableColors = true
// configure the lumberjack log writer to rotate logs every ~24 hours
debugWriter := &lumberjack.Logger{
Filename: logFilePath,
MaxSize: 250, // MB, to avoid unbounded growth
MaxBackups: 1, // one backup in case of size-based rotations
MaxAge: 1, // days; files older than this are removed
}
logrus.AddHook(&WriterHook{
Formatter: formatter,
Writer: debugWriter,
AllowedLevels: logrus.AllLevels[:ephemeralLogFileVerbosity+1],
})
logrus.Info("Ephemeral log file initialized")
return nil
}
// MaskCredentialsLogging masks the url credentials before logging for security purposes
// [scheme:][//[userinfo@]host][/]path[?query][#fragment] --> [scheme:][//[***]host][/***][#***]
// if the format is not matched nothing is done, string is returned as is.
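
One note on the level arithmetic in `SetLoggingLevel` above: logrus orders levels from `PanicLevel` (0) up to `TraceLevel` (6), so a numerically larger level is more verbose; taking the max keeps the global logger permissive enough for the debug-file hook, while each hook's `AllowedLevels` slice restricts its own output. A tiny illustrative snippet (not repo code):

```go
package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
)

func main() {
	// PanicLevel=0 ... InfoLevel=4, DebugLevel=5, TraceLevel=6.
	// max(InfoLevel, DebugLevel) == DebugLevel, so entries down to debug are
	// still created; per-hook AllowedLevels slices such as
	// logrus.AllLevels[:logrus.InfoLevel+1] then filter each writer.
	fmt.Println(max(logrus.InfoLevel, logrus.DebugLevel)) // prints "debug"
}
```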

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/sirupsen/logrus"
)
var urltests = []struct {
@@ -34,13 +35,13 @@ func TestConfigurePersistantLogging(t *testing.T) {
logFileName := "test.log"
existingDirectory := "test-1-existing-testing-dir"
err := ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, logFileName), "text")
err := ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, logFileName), "text", logrus.InfoLevel)
require.NoError(t, err)
// 2. Test creation of file along with parent directory
nonExistingDirectory := "test-2-non-existing-testing-dir"
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, nonExistingDirectory, logFileName), "text")
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, nonExistingDirectory, logFileName), "text", logrus.InfoLevel)
require.NoError(t, err)
// 3. Test creation of file in an existing parent directory with a non-existing sub-directory
@@ -51,7 +52,7 @@ func TestConfigurePersistantLogging(t *testing.T) {
return
}
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory, logFileName), "text")
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory, logFileName), "text", logrus.InfoLevel)
require.NoError(t, err)
//4. Create log file in a directory without 700 permissions
@@ -61,3 +62,38 @@ func TestConfigurePersistantLogging(t *testing.T) {
return
}
}
func TestConfigureEphemeralLogFile(t *testing.T) {
testParentDir := t.TempDir()
// 1. Test creation of file in an existing parent directory
existingDirectory := "test-1-existing-testing-dir"
err := ConfigureEphemeralLogFile(fmt.Sprintf("%s/%s", testParentDir, existingDirectory), "beacon-chain")
require.NoError(t, err)
// 2. Test creation of file along with parent directory
nonExistingDirectory := "test-2-non-existing-testing-dir"
err = ConfigureEphemeralLogFile(fmt.Sprintf("%s/%s", testParentDir, nonExistingDirectory), "beacon-chain")
require.NoError(t, err)
// 3. Test creation of file in an existing parent directory with a non-existing sub-directory
existingDirectory = "test-3-existing-testing-dir"
nonExistingSubDirectory := "test-3-non-existing-sub-dir"
err = os.Mkdir(fmt.Sprintf("%s/%s", testParentDir, existingDirectory), 0700)
if err != nil {
return
}
err = ConfigureEphemeralLogFile(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory), "beacon-chain")
require.NoError(t, err)
//4. Create log file in a directory without 700 permissions
existingDirectory = "test-4-existing-testing-dir"
err = os.Mkdir(fmt.Sprintf("%s/%s", testParentDir, existingDirectory), 0750)
if err != nil {
return
}
err = ConfigureEphemeralLogFile(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory), "beacon-chain")
require.NoError(t, err)
}

View File

@@ -45,7 +45,6 @@ func Setup(ctx context.Context, serviceName, processName, endpoint string, sampl
exporter,
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
trace.WithBatchTimeout(trace.DefaultScheduleDelay*time.Millisecond),
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
),
trace.WithResource(
resource.NewWithAttributes(

View File

@@ -35,6 +35,21 @@ func CopyValidator(val *Validator) *Validator {
}
}
// CopyBuilder copies the provided builder.
func CopyBuilder(builder *Builder) *Builder {
if builder == nil {
return nil
}
return &Builder{
Pubkey: bytesutil.SafeCopyBytes(builder.Pubkey),
Version: bytesutil.SafeCopyBytes(builder.Version),
ExecutionAddress: bytesutil.SafeCopyBytes(builder.ExecutionAddress),
Balance: builder.Balance,
DepositEpoch: builder.DepositEpoch,
WithdrawableEpoch: builder.WithdrawableEpoch,
}
}
// CopySyncCommitteeMessage copies the provided sync committee message object.
func CopySyncCommitteeMessage(s *SyncCommitteeMessage) *SyncCommitteeMessage {
if s == nil {

View File

@@ -1220,7 +1220,7 @@ func genExecutionPayloadBidGloas() *v1alpha1.ExecutionPayloadBid {
BlockHash: bytes(32),
FeeRecipient: bytes(20),
GasLimit: rand.Uint64(),
BuilderIndex: primitives.ValidatorIndex(rand.Uint64()),
BuilderIndex: primitives.BuilderIndex(rand.Uint64()),
Slot: primitives.Slot(rand.Uint64()),
Value: primitives.Gwei(rand.Uint64()),
BlobKzgCommitmentsRoot: bytes(32),

View File

@@ -13,11 +13,13 @@ func (header *ExecutionPayloadBid) Copy() *ExecutionPayloadBid {
ParentBlockHash: bytesutil.SafeCopyBytes(header.ParentBlockHash),
ParentBlockRoot: bytesutil.SafeCopyBytes(header.ParentBlockRoot),
BlockHash: bytesutil.SafeCopyBytes(header.BlockHash),
PrevRandao: bytesutil.SafeCopyBytes(header.PrevRandao),
FeeRecipient: bytesutil.SafeCopyBytes(header.FeeRecipient),
GasLimit: header.GasLimit,
BuilderIndex: header.BuilderIndex,
Slot: header.Slot,
Value: header.Value,
ExecutionPayment: header.ExecutionPayment,
BlobKzgCommitmentsRoot: bytesutil.SafeCopyBytes(header.BlobKzgCommitmentsRoot),
}
}
@@ -28,10 +30,9 @@ func (withdrawal *BuilderPendingWithdrawal) Copy() *BuilderPendingWithdrawal {
return nil
}
return &BuilderPendingWithdrawal{
FeeRecipient: bytesutil.SafeCopyBytes(withdrawal.FeeRecipient),
Amount: withdrawal.Amount,
BuilderIndex: withdrawal.BuilderIndex,
WithdrawableEpoch: withdrawal.WithdrawableEpoch,
FeeRecipient: bytesutil.SafeCopyBytes(withdrawal.FeeRecipient),
Amount: withdrawal.Amount,
BuilderIndex: withdrawal.BuilderIndex,
}
}

File diff suppressed because it is too large

View File

@@ -26,30 +26,37 @@ option go_package = "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1;eth";
// parent_block_hash: Hash32
// parent_block_root: Root
// block_hash: Hash32
// prev_randao: Bytes32
// fee_recipient: ExecutionAddress
// gas_limit: uint64
// builder_index: ValidatorIndex
// builder_index: BuilderIndex
// slot: Slot
// value: Gwei
// execution_payment: Gwei
// blob_kzg_commitments_root: Root
message ExecutionPayloadBid {
bytes parent_block_hash = 1 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes parent_block_root = 2 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes block_hash = 3 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes fee_recipient = 4 [ (ethereum.eth.ext.ssz_size) = "20" ];
uint64 gas_limit = 5;
uint64 builder_index = 6 [ (ethereum.eth.ext.cast_type) =
bytes prev_randao = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes fee_recipient = 5 [ (ethereum.eth.ext.ssz_size) = "20" ];
uint64 gas_limit = 6;
uint64 builder_index = 7 [ (ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/"
"consensus-types/primitives.ValidatorIndex" ];
uint64 slot = 7 [
"consensus-types/primitives.BuilderIndex" ];
uint64 slot = 8 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Slot"
];
uint64 value = 8 [
uint64 value = 9 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
];
bytes blob_kzg_commitments_root = 9 [ (ethereum.eth.ext.ssz_size) = "32" ];
uint64 execution_payment = 10 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
];
bytes blob_kzg_commitments_root = 11 [ (ethereum.eth.ext.ssz_size) = "32" ];
}
// SignedExecutionPayloadBid wraps an execution payload bid with a signature.
@@ -185,12 +192,15 @@ message SignedBeaconBlockGloas {
// [All previous fields from earlier forks]
// # Replaced existing latest execution header position
// latest_execution_payload_bid: ExecutionPayloadBid
// # New fields in Gloas:EIP7732
// builders: List[Builder, BUILDER_REGISTRY_LIMIT]
// # [New in Gloas:EIP7732]
// next_withdrawal_builder_index: BuilderIndex
// # [New in Gloas:EIP7732]
// execution_payload_availability: Bitvector[SLOTS_PER_HISTORICAL_ROOT]
// builder_pending_payments: Vector[BuilderPendingPayment, 2 * SLOTS_PER_EPOCH]
// builder_pending_withdrawals: List[BuilderPendingWithdrawal, BUILDER_PENDING_WITHDRAWALS_LIMIT]
// latest_block_hash: Hash32
// latest_withdrawals_root: Root
// payload_expected_withdrawals: List[Withdrawal, MAX_WITHDRAWALS_PER_PAYLOAD]
message BeaconStateGloas {
// Versioning [1001-2000]
uint64 genesis_time = 1001;
@@ -300,13 +310,20 @@ message BeaconStateGloas {
[ (ethereum.eth.ext.ssz_size) = "proposer_lookahead_size" ];
// Fields introduced in Gloas fork [14001-15000]
bytes execution_payload_availability = 14001 [
repeated Builder builders = 14001
[ (ethereum.eth.ext.ssz_max) = "builder_registry_limit" ];
uint64 next_withdrawal_builder_index = 14002
[ (ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/"
"primitives.BuilderIndex" ];
bytes execution_payload_availability = 14003 [
(ethereum.eth.ext.ssz_size) = "execution_payload_availability.size"
];
repeated BuilderPendingPayment builder_pending_payments = 14002 [(ethereum.eth.ext.ssz_size) = "builder_pending_payments.size"];
repeated BuilderPendingWithdrawal builder_pending_withdrawals = 14003 [(ethereum.eth.ext.ssz_max) = "1048576"];
bytes latest_block_hash = 14004 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes latest_withdrawals_root = 14005 [ (ethereum.eth.ext.ssz_size) = "32" ];
repeated BuilderPendingPayment builder_pending_payments = 14004 [(ethereum.eth.ext.ssz_size) = "builder_pending_payments.size"];
repeated BuilderPendingWithdrawal builder_pending_withdrawals = 14005 [(ethereum.eth.ext.ssz_max) = "1048576"];
bytes latest_block_hash = 14006 [ (ethereum.eth.ext.ssz_size) = "32" ];
repeated ethereum.engine.v1.Withdrawal payload_expected_withdrawals = 14007
[ (ethereum.eth.ext.ssz_max) = "withdrawal.size" ];
}
// BuilderPendingPayment represents a pending payment to a builder.
@@ -329,8 +346,7 @@ message BuilderPendingPayment {
// class BuilderPendingWithdrawal(Container):
// fee_recipient: ExecutionAddress
// amount: Gwei
// builder_index: ValidatorIndex
// withdrawable_epoch: Epoch
// builder_index: BuilderIndex
message BuilderPendingWithdrawal {
bytes fee_recipient = 1 [ (ethereum.eth.ext.ssz_size) = "20" ];
uint64 amount = 2 [
@@ -340,11 +356,7 @@ message BuilderPendingWithdrawal {
uint64 builder_index = 3
[ (ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/"
"primitives.ValidatorIndex" ];
uint64 withdrawable_epoch = 4 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"
];
"primitives.BuilderIndex" ];
}
// DataColumnSidecarGloas represents a data column sidecar in the Gloas fork.
@@ -387,7 +399,7 @@ message DataColumnSidecarGloas {
// class ExecutionPayloadEnvelope(Container):
// payload: ExecutionPayload
// execution_requests: ExecutionRequests
// builder_index: ValidatorIndex
// builder_index: BuilderIndex
// beacon_block_root: Root
// slot: Slot
// blob_kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
@@ -397,7 +409,7 @@ message ExecutionPayloadEnvelope {
ethereum.engine.v1.ExecutionRequests execution_requests = 2;
uint64 builder_index = 3 [ (ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/"
"consensus-types/primitives.ValidatorIndex" ];
"consensus-types/primitives.BuilderIndex" ];
bytes beacon_block_root = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
uint64 slot = 5 [
(ethereum.eth.ext.cast_type) =
@@ -421,3 +433,31 @@ message SignedExecutionPayloadEnvelope {
ExecutionPayloadEnvelope message = 1;
bytes signature = 2 [ (ethereum.eth.ext.ssz_size) = "96" ];
}
// Builder represents a builder in the Gloas fork.
//
// Spec:
// class Builder(Container):
// pubkey: BLSPubkey
// version: uint8
// execution_address: ExecutionAddress
// balance: Gwei
// deposit_epoch: Epoch
// withdrawable_epoch: Epoch
message Builder {
bytes pubkey = 1 [ (ethereum.eth.ext.ssz_size) = "48" ];
bytes version = 2 [ (ethereum.eth.ext.ssz_size) = "1" ];
bytes execution_address = 3 [ (ethereum.eth.ext.ssz_size) = "20" ];
uint64 balance = 4 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
];
uint64 deposit_epoch = 5 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"
];
uint64 withdrawable_epoch = 6 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"
];
}
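The Builder container added here is a fixed-size SSZ object. Its encoded size, used by the generated code in the next file, follows directly from the field widths in the spec comment above; a quick sketch of the arithmetic (illustrative Go constant, not generated code):

// Fixed SSZ size of the Gloas Builder container.
const builderSSZSize = 48 + // pubkey: BLSPubkey
	1 + // version: uint8
	20 + // execution_address: ExecutionAddress
	8 + // balance: Gwei
	8 + // deposit_epoch: Epoch
	8 // withdrawable_epoch: Epoch
// builderSSZSize == 93, matching Builder.SizeSSZ in the generated code.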

View File

@@ -37,26 +37,36 @@ func (e *ExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err error) {
}
dst = append(dst, e.BlockHash...)
// Field (3) 'FeeRecipient'
// Field (3) 'PrevRandao'
if size := len(e.PrevRandao); size != 32 {
err = ssz.ErrBytesLengthFn("--.PrevRandao", size, 32)
return
}
dst = append(dst, e.PrevRandao...)
// Field (4) 'FeeRecipient'
if size := len(e.FeeRecipient); size != 20 {
err = ssz.ErrBytesLengthFn("--.FeeRecipient", size, 20)
return
}
dst = append(dst, e.FeeRecipient...)
// Field (4) 'GasLimit'
// Field (5) 'GasLimit'
dst = ssz.MarshalUint64(dst, e.GasLimit)
// Field (5) 'BuilderIndex'
// Field (6) 'BuilderIndex'
dst = ssz.MarshalUint64(dst, uint64(e.BuilderIndex))
// Field (6) 'Slot'
// Field (7) 'Slot'
dst = ssz.MarshalUint64(dst, uint64(e.Slot))
// Field (7) 'Value'
// Field (8) 'Value'
dst = ssz.MarshalUint64(dst, uint64(e.Value))
// Field (8) 'BlobKzgCommitmentsRoot'
// Field (9) 'ExecutionPayment'
dst = ssz.MarshalUint64(dst, uint64(e.ExecutionPayment))
// Field (10) 'BlobKzgCommitmentsRoot'
if size := len(e.BlobKzgCommitmentsRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitmentsRoot", size, 32)
return
@@ -70,7 +80,7 @@ func (e *ExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err error) {
func (e *ExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 180 {
if size != 220 {
return ssz.ErrSize
}
@@ -92,36 +102,45 @@ func (e *ExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
}
e.BlockHash = append(e.BlockHash, buf[64:96]...)
// Field (3) 'FeeRecipient'
// Field (3) 'PrevRandao'
if cap(e.PrevRandao) == 0 {
e.PrevRandao = make([]byte, 0, len(buf[96:128]))
}
e.PrevRandao = append(e.PrevRandao, buf[96:128]...)
// Field (4) 'FeeRecipient'
if cap(e.FeeRecipient) == 0 {
e.FeeRecipient = make([]byte, 0, len(buf[96:116]))
e.FeeRecipient = make([]byte, 0, len(buf[128:148]))
}
e.FeeRecipient = append(e.FeeRecipient, buf[96:116]...)
e.FeeRecipient = append(e.FeeRecipient, buf[128:148]...)
// Field (4) 'GasLimit'
e.GasLimit = ssz.UnmarshallUint64(buf[116:124])
// Field (5) 'GasLimit'
e.GasLimit = ssz.UnmarshallUint64(buf[148:156])
// Field (5) 'BuilderIndex'
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[124:132]))
// Field (6) 'BuilderIndex'
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[156:164]))
// Field (6) 'Slot'
e.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[132:140]))
// Field (7) 'Slot'
e.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[164:172]))
// Field (7) 'Value'
e.Value = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[140:148]))
// Field (8) 'Value'
e.Value = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[172:180]))
// Field (8) 'BlobKzgCommitmentsRoot'
// Field (9) 'ExecutionPayment'
e.ExecutionPayment = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[180:188]))
// Field (10) 'BlobKzgCommitmentsRoot'
if cap(e.BlobKzgCommitmentsRoot) == 0 {
e.BlobKzgCommitmentsRoot = make([]byte, 0, len(buf[148:180]))
e.BlobKzgCommitmentsRoot = make([]byte, 0, len(buf[188:220]))
}
e.BlobKzgCommitmentsRoot = append(e.BlobKzgCommitmentsRoot, buf[148:180]...)
e.BlobKzgCommitmentsRoot = append(e.BlobKzgCommitmentsRoot, buf[188:220]...)
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the ExecutionPayloadBid object
func (e *ExecutionPayloadBid) SizeSSZ() (size int) {
size = 180
size = 220
return
}
@@ -155,26 +174,36 @@ func (e *ExecutionPayloadBid) HashTreeRootWith(hh *ssz.Hasher) (err error) {
}
hh.PutBytes(e.BlockHash)
// Field (3) 'FeeRecipient'
// Field (3) 'PrevRandao'
if size := len(e.PrevRandao); size != 32 {
err = ssz.ErrBytesLengthFn("--.PrevRandao", size, 32)
return
}
hh.PutBytes(e.PrevRandao)
// Field (4) 'FeeRecipient'
if size := len(e.FeeRecipient); size != 20 {
err = ssz.ErrBytesLengthFn("--.FeeRecipient", size, 20)
return
}
hh.PutBytes(e.FeeRecipient)
// Field (4) 'GasLimit'
// Field (5) 'GasLimit'
hh.PutUint64(e.GasLimit)
// Field (5) 'BuilderIndex'
// Field (6) 'BuilderIndex'
hh.PutUint64(uint64(e.BuilderIndex))
// Field (6) 'Slot'
// Field (7) 'Slot'
hh.PutUint64(uint64(e.Slot))
// Field (7) 'Value'
// Field (8) 'Value'
hh.PutUint64(uint64(e.Value))
// Field (8) 'BlobKzgCommitmentsRoot'
// Field (9) 'ExecutionPayment'
hh.PutUint64(uint64(e.ExecutionPayment))
// Field (10) 'BlobKzgCommitmentsRoot'
if size := len(e.BlobKzgCommitmentsRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitmentsRoot", size, 32)
return
@@ -216,7 +245,7 @@ func (s *SignedExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err er
func (s *SignedExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 276 {
if size != 316 {
return ssz.ErrSize
}
@@ -224,22 +253,22 @@ func (s *SignedExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
if s.Message == nil {
s.Message = new(ExecutionPayloadBid)
}
if err = s.Message.UnmarshalSSZ(buf[0:180]); err != nil {
if err = s.Message.UnmarshalSSZ(buf[0:220]); err != nil {
return err
}
// Field (1) 'Signature'
if cap(s.Signature) == 0 {
s.Signature = make([]byte, 0, len(buf[180:276]))
s.Signature = make([]byte, 0, len(buf[220:316]))
}
s.Signature = append(s.Signature, buf[180:276]...)
s.Signature = append(s.Signature, buf[220:316]...)
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the SignedExecutionPayloadBid object
func (s *SignedExecutionPayloadBid) SizeSSZ() (size int) {
size = 276
size = 316
return
}
@@ -713,7 +742,7 @@ func (b *BeaconBlockBodyGloas) MarshalSSZ() ([]byte, error) {
// MarshalSSZTo ssz marshals the BeaconBlockBodyGloas object to a target array
func (b *BeaconBlockBodyGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
offset := int(664)
offset := int(704)
// Field (0) 'RandaoReveal'
if size := len(b.RandaoReveal); size != 96 {
@@ -885,7 +914,7 @@ func (b *BeaconBlockBodyGloas) MarshalSSZTo(buf []byte) (dst []byte, err error)
func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size < 664 {
if size < 704 {
return ssz.ErrSize
}
@@ -917,7 +946,7 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
return ssz.ErrOffset
}
if o3 != 664 {
if o3 != 704 {
return ssz.ErrInvalidVariableOffset
}
@@ -958,12 +987,12 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
if b.SignedExecutionPayloadBid == nil {
b.SignedExecutionPayloadBid = new(SignedExecutionPayloadBid)
}
if err = b.SignedExecutionPayloadBid.UnmarshalSSZ(buf[384:660]); err != nil {
if err = b.SignedExecutionPayloadBid.UnmarshalSSZ(buf[384:700]); err != nil {
return err
}
// Offset (11) 'PayloadAttestations'
if o11 = ssz.ReadOffset(buf[660:664]); o11 > size || o9 > o11 {
if o11 = ssz.ReadOffset(buf[700:704]); o11 > size || o9 > o11 {
return ssz.ErrOffset
}
@@ -1105,7 +1134,7 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
// SizeSSZ returns the ssz encoded size in bytes for the BeaconBlockBodyGloas object
func (b *BeaconBlockBodyGloas) SizeSSZ() (size int) {
size = 664
size = 704
// Field (3) 'ProposerSlashings'
size += len(b.ProposerSlashings) * 416
@@ -1408,7 +1437,7 @@ func (b *BeaconStateGloas) MarshalSSZ() ([]byte, error) {
// MarshalSSZTo ssz marshals the BeaconStateGloas object to a target array
func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
offset := int(2741821)
offset := int(2741333)
// Field (0) 'GenesisTime'
dst = ssz.MarshalUint64(dst, b.GenesisTime)
@@ -1630,14 +1659,21 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = ssz.MarshalUint64(dst, b.ProposerLookahead[ii])
}
// Field (38) 'ExecutionPayloadAvailability'
// Offset (38) 'Builders'
dst = ssz.WriteOffset(dst, offset)
offset += len(b.Builders) * 93
// Field (39) 'NextWithdrawalBuilderIndex'
dst = ssz.MarshalUint64(dst, uint64(b.NextWithdrawalBuilderIndex))
// Field (40) 'ExecutionPayloadAvailability'
if size := len(b.ExecutionPayloadAvailability); size != 1024 {
err = ssz.ErrBytesLengthFn("--.ExecutionPayloadAvailability", size, 1024)
return
}
dst = append(dst, b.ExecutionPayloadAvailability...)
// Field (39) 'BuilderPendingPayments'
// Field (41) 'BuilderPendingPayments'
if size := len(b.BuilderPendingPayments); size != 64 {
err = ssz.ErrVectorLengthFn("--.BuilderPendingPayments", size, 64)
return
@@ -1648,23 +1684,20 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
}
}
// Offset (40) 'BuilderPendingWithdrawals'
// Offset (42) 'BuilderPendingWithdrawals'
dst = ssz.WriteOffset(dst, offset)
offset += len(b.BuilderPendingWithdrawals) * 44
offset += len(b.BuilderPendingWithdrawals) * 36
// Field (41) 'LatestBlockHash'
// Field (43) 'LatestBlockHash'
if size := len(b.LatestBlockHash); size != 32 {
err = ssz.ErrBytesLengthFn("--.LatestBlockHash", size, 32)
return
}
dst = append(dst, b.LatestBlockHash...)
// Field (42) 'LatestWithdrawalsRoot'
if size := len(b.LatestWithdrawalsRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.LatestWithdrawalsRoot", size, 32)
return
}
dst = append(dst, b.LatestWithdrawalsRoot...)
// Offset (44) 'PayloadExpectedWithdrawals'
dst = ssz.WriteOffset(dst, offset)
offset += len(b.PayloadExpectedWithdrawals) * 44
// Field (7) 'HistoricalRoots'
if size := len(b.HistoricalRoots); size > 16777216 {
@@ -1777,7 +1810,18 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
}
}
// Field (40) 'BuilderPendingWithdrawals'
// Field (38) 'Builders'
if size := len(b.Builders); size > 1099511627776 {
err = ssz.ErrListTooBigFn("--.Builders", size, 1099511627776)
return
}
for ii := 0; ii < len(b.Builders); ii++ {
if dst, err = b.Builders[ii].MarshalSSZTo(dst); err != nil {
return
}
}
// Field (42) 'BuilderPendingWithdrawals'
if size := len(b.BuilderPendingWithdrawals); size > 1048576 {
err = ssz.ErrListTooBigFn("--.BuilderPendingWithdrawals", size, 1048576)
return
@@ -1788,6 +1832,17 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
}
}
// Field (44) 'PayloadExpectedWithdrawals'
if size := len(b.PayloadExpectedWithdrawals); size > 16 {
err = ssz.ErrListTooBigFn("--.PayloadExpectedWithdrawals", size, 16)
return
}
for ii := 0; ii < len(b.PayloadExpectedWithdrawals); ii++ {
if dst, err = b.PayloadExpectedWithdrawals[ii].MarshalSSZTo(dst); err != nil {
return
}
}
return
}
@@ -1795,12 +1850,12 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size < 2741821 {
if size < 2741333 {
return ssz.ErrSize
}
tail := buf
var o7, o9, o11, o12, o15, o16, o21, o27, o34, o35, o36, o40 uint64
var o7, o9, o11, o12, o15, o16, o21, o27, o34, o35, o36, o38, o42, o44 uint64
// Field (0) 'GenesisTime'
b.GenesisTime = ssz.UnmarshallUint64(buf[0:8])
@@ -1853,7 +1908,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
return ssz.ErrOffset
}
if o7 != 2741821 {
if o7 != 2741333 {
return ssz.ErrInvalidVariableOffset
}
@@ -1963,93 +2018,100 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
if b.LatestExecutionPayloadBid == nil {
b.LatestExecutionPayloadBid = new(ExecutionPayloadBid)
}
if err = b.LatestExecutionPayloadBid.UnmarshalSSZ(buf[2736629:2736809]); err != nil {
if err = b.LatestExecutionPayloadBid.UnmarshalSSZ(buf[2736629:2736849]); err != nil {
return err
}
// Field (25) 'NextWithdrawalIndex'
b.NextWithdrawalIndex = ssz.UnmarshallUint64(buf[2736809:2736817])
b.NextWithdrawalIndex = ssz.UnmarshallUint64(buf[2736849:2736857])
// Field (26) 'NextWithdrawalValidatorIndex'
b.NextWithdrawalValidatorIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[2736817:2736825]))
b.NextWithdrawalValidatorIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[2736857:2736865]))
// Offset (27) 'HistoricalSummaries'
if o27 = ssz.ReadOffset(buf[2736825:2736829]); o27 > size || o21 > o27 {
if o27 = ssz.ReadOffset(buf[2736865:2736869]); o27 > size || o21 > o27 {
return ssz.ErrOffset
}
// Field (28) 'DepositRequestsStartIndex'
b.DepositRequestsStartIndex = ssz.UnmarshallUint64(buf[2736829:2736837])
b.DepositRequestsStartIndex = ssz.UnmarshallUint64(buf[2736869:2736877])
// Field (29) 'DepositBalanceToConsume'
b.DepositBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736837:2736845]))
b.DepositBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736877:2736885]))
// Field (30) 'ExitBalanceToConsume'
b.ExitBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736845:2736853]))
b.ExitBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736885:2736893]))
// Field (31) 'EarliestExitEpoch'
b.EarliestExitEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736853:2736861]))
b.EarliestExitEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736893:2736901]))
// Field (32) 'ConsolidationBalanceToConsume'
b.ConsolidationBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736861:2736869]))
b.ConsolidationBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736901:2736909]))
// Field (33) 'EarliestConsolidationEpoch'
b.EarliestConsolidationEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736869:2736877]))
b.EarliestConsolidationEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736909:2736917]))
// Offset (34) 'PendingDeposits'
if o34 = ssz.ReadOffset(buf[2736877:2736881]); o34 > size || o27 > o34 {
if o34 = ssz.ReadOffset(buf[2736917:2736921]); o34 > size || o27 > o34 {
return ssz.ErrOffset
}
// Offset (35) 'PendingPartialWithdrawals'
if o35 = ssz.ReadOffset(buf[2736881:2736885]); o35 > size || o34 > o35 {
if o35 = ssz.ReadOffset(buf[2736921:2736925]); o35 > size || o34 > o35 {
return ssz.ErrOffset
}
// Offset (36) 'PendingConsolidations'
if o36 = ssz.ReadOffset(buf[2736885:2736889]); o36 > size || o35 > o36 {
if o36 = ssz.ReadOffset(buf[2736925:2736929]); o36 > size || o35 > o36 {
return ssz.ErrOffset
}
// Field (37) 'ProposerLookahead'
b.ProposerLookahead = ssz.ExtendUint64(b.ProposerLookahead, 64)
for ii := 0; ii < 64; ii++ {
b.ProposerLookahead[ii] = ssz.UnmarshallUint64(buf[2736889:2737401][ii*8 : (ii+1)*8])
b.ProposerLookahead[ii] = ssz.UnmarshallUint64(buf[2736929:2737441][ii*8 : (ii+1)*8])
}
// Field (38) 'ExecutionPayloadAvailability'
// Offset (38) 'Builders'
if o38 = ssz.ReadOffset(buf[2737441:2737445]); o38 > size || o36 > o38 {
return ssz.ErrOffset
}
// Field (39) 'NextWithdrawalBuilderIndex'
b.NextWithdrawalBuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[2737445:2737453]))
// Field (40) 'ExecutionPayloadAvailability'
if cap(b.ExecutionPayloadAvailability) == 0 {
b.ExecutionPayloadAvailability = make([]byte, 0, len(buf[2737401:2738425]))
b.ExecutionPayloadAvailability = make([]byte, 0, len(buf[2737453:2738477]))
}
b.ExecutionPayloadAvailability = append(b.ExecutionPayloadAvailability, buf[2737401:2738425]...)
b.ExecutionPayloadAvailability = append(b.ExecutionPayloadAvailability, buf[2737453:2738477]...)
// Field (39) 'BuilderPendingPayments'
// Field (41) 'BuilderPendingPayments'
b.BuilderPendingPayments = make([]*BuilderPendingPayment, 64)
for ii := 0; ii < 64; ii++ {
if b.BuilderPendingPayments[ii] == nil {
b.BuilderPendingPayments[ii] = new(BuilderPendingPayment)
}
if err = b.BuilderPendingPayments[ii].UnmarshalSSZ(buf[2738425:2741753][ii*52 : (ii+1)*52]); err != nil {
if err = b.BuilderPendingPayments[ii].UnmarshalSSZ(buf[2738477:2741293][ii*44 : (ii+1)*44]); err != nil {
return err
}
}
// Offset (40) 'BuilderPendingWithdrawals'
if o40 = ssz.ReadOffset(buf[2741753:2741757]); o40 > size || o36 > o40 {
// Offset (42) 'BuilderPendingWithdrawals'
if o42 = ssz.ReadOffset(buf[2741293:2741297]); o42 > size || o38 > o42 {
return ssz.ErrOffset
}
// Field (41) 'LatestBlockHash'
// Field (43) 'LatestBlockHash'
if cap(b.LatestBlockHash) == 0 {
b.LatestBlockHash = make([]byte, 0, len(buf[2741757:2741789]))
b.LatestBlockHash = make([]byte, 0, len(buf[2741297:2741329]))
}
b.LatestBlockHash = append(b.LatestBlockHash, buf[2741757:2741789]...)
b.LatestBlockHash = append(b.LatestBlockHash, buf[2741297:2741329]...)
// Field (42) 'LatestWithdrawalsRoot'
if cap(b.LatestWithdrawalsRoot) == 0 {
b.LatestWithdrawalsRoot = make([]byte, 0, len(buf[2741789:2741821]))
// Offset (44) 'PayloadExpectedWithdrawals'
if o44 = ssz.ReadOffset(buf[2741329:2741333]); o44 > size || o42 > o44 {
return ssz.ErrOffset
}
b.LatestWithdrawalsRoot = append(b.LatestWithdrawalsRoot, buf[2741789:2741821]...)
// Field (7) 'HistoricalRoots'
{
@@ -2209,7 +2271,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
// Field (36) 'PendingConsolidations'
{
buf = tail[o36:o40]
buf = tail[o36:o38]
num, err := ssz.DivideInt2(len(buf), 16, 262144)
if err != nil {
return err
@@ -2225,10 +2287,28 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
}
}
// Field (40) 'BuilderPendingWithdrawals'
// Field (38) 'Builders'
{
buf = tail[o40:]
num, err := ssz.DivideInt2(len(buf), 44, 1048576)
buf = tail[o38:o42]
num, err := ssz.DivideInt2(len(buf), 93, 1099511627776)
if err != nil {
return err
}
b.Builders = make([]*Builder, num)
for ii := 0; ii < num; ii++ {
if b.Builders[ii] == nil {
b.Builders[ii] = new(Builder)
}
if err = b.Builders[ii].UnmarshalSSZ(buf[ii*93 : (ii+1)*93]); err != nil {
return err
}
}
}
// Field (42) 'BuilderPendingWithdrawals'
{
buf = tail[o42:o44]
num, err := ssz.DivideInt2(len(buf), 36, 1048576)
if err != nil {
return err
}
@@ -2237,7 +2317,25 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
if b.BuilderPendingWithdrawals[ii] == nil {
b.BuilderPendingWithdrawals[ii] = new(BuilderPendingWithdrawal)
}
if err = b.BuilderPendingWithdrawals[ii].UnmarshalSSZ(buf[ii*44 : (ii+1)*44]); err != nil {
if err = b.BuilderPendingWithdrawals[ii].UnmarshalSSZ(buf[ii*36 : (ii+1)*36]); err != nil {
return err
}
}
}
// Field (44) 'PayloadExpectedWithdrawals'
{
buf = tail[o44:]
num, err := ssz.DivideInt2(len(buf), 44, 16)
if err != nil {
return err
}
b.PayloadExpectedWithdrawals = make([]*v1.Withdrawal, num)
for ii := 0; ii < num; ii++ {
if b.PayloadExpectedWithdrawals[ii] == nil {
b.PayloadExpectedWithdrawals[ii] = new(v1.Withdrawal)
}
if err = b.PayloadExpectedWithdrawals[ii].UnmarshalSSZ(buf[ii*44 : (ii+1)*44]); err != nil {
return err
}
}
@@ -2247,7 +2345,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
// SizeSSZ returns the ssz encoded size in bytes for the BeaconStateGloas object
func (b *BeaconStateGloas) SizeSSZ() (size int) {
size = 2741821
size = 2741333
// Field (7) 'HistoricalRoots'
size += len(b.HistoricalRoots) * 32
@@ -2282,8 +2380,14 @@ func (b *BeaconStateGloas) SizeSSZ() (size int) {
// Field (36) 'PendingConsolidations'
size += len(b.PendingConsolidations) * 16
// Field (40) 'BuilderPendingWithdrawals'
size += len(b.BuilderPendingWithdrawals) * 44
// Field (38) 'Builders'
size += len(b.Builders) * 93
// Field (42) 'BuilderPendingWithdrawals'
size += len(b.BuilderPendingWithdrawals) * 36
// Field (44) 'PayloadExpectedWithdrawals'
size += len(b.PayloadExpectedWithdrawals) * 44
return
}
@@ -2637,14 +2741,33 @@ func (b *BeaconStateGloas) HashTreeRootWith(hh *ssz.Hasher) (err error) {
hh.Merkleize(subIndx)
}
// Field (38) 'ExecutionPayloadAvailability'
// Field (38) 'Builders'
{
subIndx := hh.Index()
num := uint64(len(b.Builders))
if num > 1099511627776 {
err = ssz.ErrIncorrectListSize
return
}
for _, elem := range b.Builders {
if err = elem.HashTreeRootWith(hh); err != nil {
return
}
}
hh.MerkleizeWithMixin(subIndx, num, 1099511627776)
}
// Field (39) 'NextWithdrawalBuilderIndex'
hh.PutUint64(uint64(b.NextWithdrawalBuilderIndex))
// Field (40) 'ExecutionPayloadAvailability'
if size := len(b.ExecutionPayloadAvailability); size != 1024 {
err = ssz.ErrBytesLengthFn("--.ExecutionPayloadAvailability", size, 1024)
return
}
hh.PutBytes(b.ExecutionPayloadAvailability)
// Field (39) 'BuilderPendingPayments'
// Field (41) 'BuilderPendingPayments'
{
subIndx := hh.Index()
for _, elem := range b.BuilderPendingPayments {
@@ -2655,7 +2778,7 @@ func (b *BeaconStateGloas) HashTreeRootWith(hh *ssz.Hasher) (err error) {
hh.Merkleize(subIndx)
}
// Field (40) 'BuilderPendingWithdrawals'
// Field (42) 'BuilderPendingWithdrawals'
{
subIndx := hh.Index()
num := uint64(len(b.BuilderPendingWithdrawals))
@@ -2671,19 +2794,28 @@ func (b *BeaconStateGloas) HashTreeRootWith(hh *ssz.Hasher) (err error) {
hh.MerkleizeWithMixin(subIndx, num, 1048576)
}
// Field (41) 'LatestBlockHash'
// Field (43) 'LatestBlockHash'
if size := len(b.LatestBlockHash); size != 32 {
err = ssz.ErrBytesLengthFn("--.LatestBlockHash", size, 32)
return
}
hh.PutBytes(b.LatestBlockHash)
// Field (42) 'LatestWithdrawalsRoot'
if size := len(b.LatestWithdrawalsRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.LatestWithdrawalsRoot", size, 32)
return
// Field (44) 'PayloadExpectedWithdrawals'
{
subIndx := hh.Index()
num := uint64(len(b.PayloadExpectedWithdrawals))
if num > 16 {
err = ssz.ErrIncorrectListSize
return
}
for _, elem := range b.PayloadExpectedWithdrawals {
if err = elem.HashTreeRootWith(hh); err != nil {
return
}
}
hh.MerkleizeWithMixin(subIndx, num, 16)
}
hh.PutBytes(b.LatestWithdrawalsRoot)
hh.Merkleize(indx)
return
@@ -2716,7 +2848,7 @@ func (b *BuilderPendingPayment) MarshalSSZTo(buf []byte) (dst []byte, err error)
func (b *BuilderPendingPayment) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 52 {
if size != 44 {
return ssz.ErrSize
}
@@ -2727,7 +2859,7 @@ func (b *BuilderPendingPayment) UnmarshalSSZ(buf []byte) error {
if b.Withdrawal == nil {
b.Withdrawal = new(BuilderPendingWithdrawal)
}
if err = b.Withdrawal.UnmarshalSSZ(buf[8:52]); err != nil {
if err = b.Withdrawal.UnmarshalSSZ(buf[8:44]); err != nil {
return err
}
@@ -2736,7 +2868,7 @@ func (b *BuilderPendingPayment) UnmarshalSSZ(buf []byte) error {
// SizeSSZ returns the ssz encoded size in bytes for the BuilderPendingPayment object
func (b *BuilderPendingPayment) SizeSSZ() (size int) {
size = 52
size = 44
return
}
@@ -2783,9 +2915,6 @@ func (b *BuilderPendingWithdrawal) MarshalSSZTo(buf []byte) (dst []byte, err err
// Field (2) 'BuilderIndex'
dst = ssz.MarshalUint64(dst, uint64(b.BuilderIndex))
// Field (3) 'WithdrawableEpoch'
dst = ssz.MarshalUint64(dst, uint64(b.WithdrawableEpoch))
return
}
@@ -2793,7 +2922,7 @@ func (b *BuilderPendingWithdrawal) MarshalSSZTo(buf []byte) (dst []byte, err err
func (b *BuilderPendingWithdrawal) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 44 {
if size != 36 {
return ssz.ErrSize
}
@@ -2807,17 +2936,14 @@ func (b *BuilderPendingWithdrawal) UnmarshalSSZ(buf []byte) error {
b.Amount = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[20:28]))
// Field (2) 'BuilderIndex'
b.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[28:36]))
// Field (3) 'WithdrawableEpoch'
b.WithdrawableEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[36:44]))
b.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[28:36]))
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the BuilderPendingWithdrawal object
func (b *BuilderPendingWithdrawal) SizeSSZ() (size int) {
size = 44
size = 36
return
}
@@ -2843,9 +2969,6 @@ func (b *BuilderPendingWithdrawal) HashTreeRootWith(hh *ssz.Hasher) (err error)
// Field (2) 'BuilderIndex'
hh.PutUint64(uint64(b.BuilderIndex))
// Field (3) 'WithdrawableEpoch'
hh.PutUint64(uint64(b.WithdrawableEpoch))
hh.Merkleize(indx)
return
}
@@ -3218,7 +3341,7 @@ func (e *ExecutionPayloadEnvelope) UnmarshalSSZ(buf []byte) error {
}
// Field (2) 'BuilderIndex'
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[8:16]))
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[8:16]))
// Field (3) 'BeaconBlockRoot'
if cap(e.BeaconBlockRoot) == 0 {
@@ -3472,3 +3595,132 @@ func (s *SignedExecutionPayloadEnvelope) HashTreeRootWith(hh *ssz.Hasher) (err e
hh.Merkleize(indx)
return
}
// MarshalSSZ ssz marshals the Builder object
func (b *Builder) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(b)
}
// MarshalSSZTo ssz marshals the Builder object to a target array
func (b *Builder) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
// Field (0) 'Pubkey'
if size := len(b.Pubkey); size != 48 {
err = ssz.ErrBytesLengthFn("--.Pubkey", size, 48)
return
}
dst = append(dst, b.Pubkey...)
// Field (1) 'Version'
if size := len(b.Version); size != 1 {
err = ssz.ErrBytesLengthFn("--.Version", size, 1)
return
}
dst = append(dst, b.Version...)
// Field (2) 'ExecutionAddress'
if size := len(b.ExecutionAddress); size != 20 {
err = ssz.ErrBytesLengthFn("--.ExecutionAddress", size, 20)
return
}
dst = append(dst, b.ExecutionAddress...)
// Field (3) 'Balance'
dst = ssz.MarshalUint64(dst, uint64(b.Balance))
// Field (4) 'DepositEpoch'
dst = ssz.MarshalUint64(dst, uint64(b.DepositEpoch))
// Field (5) 'WithdrawableEpoch'
dst = ssz.MarshalUint64(dst, uint64(b.WithdrawableEpoch))
return
}
// UnmarshalSSZ ssz unmarshals the Builder object
func (b *Builder) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 93 {
return ssz.ErrSize
}
// Field (0) 'Pubkey'
if cap(b.Pubkey) == 0 {
b.Pubkey = make([]byte, 0, len(buf[0:48]))
}
b.Pubkey = append(b.Pubkey, buf[0:48]...)
// Field (1) 'Version'
if cap(b.Version) == 0 {
b.Version = make([]byte, 0, len(buf[48:49]))
}
b.Version = append(b.Version, buf[48:49]...)
// Field (2) 'ExecutionAddress'
if cap(b.ExecutionAddress) == 0 {
b.ExecutionAddress = make([]byte, 0, len(buf[49:69]))
}
b.ExecutionAddress = append(b.ExecutionAddress, buf[49:69]...)
// Field (3) 'Balance'
b.Balance = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[69:77]))
// Field (4) 'DepositEpoch'
b.DepositEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[77:85]))
// Field (5) 'WithdrawableEpoch'
b.WithdrawableEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[85:93]))
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the Builder object
func (b *Builder) SizeSSZ() (size int) {
size = 93
return
}
// HashTreeRoot ssz hashes the Builder object
func (b *Builder) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(b)
}
// HashTreeRootWith ssz hashes the Builder object with a hasher
func (b *Builder) HashTreeRootWith(hh *ssz.Hasher) (err error) {
indx := hh.Index()
// Field (0) 'Pubkey'
if size := len(b.Pubkey); size != 48 {
err = ssz.ErrBytesLengthFn("--.Pubkey", size, 48)
return
}
hh.PutBytes(b.Pubkey)
// Field (1) 'Version'
if size := len(b.Version); size != 1 {
err = ssz.ErrBytesLengthFn("--.Version", size, 1)
return
}
hh.PutBytes(b.Version)
// Field (2) 'ExecutionAddress'
if size := len(b.ExecutionAddress); size != 20 {
err = ssz.ErrBytesLengthFn("--.ExecutionAddress", size, 20)
return
}
hh.PutBytes(b.ExecutionAddress)
// Field (3) 'Balance'
hh.PutUint64(uint64(b.Balance))
// Field (4) 'DepositEpoch'
hh.PutUint64(uint64(b.DepositEpoch))
// Field (5) 'WithdrawableEpoch'
hh.PutUint64(uint64(b.WithdrawableEpoch))
hh.Merkleize(indx)
return
}
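As a sanity check on the size constants updated above: the new fixed sizes follow from the field layouts in the Gloas proto definitions earlier in this diff. A sketch of the arithmetic (illustrative Go constants, not generated code):

const (
	// ExecutionPayloadBid gains prev_randao (32 bytes) and execution_payment (8 bytes): 180 -> 220.
	executionPayloadBidSize = 32 + 32 + 32 + 32 + 20 + 8 + 8 + 8 + 8 + 8 + 32 // = 220

	// SignedExecutionPayloadBid adds a 96-byte signature: 276 -> 316.
	signedExecutionPayloadBidSize = executionPayloadBidSize + 96 // = 316

	// BuilderPendingWithdrawal drops withdrawable_epoch (8 bytes): 44 -> 36.
	builderPendingWithdrawalSize = 20 + 8 + 8 // = 36

	// BuilderPendingPayment is an 8-byte weight plus the embedded withdrawal: 52 -> 44.
	builderPendingPaymentSize = 8 + builderPendingWithdrawalSize // = 44
)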

View File

@@ -26,10 +26,12 @@ func TestExecutionPayloadBid_Copy(t *testing.T) {
ParentBlockHash: []byte("parent_block_hash_32_bytes_long!"),
ParentBlockRoot: []byte("parent_block_root_32_bytes_long!"),
BlockHash: []byte("block_hash_32_bytes_are_long!!"),
PrevRandao: []byte("prev_randao_32_bytes_long!!!"),
FeeRecipient: []byte("fee_recipient_20_byt"),
GasLimit: 15000000,
BuilderIndex: primitives.ValidatorIndex(42),
BuilderIndex: primitives.BuilderIndex(42),
Slot: primitives.Slot(12345),
ExecutionPayment: 5645654,
Value: 1000000000000000000,
BlobKzgCommitmentsRoot: []byte("blob_kzg_commitments_32_bytes!!"),
},
@@ -76,10 +78,9 @@ func TestBuilderPendingWithdrawal_Copy(t *testing.T) {
{
name: "fully populated withdrawal",
withdrawal: &BuilderPendingWithdrawal{
FeeRecipient: []byte("fee_recipient_20_byt"),
Amount: primitives.Gwei(5000000000),
BuilderIndex: primitives.ValidatorIndex(123),
WithdrawableEpoch: primitives.Epoch(456),
FeeRecipient: []byte("fee_recipient_20_byt"),
Amount: primitives.Gwei(5000000000),
BuilderIndex: primitives.BuilderIndex(123),
},
},
}
@@ -134,10 +135,9 @@ func TestBuilderPendingPayment_Copy(t *testing.T) {
payment: &BuilderPendingPayment{
Weight: primitives.Gwei(2500),
Withdrawal: &BuilderPendingWithdrawal{
FeeRecipient: []byte("test_recipient_20byt"),
Amount: primitives.Gwei(10000),
BuilderIndex: primitives.ValidatorIndex(789),
WithdrawableEpoch: primitives.Epoch(999),
FeeRecipient: []byte("test_recipient_20byt"),
Amount: primitives.Gwei(10000),
BuilderIndex: primitives.BuilderIndex(789),
},
},
},
@@ -166,3 +166,60 @@ func TestBuilderPendingPayment_Copy(t *testing.T) {
})
}
}
func TestCopyBuilder(t *testing.T) {
tests := []struct {
name string
builder *Builder
}{
{
name: "nil builder",
builder: nil,
},
{
name: "empty builder",
builder: &Builder{},
},
{
name: "fully populated builder",
builder: &Builder{
Pubkey: []byte("pubkey_48_bytes_long_pubkey_48_bytes_long_pubkey_48!"),
Version: []byte{'a'},
ExecutionAddress: []byte("execution_address_20"),
Balance: primitives.Gwei(12345),
DepositEpoch: primitives.Epoch(10),
WithdrawableEpoch: primitives.Epoch(20),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
copied := CopyBuilder(tt.builder)
if tt.builder == nil {
if copied != nil {
t.Errorf("CopyBuilder() of nil should return nil, got %v", copied)
}
return
}
if !reflect.DeepEqual(tt.builder, copied) {
t.Errorf("CopyBuilder() = %v, want %v", copied, tt.builder)
}
if len(tt.builder.Pubkey) > 0 {
tt.builder.Pubkey[0] = 0xFF
if copied.Pubkey[0] == 0xFF {
t.Error("CopyBuilder() did not create deep copy of Pubkey")
}
}
if len(tt.builder.ExecutionAddress) > 0 {
tt.builder.ExecutionAddress[0] = 0xFF
if copied.ExecutionAddress[0] == 0xFF {
t.Error("CopyBuilder() did not create deep copy of ExecutionAddress")
}
}
})
}
}

View File

@@ -48,6 +48,7 @@ mainnet = {
"payload_attestation.size": "4", # Gloas: MAX_PAYLOAD_ATTESTATIONS defined in block body
"execution_payload_availability.size": "1024", # Gloas: SLOTS_PER_HISTORICAL_ROOT
"builder_pending_payments.size": "64", # Gloas: vector length (2 * SLOTS_PER_EPOCH)
"builder_registry_limit": "1099511627776", # Gloas: BUILDER_REGISTRY_LIMIT (same for mainnet/minimal)
}
minimal = {
@@ -91,7 +92,8 @@ minimal = {
"ptc.type": "github.com/OffchainLabs/go-bitfield.Bitvector2",
"payload_attestation.size": "4", # Gloas: MAX_PAYLOAD_ATTESTATIONS defined in block body
"execution_payload_availability.size": "8", # Gloas: SLOTS_PER_HISTORICAL_ROOT
"builder_pending_payments.size": "16" # Gloas: vector length (2 * SLOTS_PER_EPOCH)
"builder_pending_payments.size": "16", # Gloas: vector length (2 * SLOTS_PER_EPOCH)
"builder_registry_limit": "1099511627776", # Gloas: BUILDER_REGISTRY_LIMIT (same for mainnet/minimal)
}
###### Rules definitions #######
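The builder_registry_limit value 1099511627776 added to both presets is exactly 2**40, the spec's BUILDER_REGISTRY_LIMIT, which is why mainnet and minimal share the same value. A one-line sanity check (Go, illustrative only):

const builderRegistryLimit = 1 << 40 // 1099511627776 == BUILDER_REGISTRY_LIMIT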

Some files were not shown because too many files have changed in this diff