Compare commits

..

2 Commits

Author            SHA1        Message                                  Date
Preston Van Loon  13233e05b5  Changelog fragment                       2025-05-10 01:05:51 -05:00
Preston Van Loon  1d276fcc09  Enable gzip compression in gRPC server   2025-05-10 01:04:50 -05:00
1483 changed files with 13780 additions and 32901 deletions
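Regarding the second commit above: in grpc-go, server-side gzip support is typically enabled by blank-importing the gzip compressor package, which registers it with the encoding registry. A minimal sketch of that pattern (illustrative wiring, not necessarily how Prysm's server is assembled):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	// The blank import registers the gzip compressor; the server then
	// decompresses gzip requests and gzips responses whenever the client
	// advertises and requests that encoding.
	_ "google.golang.org/grpc/encoding/gzip"
)

func main() {
	lis, err := net.Listen("tcp", "localhost:4000")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	// Register services here before serving.
	log.Fatal(srv.Serve(lis))
}
```

Clients opt in per call, for example with the `grpc.UseCompressor(gzip.Name)` call option; clients that never request gzip are unaffected.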

View File

@@ -18,7 +18,7 @@ jobs:
uses: dsaltares/fetch-gh-release-asset@aa2ab1243d6e0d5b405b973c89fa4d06a2d0fff7 # 1.1.2
with:
repo: OffchainLabs/unclog
version: "tags/v0.1.5"
version: "tags/v0.1.3"
file: "unclog"
- name: Get new changelog files

View File

@@ -4,151 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05
This release includes more work on PeerDAS and light client support, plus a few bug fixes:
- Blob cache size now correctly set at startup.
- A fix for slashing protection history exports where the validator database was in a nested folder.
- Corrected behavior of the API call for state committees with an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` for Prysm docker images.
In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.
### Added
- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added /bin/sh symlink to docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecars verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- Random forkchoice spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)
### Changed
- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - Exit early when no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)
### Fixed
- Fix cyclical dependency issues when using the testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set seen blob cache size correctly based on current slot time at startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/`. (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355); see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)
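A sketch of the invariant behind the committees fix above (names are hypothetical, not Prysm's handler code): a slot `s` belongs to epoch `s / SLOTS_PER_EPOCH`, so a request pinning both parameters must be rejected with `400` when they disagree.

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet preset

// validateCommitteeQuery is a hypothetical helper: if a committees request
// specifies both an epoch and a slot, the slot must fall inside that epoch.
func validateCommitteeQuery(epoch, slot uint64) error {
	if slot/slotsPerEpoch != epoch {
		return fmt.Errorf("slot %d is not in epoch %d: reply with HTTP 400", slot, epoch)
	}
	return nil
}

func main() {
	fmt.Println(validateCommitteeQuery(2, 65)) // <nil>, slot 65 is in epoch 2
	fmt.Println(validateCommitteeQuery(2, 96)) // error, slot 96 is in epoch 3
}
```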
## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21
This release has important bugfixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:
- Fixed pending consolidations endpoint to return the correct response.
- Fixed incorrect field name from pending partial withdrawals response.
- Fixed attester slashing to return an empty array instead of nil/null (see the sketch below this list).
- Fixed validator participation and active set changes endpoints to accept a `{state_id}` parameter.
Other improvements include:
- Disabled deposit log processing routine for Electra and beyond.
Operators are encouraged to update at their own convenience.
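The empty-array fix above boils down to a Go `encoding/json` detail: a nil slice marshals to `null`, while an initialized empty slice marshals to `[]`, and the Beacon API spec calls for the latter. A quick illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlashings []string    // nil slice
	emptySlashings := []string{} // empty, non-nil slice

	a, _ := json.Marshal(nilSlashings)
	b, _ := json.Marshal(emptySlashings)
	fmt.Println(string(a)) // null
	fmt.Println(string(b)) // []
}
```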
### Added
- SSZ static spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15279)
- Finality and Merkle proof spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15286)
- Sanity and rewards spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15285)
### Changed
- Added more tracing spans to various helpers related to GetDuties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15271)
- Disable log processing after deposit requests are activated. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15274)
### Fixed
- Fixed the wrong handler being used for the get pending consolidations endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15290)
- Fixed `/eth/v2/beacon/pool/attester_slashings` to return an empty array instead of nil when there are no slashings. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15291)
- Fix Prysm endpoints `/prysm/v1/validators/{state_id}/participation` and `/prysm/v1/validators/{state_id}/active_set_changes` to properly handle `{state_id}`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15245)
## [v6.0.2](https://github.com/prysmaticlabs/prysm/compare/v6.0.1...v6.0.2) - 2025-05-12
This is a patch release to fix a few important bugs. Most importantly, we have adjusted the index limit for field tries in the beacon state to better support Pectra states. This should alleviate memory issues clients have been seeing since the Pectra mainnet fork.
### Added
- Enable light client gossip for optimistic and finality updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15220)
- Implement PeerDAS core functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15192)
- Force duties start on received blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15251)
- Added additional tracing spans for the GetDuties routine. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15258)
### Changed
- Use otelgrpc for tracing the gRPC server and client. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15237)
- Upgraded ristretto to v2.2.0 for RISC-V support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15170)
- Update spec to v1.5.0 compliance, which changes minimal execution requests size. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15256)
- Increase indices limit in field trie rebuilding. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15252)
- Increase sepolia gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15253)
### Fixed
- Fixed the wrong field name being returned for pending partial withdrawals in the state JSON representation, as described in https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/beacon-chain.md#pendingpartialwithdrawal; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15254)
- Fixed a gocognit lint violation on the propose block REST path. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15147)
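For the field-name fix above, the linked Electra spec defines `PendingPartialWithdrawal` with `validator_index`, `amount`, and `withdrawable_epoch`. A sketch of a struct matching that JSON shape (an illustrative type, not Prysm's actual one):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PendingPartialWithdrawal mirrors the Electra container's field names as
// they should appear in the beacon state's JSON representation.
type PendingPartialWithdrawal struct {
	ValidatorIndex    string `json:"validator_index"`
	Amount            string `json:"amount"` // Gwei, serialized as a decimal string
	WithdrawableEpoch string `json:"withdrawable_epoch"`
}

func main() {
	b, _ := json.Marshal(PendingPartialWithdrawal{
		ValidatorIndex:    "1",
		Amount:            "32000000000",
		WithdrawableEpoch: "100",
	})
	fmt.Println(string(b))
}
```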
## [v6.0.1](https://github.com/prysmaticlabs/prysm/compare/v6.0.0...v6.0.1) - 2025-05-02
This release fixes two bugs related to the `payload_attributes` [event stream](https://ethereum.github.io/beacon-APIs/#/Events/eventstream). If you are using or planning to use this endpoint, upgrading to version 6.0.1 is mandatory.
Also, a reminder: like other Beacon API endpoints, when a node is syncing, it may return historical data as `finalized` or `head`. Until the node is fully synced to the head of the chain, you should avoid using this data, depending on your application's needs.
We currently recommend against using the `--enable-beacon-rest-api` flag on Mainnet. As you may have noticed, we put a deprecation notice on our gRPC code, in particular on gRPC-related flags, because we eventually want to remove gRPC and make REST HTTP the standard way for the validator client to communicate with the beacon node. That said, the REST option is still unstable and therefore marked as experimental in the flag's description, so we encourage everyone to keep using gRPC, which is currently the default. It is fine to test the REST option on testnets, but using it on Mainnet can lead to missed attestations and even missed blocks.
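For consumers of the `payload_attributes` stream discussed above, here is a minimal sketch of subscribing via the Beacon API's server-sent-events endpoint (the host/port and the bare-bones SSE parsing are assumptions; a real consumer should also respect the syncing caveat above):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// /eth/v1/events streams server-sent events; topics are comma-separated.
	resp, err := http.Get("http://localhost:3500/eth/v1/events?topics=payload_attributes")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // SSE data lines can exceed the default buffer
	for sc.Scan() {
		// Frames arrive as "event: <topic>" and "data: <json>" line pairs.
		if line := sc.Text(); strings.HasPrefix(line, "data:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
}
```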
**Updating to v6.0.0 or later is required for Pectra Mainnet!**
This patch release contains a few important fixes on top of v6.0.0. Updating to this release is encouraged.
### Added
- `UpgradeToFulu` spectests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15190)
- PeerDAS related KZG wrappers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15186)
- Add light client p2p broadcaster functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15175)
- Added immediate broadcasting of proposer slashings when equivocating blocks are detected during block processing (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/14693)
- Added two new errors: `HeadStateErr` and `ErrCouldNotVerifyBlockHeader`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14693)
- Implement pending consolidations Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15219)
- PeerDAS: Add needed proto files and corresponding generated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15187)
- Add light client p2p validator and subscriber functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15214)
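On the proposer-slashing item above: equivocation means two distinct signed block headers from the same proposer for the same slot, which is exactly the evidence a proposer slashing carries. A simplified sketch of the predicate (stand-in types, not Prysm's structures; the real check also verifies both signatures):

```go
package main

import "fmt"

// BeaconBlockHeader is a simplified stand-in for the consensus type.
type BeaconBlockHeader struct {
	Slot          uint64
	ProposerIndex uint64
	BodyRoot      [32]byte
}

// isEquivocation reports whether two headers are slashable evidence:
// same slot and proposer, but conflicting contents.
func isEquivocation(a, b BeaconBlockHeader) bool {
	return a.Slot == b.Slot &&
		a.ProposerIndex == b.ProposerIndex &&
		a != b // any differing field means two conflicting blocks
}

func main() {
	h1 := BeaconBlockHeader{Slot: 100, ProposerIndex: 7, BodyRoot: [32]byte{1}}
	h2 := BeaconBlockHeader{Slot: 100, ProposerIndex: 7, BodyRoot: [32]byte{2}}
	fmt.Println(isEquivocation(h1, h2)) // true: broadcast a proposer slashing
}
```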
### Changed
- Refactored internal function `reValidateSubscriptions` to `pruneSubscriptions` in `beacon-chain/sync/subscriber.go` for improved clarity, addressing a TODO comment. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15160)
- Updated geth to v1.15.9. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15216)
- Removed the slot from `UpdateDuties`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15223)
- Update Hoodi bootnodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15240)
### Fixed
- Avoid nondeterministic default fork value when generating genesis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15151)
- `UpgradeToFulu`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15190)
- Fixed a balance underflow in a leaking edge case with expected withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15191)
- Fixes our generated ssz files to have the correct package name. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15199)
- Fixes our blob sidecar by-root request lists for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15209)
- Ensure that the `payload_attributes` event has a consistent view of the head state by passing the head block in the event and using stategen to retrieve the corresponding state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15213)
- Pass parent context to update duties when dependent roots change. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15221)
- Process slots across epoch for payload attribute event. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15228)
- Extend the payload attribute computation deadline to the beginning of the proposal slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15230)
### Security
- Fix CVE-2025-22869. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-22870. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-22872. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-30204. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
## [v6.0.0](https://github.com/prysmaticlabs/prysm/compare/v5.3.2...v6.0.0) - 2025-04-21
This release introduces Mainnet support for the upcoming Electra + Prague (Pectra) fork. The fork is scheduled for mainnet epoch 364032 (May 7, 2025, 10:05:11 UTC). You MUST update Prysm Beacon Node, Prysm Validator Client, and your execution layer client to the Pectra ready release prior to the fork to stay on the correct chain.
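The quoted fork time follows from mainnet constants (32 slots per epoch, 12 seconds per slot, genesis at Unix time 1606824023), as this quick check shows:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		genesisUnix    = 1606824023 // mainnet genesis: 2020-12-01 12:00:23 UTC
		slotsPerEpoch  = 32
		secondsPerSlot = 12
		electraEpoch   = 364032
	)
	slot := electraEpoch * slotsPerEpoch                 // 11,649,024
	forkTime := int64(genesisUnix + slot*secondsPerSlot) // start of the epoch
	fmt.Println(time.Unix(forkTime, 0).UTC())            // 2025-05-07 10:05:11 +0000 UTC
}
```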

View File

@@ -4,7 +4,7 @@ Note: The latest and most up-to-date documentation can be found on our [docs por
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues) in the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/prysm); drop us a line there if you want to get more involved or have any questions on our implementation!
You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues) in the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/CTYGPUJ); drop us a line there if you want to get more involved or have any questions on our implementation!
> [!IMPORTANT]
> Please, **do not send pull requests for trivial changes**, such as typos, these will be rejected. These types of pull requests incur a cost to reviewers and do not provide much value to the project. If you are unsure, please open an issue first to discuss the change.

View File

@@ -6,7 +6,7 @@
[![Go Report Card](https://goreportcard.com/badge/github.com/OffchainLabs/prysm)](https://goreportcard.com/report/github.com/OffchainLabs/prysm)
[![Consensus_Spec_Version 1.4.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.4.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.4.0)
[![Execution_API_Version 1.0.0-beta.2](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.2-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysm)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/OffchainLabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/OffchainLabs/prysm/badge)](https://www.gitpoap.io/gh/OffchainLabs/prysm)
</div>
@@ -25,7 +25,7 @@ See the [Changelog](https://github.com/OffchainLabs/prysm/releases) for details
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the **[official documentation portal](https://docs.prylabs.network)**.
💬 **Need help?** Join our **[Discord Community](https://discord.gg/prysm)** for support.
💬 **Need help?** Join our **[Discord Community](https://discord.gg/OffchainLabs)** for support.
---

View File

@@ -1,7 +1,7 @@
workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
name = "rules_pkg",
@@ -16,6 +16,8 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "toolchains_protoc",
sha256 = "abb1540f8a9e045422730670ebb2f25b41fa56ca5a7cf795175a110a0a68f4ad",
@@ -253,18 +255,56 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.1"
consensus_spec_version = "v1.5.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
bls_test_version = "v0.1.1"
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-o4t9p3R+fQHF4KOykGmwlG3zDw5wUdVWprkzId8aIsk=",
"minimal": "sha256-sU7ToI8t3MR8x0vVjC8ERmAHZDWpEmnAC9FWIpHi5x4=",
"mainnet": "sha256-YKS4wngg0LgI9Upp4MYJ77aG+8+e/G4YeqEIlp06LZw=",
},
version = consensus_spec_version,
http_archive(
name = "consensus_spec_tests_general",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.ssz_snappy",
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-JljxS/if/t0qvGWcf5CgsX+72fj90yGTg/uEgC56y7U=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
http_archive(
name = "consensus_spec_tests_minimal",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.ssz_snappy",
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NRba2h4zqb2LAXyDPglHTtkT4gVyuwpY708XmwXKXV8=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
http_archive(
name = "consensus_spec_tests_mainnet",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.ssz_snappy",
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-hpbtKUbc3NHtVcUPk/Zm+Hn57G2ijI9qvXJwl9hc/tM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -278,13 +318,11 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Nv4TEuEJPQIM4E6T9J0FOITsmappmXZjGtlhe1HEXnU=",
integrity = "sha256-Wy3YcJxoXiKQwrGgJecrtjtdokc4X/VUNBmyQXJf0Oc=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
bls_test_version = "v0.1.1"
http_archive(
name = "bls_spec_tests",
build_file_content = """

View File

@@ -1,6 +1,7 @@
package health
import (
"context"
"sync"
"testing"
@@ -22,7 +23,7 @@ func TestNodeHealth_IsHealthy(t *testing.T) {
isHealthy: &tt.isHealthy,
healthChan: make(chan bool, 1),
}
if got := n.IsHealthy(t.Context()); got != tt.want {
if got := n.IsHealthy(context.Background()); got != tt.want {
t.Errorf("IsHealthy() = %v, want %v", got, tt.want)
}
})
@@ -53,7 +54,7 @@ func TestNodeHealth_UpdateNodeHealth(t *testing.T) {
healthChan: make(chan bool, 1),
}
s := n.CheckHealth(t.Context())
s := n.CheckHealth(context.Background())
// Check if health status was updated
if s != tt.newStatus {
t.Errorf("UpdateNodeHealth() failed to update isHealthy from %v to %v", tt.initial, tt.newStatus)
@@ -92,9 +93,9 @@ func TestNodeHealth_Concurrency(t *testing.T) {
go func() {
defer wg.Done()
client.EXPECT().IsHealthy(gomock.Any()).Return(false).Times(1)
n.CheckHealth(t.Context())
n.CheckHealth(context.Background())
client.EXPECT().IsHealthy(gomock.Any()).Return(true).Times(1)
n.CheckHealth(t.Context())
n.CheckHealth(context.Background())
}()
}
@@ -102,7 +103,7 @@ func TestNodeHealth_Concurrency(t *testing.T) {
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
_ = n.IsHealthy(t.Context()) // Just read the value
_ = n.IsHealthy(context.Background()) // Just read the value
}()
}

View File

@@ -13,6 +13,7 @@ go_library(
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -27,6 +28,7 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",

View File

@@ -241,7 +241,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
return nil, errors.Wrap(err, "error getting header from builder server")
}
bid, err := c.parseHeaderResponse(data, header, slot)
bid, err := c.parseHeaderResponse(data, header)
if err != nil {
return nil, errors.Wrapf(
err,
@@ -254,7 +254,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
return bid, nil
}
func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primitives.Slot) (SignedBid, error) {
func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid, error) {
var versionHeader string
if c.sszEnabled || header.Get(api.VersionHeader) != "" {
versionHeader = header.Get(api.VersionHeader)
@@ -276,7 +276,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primi
}
if ver >= version.Electra {
return c.parseHeaderElectra(data, slot)
return c.parseHeaderElectra(data)
}
if ver >= version.Deneb {
return c.parseHeaderDeneb(data)
@@ -291,7 +291,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primi
return nil, fmt.Errorf("unsupported header version %s", versionHeader)
}
func (c *Client) parseHeaderElectra(data []byte, slot primitives.Slot) (SignedBid, error) {
func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBidElectra{}
if err := sb.UnmarshalSSZ(data); err != nil {
@@ -303,7 +303,7 @@ func (c *Client) parseHeaderElectra(data []byte, slot primitives.Slot) (SignedBi
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseElectra JSON")
}
p, err := hr.ToProto(slot)
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponseElectra to proto")
}

View File

@@ -2,6 +2,7 @@ package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
@@ -30,7 +31,7 @@ func (fn roundtrip) RoundTrip(r *http.Request) (*http.Response, error) {
}
func TestClient_Status(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
statusPath := "/eth/v1/builder/status"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -83,7 +84,7 @@ func TestClient_Status(t *testing.T) {
}
func TestClient_RegisterValidator(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]`
expectedPath := "/eth/v1/builder/validators"
t.Run("JSON success", func(t *testing.T) {
@@ -167,7 +168,7 @@ func TestClient_RegisterValidator(t *testing.T) {
}
func TestClient_GetHeader(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
var slot primitives.Slot = 23
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
@@ -531,7 +532,7 @@ func TestClient_GetHeader(t *testing.T) {
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseElectra{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
pro, err := epr.ToProto(100)
pro, err := epr.ToProto()
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
@@ -600,7 +601,7 @@ func TestClient_GetHeader(t *testing.T) {
}
func TestSubmitBlindedBlock(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
t.Run("bellatrix", func(t *testing.T) {
hc := &http.Client{
@@ -639,9 +640,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
ep := &structs.ExecutionPayload{}
ep := &ExecutionPayload{}
require.NoError(t, json.Unmarshal(epr.Data, ep))
pro, err := ep.ToConsensus()
pro, err := ep.ToProto()
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
@@ -709,9 +710,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadCapella), epr))
ep := &structs.ExecutionPayloadCapella{}
ep := &ExecutionPayloadCapella{}
require.NoError(t, json.Unmarshal(epr.Data, ep))
pro, err := ep.ToConsensus()
pro, err := ep.ToProto()
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
@@ -1558,7 +1559,7 @@ func TestRequestLogger(t *testing.T) {
c, err := NewClient("localhost:3500", wo)
require.NoError(t, err)
ctx := t.Context()
ctx := context.Background()
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, getStatus, r.URL.Path)

File diff suppressed because it is too large

View File

@@ -328,72 +328,72 @@ func TestExecutionHeaderResponseUnmarshal(t *testing.T) {
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ParentHash,
actual: hexutil.Encode(hr.Data.Message.Header.ParentHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hr.Data.Message.Header.FeeRecipient,
actual: hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
name: "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.StateRoot,
actual: hexutil.Encode(hr.Data.Message.Header.StateRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ReceiptsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hr.Data.Message.Header.LogsBloom,
actual: hexutil.Encode(hr.Data.Message.Header.LogsBloom),
name: "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.PrevRandao,
actual: hexutil.Encode(hr.Data.Message.Header.PrevRandao),
name: "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
},
{
expected: "1",
actual: hr.Data.Message.Header.BlockNumber,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasLimit,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasUsed,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
},
{
expected: "1",
actual: hr.Data.Message.Header.Timestamp,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
name: "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ExtraData,
actual: hexutil.Encode(hr.Data.Message.Header.ExtraData),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: hr.Data.Message.Header.BaseFeePerGas,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.BlockHash,
actual: hexutil.Encode(hr.Data.Message.Header.BlockHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.TransactionsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
},
}
@@ -427,77 +427,77 @@ func TestExecutionHeaderResponseCapellaUnmarshal(t *testing.T) {
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ParentHash,
actual: hexutil.Encode(hr.Data.Message.Header.ParentHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hr.Data.Message.Header.FeeRecipient,
actual: hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
name: "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.StateRoot,
actual: hexutil.Encode(hr.Data.Message.Header.StateRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ReceiptsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hr.Data.Message.Header.LogsBloom,
actual: hexutil.Encode(hr.Data.Message.Header.LogsBloom),
name: "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.PrevRandao,
actual: hexutil.Encode(hr.Data.Message.Header.PrevRandao),
name: "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
},
{
expected: "1",
actual: hr.Data.Message.Header.BlockNumber,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasLimit,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
},
{
expected: "1",
actual: hr.Data.Message.Header.GasUsed,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
name: "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
},
{
expected: "1",
actual: hr.Data.Message.Header.Timestamp,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
name: "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.ExtraData,
actual: hexutil.Encode(hr.Data.Message.Header.ExtraData),
name: "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: hr.Data.Message.Header.BaseFeePerGas,
actual: fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.BlockHash,
actual: hexutil.Encode(hr.Data.Message.Header.BlockHash),
name: "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.TransactionsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hr.Data.Message.Header.WithdrawalsRoot,
actual: hexutil.Encode(hr.Data.Message.Header.WithdrawalsRoot),
name: "ExecHeaderResponse.ExecutionPayloadHeader.WithdrawalsRoot",
},
}
@@ -867,6 +867,88 @@ var testExampleExecutionPayloadDenebDifferentProofCount = fmt.Sprintf(`{
}
}`, hexutil.Encode(make([]byte, fieldparams.BlobLength)))
func TestExecutionPayloadResponseUnmarshal(t *testing.T) {
epr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
cases := []struct {
expected string
actual string
name string
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: hexutil.Encode(epr.Data.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: hexutil.Encode(epr.Data.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: fmt.Sprintf("%d", epr.Data.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: hexutil.Encode(epr.Data.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
}
for _, c := range cases {
require.Equal(t, c.expected, c.actual, fmt.Sprintf("unexpected value for field %s", c.name))
}
require.Equal(t, 1, len(epr.Data.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
}
func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
epr := &ExecPayloadResponseCapella{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadCapella), epr))
@@ -877,67 +959,67 @@ func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ParentHash,
actual: hexutil.Encode(epr.Data.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: epr.Data.FeeRecipient,
actual: hexutil.Encode(epr.Data.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.StateRoot,
actual: hexutil.Encode(epr.Data.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ReceiptsRoot,
actual: hexutil.Encode(epr.Data.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: epr.Data.LogsBloom,
actual: hexutil.Encode(epr.Data.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.PrevRandao,
actual: hexutil.Encode(epr.Data.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: epr.Data.BlockNumber,
actual: fmt.Sprintf("%d", epr.Data.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: epr.Data.GasLimit,
actual: fmt.Sprintf("%d", epr.Data.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: epr.Data.GasUsed,
actual: fmt.Sprintf("%d", epr.Data.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: epr.Data.Timestamp,
actual: fmt.Sprintf("%d", epr.Data.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExtraData,
actual: hexutil.Encode(epr.Data.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: epr.Data.BaseFeePerGas,
actual: fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.BlockHash,
actual: hexutil.Encode(epr.Data.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
}
@@ -946,14 +1028,14 @@ func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
}
require.Equal(t, 1, len(epr.Data.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, epr.Data.Transactions[0])
require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
require.Equal(t, 1, len(epr.Data.Withdrawals))
w := epr.Data.Withdrawals[0]
assert.Equal(t, "1", w.WithdrawalIndex)
assert.Equal(t, "1", w.ValidatorIndex)
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.ExecutionAddress)
assert.Equal(t, "1", w.Amount)
assert.Equal(t, uint64(1), w.Index.Uint64())
assert.Equal(t, uint64(1), w.ValidatorIndex.Uint64())
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.Address.String())
assert.Equal(t, uint64(1), w.Amount.Uint64())
}
func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
@@ -966,77 +1048,77 @@ func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
}{
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.ParentHash,
actual: hexutil.Encode(epr.Data.ExecutionPayload.ParentHash),
name: "ExecPayloadResponse.ExecutionPayload.ParentHash",
},
{
expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
actual: epr.Data.ExecutionPayload.FeeRecipient,
actual: hexutil.Encode(epr.Data.ExecutionPayload.FeeRecipient),
name: "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.StateRoot,
actual: hexutil.Encode(epr.Data.ExecutionPayload.StateRoot),
name: "ExecPayloadResponse.ExecutionPayload.StateRoot",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.ReceiptsRoot,
actual: hexutil.Encode(epr.Data.ExecutionPayload.ReceiptsRoot),
name: "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
},
{
expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
actual: epr.Data.ExecutionPayload.LogsBloom,
actual: hexutil.Encode(epr.Data.ExecutionPayload.LogsBloom),
name: "ExecPayloadResponse.ExecutionPayload.LogsBloom",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.PrevRandao,
actual: hexutil.Encode(epr.Data.ExecutionPayload.PrevRandao),
name: "ExecPayloadResponse.ExecutionPayload.PrevRandao",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.BlockNumber,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.BlockNumber),
name: "ExecPayloadResponse.ExecutionPayload.BlockNumber",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.GasLimit,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.GasLimit),
name: "ExecPayloadResponse.ExecutionPayload.GasLimit",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.GasUsed,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.GasUsed),
name: "ExecPayloadResponse.ExecutionPayload.GasUsed",
},
{
expected: "1",
actual: epr.Data.ExecutionPayload.Timestamp,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.Timestamp),
name: "ExecPayloadResponse.ExecutionPayload.Timestamp",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.ExtraData,
actual: hexutil.Encode(epr.Data.ExecutionPayload.ExtraData),
name: "ExecPayloadResponse.ExecutionPayload.ExtraData",
},
{
expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
actual: epr.Data.ExecutionPayload.BaseFeePerGas,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.BaseFeePerGas),
name: "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
},
{
expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
actual: epr.Data.ExecutionPayload.BlockHash,
actual: hexutil.Encode(epr.Data.ExecutionPayload.BlockHash),
name: "ExecPayloadResponse.ExecutionPayload.BlockHash",
},
{
expected: "2",
actual: epr.Data.ExecutionPayload.BlobGasUsed,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.BlobGasUsed),
name: "ExecPayloadResponse.ExecutionPayload.BlobGasUsed",
},
{
expected: "3",
actual: epr.Data.ExecutionPayload.ExcessBlobGas,
actual: fmt.Sprintf("%d", epr.Data.ExecutionPayload.ExcessBlobGas),
name: "ExecPayloadResponse.ExecutionPayload.ExcessBlobGas",
},
}
@@ -1045,16 +1127,64 @@ func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
}
require.Equal(t, 1, len(epr.Data.ExecutionPayload.Transactions))
txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
require.Equal(t, txHash, epr.Data.ExecutionPayload.Transactions[0])
require.Equal(t, txHash, hexutil.Encode(epr.Data.ExecutionPayload.Transactions[0]))
require.Equal(t, 1, len(epr.Data.ExecutionPayload.Withdrawals))
w := epr.Data.ExecutionPayload.Withdrawals[0]
assert.Equal(t, "1", w.WithdrawalIndex)
assert.Equal(t, "1", w.ValidatorIndex)
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.ExecutionAddress)
assert.Equal(t, "1", w.Amount)
assert.Equal(t, "2", epr.Data.ExecutionPayload.BlobGasUsed)
assert.Equal(t, "3", epr.Data.ExecutionPayload.ExcessBlobGas)
assert.Equal(t, uint64(1), w.Index.Uint64())
assert.Equal(t, uint64(1), w.ValidatorIndex.Uint64())
assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.Address.String())
assert.Equal(t, uint64(1), w.Amount.Uint64())
assert.Equal(t, uint64(2), uint64(epr.Data.ExecutionPayload.BlobGasUsed))
assert.Equal(t, uint64(3), uint64(epr.Data.ExecutionPayload.ExcessBlobGas))
}
func TestExecutionPayloadResponseToProto(t *testing.T) {
hr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), hr))
p, err := hr.ToProto()
require.NoError(t, err)
parentHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
feeRecipient, err := hexutil.Decode("0xabcf8e0d4e9587369b2301d0790347320302cc09")
require.NoError(t, err)
stateRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
receiptsRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
logsBloom, err := hexutil.Decode("0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
require.NoError(t, err)
prevRandao, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
extraData, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
blockHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
require.NoError(t, err)
tx, err := hexutil.Decode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")
require.NoError(t, err)
txList := [][]byte{tx}
bfpg, err := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
expected := &v1.ExecutionPayload{
ParentHash: parentHash,
FeeRecipient: feeRecipient,
StateRoot: stateRoot,
ReceiptsRoot: receiptsRoot,
LogsBloom: logsBloom,
PrevRandao: prevRandao,
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: extraData,
BaseFeePerGas: bfpg.SSZBytes(),
BlockHash: blockHash,
Transactions: txList,
}
require.DeepEqual(t, expected, p)
}
func TestExecutionPayloadResponseCapellaToProto(t *testing.T) {
@@ -1222,6 +1352,16 @@ func pbEth1Data() *eth.Eth1Data {
}
}
func TestEth1DataMarshal(t *testing.T) {
ed := &Eth1Data{
Eth1Data: pbEth1Data(),
}
b, err := json.Marshal(ed)
require.NoError(t, err)
expected := `{"deposit_root":"0x0000000000000000000000000000000000000000000000000000000000000000","deposit_count":"23","block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000"}`
require.Equal(t, expected, string(b))
}
func pbSyncAggregate() *eth.SyncAggregate {
return &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 48),
@@ -1229,6 +1369,14 @@ func pbSyncAggregate() *eth.SyncAggregate {
}
}
func TestSyncAggregate_MarshalJSON(t *testing.T) {
sa := &SyncAggregate{pbSyncAggregate()}
b, err := json.Marshal(sa)
require.NoError(t, err)
expected := `{"sync_committee_bits":"0x01","sync_committee_signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}`
require.Equal(t, expected, string(b))
}
func pbDeposit(t *testing.T) *eth.Deposit {
return &eth.Deposit{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
@@ -1241,6 +1389,16 @@ func pbDeposit(t *testing.T) *eth.Deposit {
}
}
func TestDeposit_MarshalJSON(t *testing.T) {
d := &Deposit{
Deposit: pbDeposit(t),
}
b, err := json.Marshal(d)
require.NoError(t, err)
expected := `{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
require.Equal(t, expected, string(b))
}
func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
return &eth.SignedVoluntaryExit{
Exit: &eth.VoluntaryExit{
@@ -1251,6 +1409,16 @@ func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
}
}
func TestVoluntaryExit(t *testing.T) {
ve := &SignedVoluntaryExit{
SignedVoluntaryExit: pbSignedVoluntaryExit(t),
}
b, err := json.Marshal(ve)
require.NoError(t, err)
expected := `{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
require.Equal(t, expected, string(b))
}
func pbAttestation(t *testing.T) *eth.Attestation {
return &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
@@ -1271,6 +1439,16 @@ func pbAttestation(t *testing.T) *eth.Attestation {
}
}
func TestAttestationMarshal(t *testing.T) {
a := &Attestation{
Attestation: pbAttestation(t),
}
b, err := json.Marshal(a)
require.NoError(t, err)
expected := `{"aggregation_bits":"0x01","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
require.Equal(t, expected, string(b))
}
func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
return &eth.AttesterSlashing{
Attestation_1: &eth.IndexedAttestation{
@@ -1311,7 +1489,9 @@ func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
}
func TestAttesterSlashing_MarshalJSON(t *testing.T) {
as := structs.AttesterSlashingFromConsensus(pbAttesterSlashing(t))
as := &AttesterSlashing{
AttesterSlashing: pbAttesterSlashing(t),
}
b, err := json.Marshal(as)
require.NoError(t, err)
expected := `{"attestation_1":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"attestation_2":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
@@ -1419,8 +1599,9 @@ func pbExecutionPayloadHeaderDeneb(t *testing.T) *v1.ExecutionPayloadHeaderDeneb
}
func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
h, err := structs.ExecutionPayloadHeaderFromConsensus(pbExecutionPayloadHeader(t))
require.NoError(t, err)
h := &ExecutionPayloadHeader{
ExecutionPayloadHeader: pbExecutionPayloadHeader(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
@@ -1428,9 +1609,9 @@ func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
}
func TestExecutionPayloadHeaderCapella_MarshalJSON(t *testing.T) {
h, err := structs.ExecutionPayloadHeaderCapellaFromConsensus(pbExecutionPayloadHeaderCapella(t))
require.NoError(t, err)
require.NoError(t, err)
h := &ExecutionPayloadHeaderCapella{
ExecutionPayloadHeaderCapella: pbExecutionPayloadHeaderCapella(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
@@ -1438,8 +1619,9 @@ func TestExecutionPayloadHeaderCapella_MarshalJSON(t *testing.T) {
}
func TestExecutionPayloadHeaderDeneb_MarshalJSON(t *testing.T) {
h, err := structs.ExecutionPayloadHeaderDenebFromConsensus(pbExecutionPayloadHeaderDeneb(t))
require.NoError(t, err)
h := &ExecutionPayloadHeaderDeneb{
ExecutionPayloadHeaderDeneb: pbExecutionPayloadHeaderDeneb(t),
}
b, err := json.Marshal(h)
require.NoError(t, err)
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","blob_gas_used":"1","excess_blob_gas":"2"}`
@@ -1668,13 +1850,12 @@ func TestRoundTripUint256(t *testing.T) {
func TestRoundTripProtoUint256(t *testing.T) {
h := pbExecutionPayloadHeader(t)
h.BaseFeePerGas = []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
hm, err := structs.ExecutionPayloadHeaderFromConsensus(h)
require.NoError(t, err)
hm := &ExecutionPayloadHeader{ExecutionPayloadHeader: h}
m, err := json.Marshal(hm)
require.NoError(t, err)
hu := &structs.ExecutionPayloadHeader{}
hu := &ExecutionPayloadHeader{}
require.NoError(t, json.Unmarshal(m, hu))
hp, err := hu.ToConsensus()
hp, err := hu.ToProto()
require.NoError(t, err)
require.DeepEqual(t, h.BaseFeePerGas, hp.BaseFeePerGas)
}
@@ -1682,7 +1863,7 @@ func TestRoundTripProtoUint256(t *testing.T) {
func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
expected, err := os.ReadFile("testdata/execution-payload.json")
require.NoError(t, err)
hu := &structs.ExecutionPayloadHeader{}
hu := &ExecutionPayloadHeader{}
require.NoError(t, json.Unmarshal(expected, hu))
m, err := json.Marshal(hu)
require.NoError(t, err)
@@ -1692,7 +1873,7 @@ func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
func TestExecutionPayloadHeaderCapellaRoundtrip(t *testing.T) {
expected, err := os.ReadFile("testdata/execution-payload-capella.json")
require.NoError(t, err)
hu := &structs.ExecutionPayloadHeaderCapella{}
hu := &ExecutionPayloadHeaderCapella{}
require.NoError(t, json.Unmarshal(expected, hu))
m, err := json.Marshal(hu)
require.NoError(t, err)
@@ -1813,9 +1994,11 @@ func TestEmptyResponseBody(t *testing.T) {
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal(encoded, epr))
pp, err := epr.ParsePayload()
require.NoError(t, err)
pb, err := pp.PayloadProto()
if err == nil {
require.NoError(t, err)
require.Equal(t, false, pp == nil)
require.Equal(t, false, pb == nil)
} else {
require.ErrorIs(t, err, consensusblocks.ErrNilObject)
}

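The else branch above pins down the nil-object guard that the PayloadProto methods (removed further down in this diff) implement. A minimal, self-contained sketch of that guard pattern, using stand-in names since the real types live in the structs and consensus-types packages:

package main

import (
	"errors"
	"fmt"
)

// errNilObject stands in for consensusblocks.ErrNilObject.
var errNilObject = errors.New("received nil object")

type payload struct{ parentHash string }

// payloadProtoSketch mirrors the guard used by the real PayloadProto methods:
// a nil receiver yields a typed error the caller can match with errors.Is,
// instead of panicking on a nil dereference.
func (p *payload) payloadProtoSketch() (*payload, error) {
	if p == nil {
		return nil, fmt.Errorf("nil execution payload: %w", errNilObject)
	}
	return p, nil
}

func main() {
	var p *payload
	_, err := p.payloadProtoSketch()
	fmt.Println(errors.Is(err, errNilObject)) // true
}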
View File

@@ -14,10 +14,22 @@ import (
)
const (
EventHead = "head"
EventError = "error"
EventConnectionError = "connection_error"
EventHead = "head"
EventBlock = "block"
EventAttestation = "attestation"
EventVoluntaryExit = "voluntary_exit"
EventBlsToExecutionChange = "bls_to_execution_change"
EventProposerSlashing = "proposer_slashing"
EventAttesterSlashing = "attester_slashing"
EventFinalizedCheckpoint = "finalized_checkpoint"
EventChainReorg = "chain_reorg"
EventContributionAndProof = "contribution_and_proof"
EventLightClientFinalityUpdate = "light_client_finality_update"
EventLightClientOptimisticUpdate = "light_client_optimistic_update"
EventPayloadAttributes = "payload_attributes"
EventBlobSidecar = "blob_sidecar"
EventError = "error"
EventConnectionError = "connection_error"
)
var (

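These topic constants are exactly what callers pass to the event-stream client tested below. A hedged sketch of the subscription flow, assuming the NewEventStream/Subscribe API as it appears in those tests (the import path is an assumption):

package main

import (
	"context"
	"fmt"
	"net/http"

	// Assumed import path for this package.
	"github.com/OffchainLabs/prysm/v6/api/client/event"
)

func main() {
	// Ask a local beacon node for head and finalized-checkpoint events.
	topics := []string{event.EventHead, event.EventFinalizedCheckpoint}
	stream, err := event.NewEventStream(context.Background(), http.DefaultClient, "http://localhost:3500", topics)
	if err != nil {
		panic(err)
	}
	events := make(chan *event.Event, 1)
	go stream.Subscribe(events)
	for ev := range events {
		fmt.Printf("%+v\n", ev)
	}
}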
View File

@@ -1,6 +1,7 @@
package event
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
@@ -29,7 +30,7 @@ func TestNewEventStream(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := NewEventStream(t.Context(), &http.Client{}, tt.host, tt.topics)
_, err := NewEventStream(context.Background(), &http.Client{}, tt.host, tt.topics)
if (err != nil) != tt.wantErr {
t.Errorf("NewEventStream() error = %v, wantErr %v", err, tt.wantErr)
}
@@ -55,7 +56,7 @@ func TestEventStream(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
stream, err := NewEventStream(t.Context(), http.DefaultClient, server.URL, topics)
stream, err := NewEventStream(context.Background(), http.DefaultClient, server.URL, topics)
require.NoError(t, err)
go stream.Subscribe(eventsChannel)
@@ -82,7 +83,8 @@ func TestEventStream(t *testing.T) {
func TestEventStreamRequestError(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
ctx := t.Context()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)

View File

@@ -1,6 +1,7 @@
package grpc
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -15,7 +16,7 @@ type customErrorData struct {
func TestAppendHeaders(t *testing.T) {
t.Run("one_header", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value1"})
ctx := AppendHeaders(context.Background(), []string{"first=value1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -23,7 +24,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("multiple_headers", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value1", "second=value2"})
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second=value2"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 2, md.Len(), "MetadataV0 contains wrong number of values")
@@ -32,7 +33,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("one_empty_header", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value1", ""})
ctx := AppendHeaders(context.Background(), []string{"first=value1", ""})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -41,7 +42,7 @@ func TestAppendHeaders(t *testing.T) {
t.Run("incorrect_header", func(t *testing.T) {
logHook := logTest.NewGlobal()
ctx := AppendHeaders(t.Context(), []string{"first=value1", "second"})
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -50,7 +51,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("header_value_with_equal_sign", func(t *testing.T) {
ctx := AppendHeaders(t.Context(), []string{"first=value=1"})
ctx := AppendHeaders(context.Background(), []string{"first=value=1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")

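Taken together, these cases document the parsing rules for forwarded gRPC headers. A small sketch of the behavior they pin down, with the package import path assumed; note that only the first '=' separates key from value, so values may themselves contain '=':

package main

import (
	"context"
	"fmt"

	prysmgrpc "github.com/OffchainLabs/prysm/v6/api/grpc" // assumed import path
	"google.golang.org/grpc/metadata"
)

func main() {
	// "trace=a=b" keeps "a=b" as its value; an entry with no '=' is dropped.
	ctx := prysmgrpc.AppendHeaders(context.Background(), []string{"authorization=Bearer abc", "trace=a=b", "malformed"})
	md, ok := metadata.FromOutgoingContext(ctx)
	fmt.Println(ok, md.Get("authorization"), md.Get("trace")) // true [Bearer abc] [a=b]
}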
View File

@@ -1,6 +1,7 @@
package httprest
import (
"context"
"flag"
"fmt"
"net"
@@ -33,7 +34,7 @@ func TestServer_StartStop(t *testing.T) {
WithRouter(handler),
}
g, err := New(t.Context(), opts...)
g, err := New(context.Background(), opts...)
require.NoError(t, err)
g.Start()
@@ -61,7 +62,7 @@ func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
WithRouter(handler),
}
g, err := New(t.Context(), opts...)
g, err := New(context.Background(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()

View File

@@ -8,11 +8,7 @@ go_library(
],
importpath = "github.com/OffchainLabs/prysm/v6/api/server/middleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
deps = ["@com_github_rs_cors//:go_default_library"],
)
go_test(
@@ -26,6 +22,5 @@ go_test(
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,14 +1,11 @@
package middleware
import (
"compress/gzip"
"fmt"
"net/http"
"strings"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/rs/cors"
log "github.com/sirupsen/logrus"
)
type Middleware func(http.Handler) http.Handler
@@ -115,46 +112,6 @@ func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
}
}
// AcceptEncodingHeaderHandler compresses the response before sending it back to the client, if gzip is supported.
func AcceptEncodingHeaderHandler() Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
next.ServeHTTP(w, r)
return
}
gz := gzip.NewWriter(w)
gzipRW := &gzipResponseWriter{gz: gz, ResponseWriter: w}
defer func() {
if !gzipRW.zipped {
return
}
if err := gz.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip writer")
}
}()
next.ServeHTTP(gzipRW, r)
})
}
}
type gzipResponseWriter struct {
gz *gzip.Writer
http.ResponseWriter
zipped bool
}
func (g *gzipResponseWriter) Write(b []byte) (int, error) {
if strings.Contains(g.Header().Get("Content-Type"), api.JsonMediaType) {
g.zipped = true
g.Header().Set("Content-Encoding", "gzip")
return g.gz.Write(b)
}
return g.ResponseWriter.Write(b)
}
func MiddlewareChain(h http.Handler, mw []Middleware) http.Handler {
if len(mw) < 1 {
return h

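This diff drops the hand-rolled HTTP gzip middleware above; per the commit message, gzip is handled in the gRPC server instead. In grpc-go the usual way to get that is a blank import of the gzip encoding package, which registers the compressor globally. A minimal sketch of that approach, not Prysm's exact wiring:

package main

import (
	"net"

	"google.golang.org/grpc"
	// The blank import registers the gzip compressor with grpc-go, so the
	// server can decompress gzip-encoded requests and compress responses for
	// clients that negotiate "gzip" via the grpc-encoding header.
	_ "google.golang.org/grpc/encoding/gzip"
)

func main() {
	lis, err := net.Listen("tcp", ":4000")
	if err != nil {
		panic(err)
	}
	s := grpc.NewServer()
	// Register services here, then serve.
	if err := s.Serve(lis); err != nil {
		panic(err)
	}
}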
View File

@@ -1,16 +1,12 @@
package middleware
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/testing/require"
log "github.com/sirupsen/logrus"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
@@ -128,89 +124,6 @@ func TestContentTypeHandler(t *testing.T) {
}
}
func TestAcceptEncodingHeaderHandler(t *testing.T) {
dummyContent := "Test gzip middleware content"
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", r.Header.Get("Accept"))
_, err := w.Write([]byte(dummyContent))
require.NoError(t, err)
})
handler := AcceptEncodingHeaderHandler()(nextHandler)
tests := []struct {
name string
accept string
acceptEncoding string
expectCompressed bool
}{
{
name: "Gzip supported",
accept: api.JsonMediaType,
acceptEncoding: "gzip",
expectCompressed: true,
},
{
name: "Multiple encodings supported",
accept: api.JsonMediaType,
acceptEncoding: "deflate, gzip",
expectCompressed: true,
},
{
name: "Gzip not supported",
accept: api.JsonMediaType,
acceptEncoding: "deflate",
expectCompressed: false,
},
{
name: "No accept encoding header",
accept: api.JsonMediaType,
acceptEncoding: "",
expectCompressed: false,
},
{
name: "SSZ",
accept: api.OctetStreamMediaType,
acceptEncoding: "gzip",
expectCompressed: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
req.Header.Set("Accept", tt.accept)
if tt.acceptEncoding != "" {
req.Header.Set("Accept-Encoding", tt.acceptEncoding)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if tt.expectCompressed {
require.Equal(t, "gzip", rr.Header().Get("Content-Encoding"), "Expected Content-Encoding header to be 'gzip'")
compressedBody := rr.Body.Bytes()
require.NotEqual(t, dummyContent, string(compressedBody), "Response body should be compressed and differ from the original")
gzReader, err := gzip.NewReader(bytes.NewReader(compressedBody))
require.NoError(t, err, "Failed to create gzipReader")
defer func() {
if err := gzReader.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip reader")
}
}()
decompressedBody, err := io.ReadAll(gzReader)
require.NoError(t, err, "Failed to decompress response body")
require.Equal(t, dummyContent, string(decompressedBody), "Decompressed content should match the original")
} else {
require.Equal(t, dummyContent, rr.Body.String(), "Response body should be uncompressed and match the original")
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}

View File

@@ -31,7 +31,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
@@ -45,7 +44,6 @@ go_library(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -7,15 +7,12 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
// ----------------------------------------------------------------------------
@@ -135,13 +132,6 @@ func (e *ExecutionPayload) ToConsensus() (*enginev1.ExecutionPayload, error) {
}, nil
}
func (r *ExecutionPayload) PayloadProto() (proto.Message, error) {
if r == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil execution payload")
}
return r.ToConsensus()
}
func ExecutionPayloadHeaderFromConsensus(payload *enginev1.ExecutionPayloadHeader) (*ExecutionPayloadHeader, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {
@@ -393,13 +383,6 @@ func (e *ExecutionPayloadCapella) ToConsensus() (*enginev1.ExecutionPayloadCapel
}, nil
}
func (p *ExecutionPayloadCapella) PayloadProto() (proto.Message, error) {
if p == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil capella execution payload")
}
return p.ToConsensus()
}
func ExecutionPayloadHeaderCapellaFromConsensus(payload *enginev1.ExecutionPayloadHeaderCapella) (*ExecutionPayloadHeaderCapella, error) {
baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
if err != nil {

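Every conversion in this file funnels base fees through sszBytesToUint256String, whose body is not shown in this diff. A self-contained sketch of what such a helper has to do: SSZ stores uint256 little-endian, while the JSON API wants a decimal string, as the base_fee_per_gas values in the tests earlier in this diff show.

package main

import (
	"fmt"
	"math/big"
)

// uint256SketchFromSSZ is an illustrative stand-in for sszBytesToUint256String:
// it interprets 32 little-endian bytes as an unsigned integer and renders it
// as a decimal string.
func uint256SketchFromSSZ(b []byte) (string, error) {
	if len(b) != 32 {
		return "", fmt.Errorf("expected 32 bytes, got %d", len(b))
	}
	be := make([]byte, 32)
	for i, v := range b {
		be[31-i] = v // reverse little-endian to big-endian for math/big
	}
	return new(big.Int).SetBytes(be).String(), nil
}

func main() {
	fee := make([]byte, 32)
	fee[31] = 1 // the most significant byte in little-endian order
	s, _ := uint256SketchFromSSZ(fee)
	fmt.Println(s) // 2^248 printed in decimal
}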
View File

@@ -923,14 +923,7 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
if err != nil {
return nil, err
}
srcLookahead, err := st.ProposerLookahead()
if err != nil {
return nil, err
}
lookahead := make([]string, len(srcLookahead))
for i, v := range srcLookahead {
lookahead[i] = fmt.Sprintf("%d", uint64(v))
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
@@ -969,6 +962,5 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
PendingDeposits: PendingDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
ProposerLookahead: lookahead,
}, nil
}

View File

@@ -25,13 +25,6 @@ type BlockGossipEvent struct {
Block string `json:"block"`
}
type DataColumnGossipEvent struct {
Slot string `json:"slot"`
Index string `json:"index"`
BlockRoot string `json:"block_root"`
KzgCommitments []string `json:"kzg_commitments"`
}
type AggregatedAttEventSource struct {
Aggregate *Attestation `json:"aggregate"`
}

View File

@@ -33,14 +33,8 @@ type GetPeerResponse struct {
Data *Peer `json:"data"`
}
// Added Meta to align with beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
type Meta struct {
Count int `json:"count"`
}
type GetPeersResponse struct {
Data []*Peer `json:"data"`
Meta Meta `json:"meta"`
}
type Peer struct {

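The new meta.count envelope mirrors the beacon-api getPeers response. A sketch of the resulting wire shape, using trimmed stand-in types (the real Peer carries more fields, e.g. state and direction):

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-ins for the structs in this file, reduced to what the sketch needs.
type Meta struct {
	Count int `json:"count"`
}

type Peer struct {
	PeerID string `json:"peer_id"`
}

type GetPeersResponse struct {
	Data []*Peer `json:"data"`
	Meta Meta    `json:"meta"`
}

func main() {
	resp := GetPeersResponse{
		Data: []*Peer{{PeerID: "16Uiu2HAm..."}},
		Meta: Meta{Count: 1}, // count mirrors len(data), per the beacon-api
	}
	b, _ := json.Marshal(resp)
	fmt.Println(string(b)) // {"data":[{"peer_id":"16Uiu2HAm..."}],"meta":{"count":1}}
}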
View File

@@ -219,5 +219,4 @@ type BeaconStateFulu struct {
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
ProposerLookahead []string `json:"proposer_lookahead"`
}

View File

@@ -15,7 +15,7 @@ import (
func TestDebounce_NoEvents(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
wg := &sync.WaitGroup{}
@@ -39,7 +39,7 @@ func TestDebounce_NoEvents(t *testing.T) {
func TestDebounce_CtxClosing(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
wg := &sync.WaitGroup{}
@@ -75,7 +75,7 @@ func TestDebounce_CtxClosing(t *testing.T) {
func TestDebounce_SingleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
@@ -93,7 +93,7 @@ func TestDebounce_SingleHandlerInvocation(t *testing.T) {
func TestDebounce_MultipleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {

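The four tests above pin down async.Debounce's contract: events arriving within one interval collapse into a single handler invocation, and a cancelled context stops the loop. A hedged usage sketch, with the import path assumed from the test imports:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/OffchainLabs/prysm/v6/async" // assumed import path
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	events := make(chan interface{}, 100)
	go async.Debounce(ctx, time.Second, events, func(event interface{}) {
		fmt.Println("handled:", event)
	})

	// Both sends land inside one interval, so the handler should fire once.
	events <- "first"
	events <- "second"
	time.Sleep(2 * time.Second)
}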
View File

@@ -10,7 +10,7 @@ import (
)
func TestEveryRuns(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
i := int32(0)
async.RunEvery(ctx, 100*time.Millisecond, func() {

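For completeness, async.RunEvery is the simpler periodic counterpart exercised here: it spawns a ticker-driven goroutine that keeps invoking the callback until the context is cancelled. A hedged sketch along the same lines:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/OffchainLabs/prysm/v6/async" // assumed import path
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	// The callback keeps firing roughly every 100ms until ctx is cancelled.
	async.RunEvery(ctx, 100*time.Millisecond, func() {
		fmt.Println("tick")
	})
	time.Sleep(350 * time.Millisecond)
	cancel()
	time.Sleep(150 * time.Millisecond) // give the goroutine time to exit
}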
View File

@@ -4,6 +4,6 @@ This is the main project folder for the beacon chain implementation of Ethereum
You can also read our main [README](https://github.com/prysmaticlabs/prysm/blob/master/README.md) and join our active chat room on Discord.
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysm)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
Also, read the official beacon chain [specification](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md); this design spec serves as the source of truth for the beacon chain implementation we follow at Prysmatic Labs.

View File

@@ -25,7 +25,6 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_data_column.go",
"service.go",
"setup_forchoice.go",
"tracked_proposer.go",
@@ -51,7 +50,6 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -148,7 +146,6 @@ go_test(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -24,7 +25,7 @@ func TestHeadSlot_DataRace(t *testing.T) {
wait := make(chan struct{})
go func() {
defer close(wait)
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}()
s.HeadSlot()
<-wait
@@ -42,10 +43,10 @@ func TestHeadRoot_DataRace(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
go func() {
defer close(wait)
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}()
_, err = s.HeadRoot(t.Context())
_, err = s.HeadRoot(context.Background())
require.NoError(t, err)
<-wait
}
@@ -64,10 +65,10 @@ func TestHeadBlock_DataRace(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
go func() {
defer close(wait)
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}()
_, err = s.HeadBlock(t.Context())
_, err = s.HeadBlock(context.Background())
require.NoError(t, err)
<-wait
}
@@ -82,14 +83,14 @@ func TestHeadState_DataRace(t *testing.T) {
wait := make(chan struct{})
st, _ := util.DeterministicGenesisState(t, 1)
root := bytesutil.ToBytes32(bytesutil.PadTo([]byte{'s'}, 32))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(t.Context(), root))
require.NoError(t, beaconDB.SaveState(t.Context(), st, root))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(context.Background(), root))
require.NoError(t, beaconDB.SaveState(context.Background(), st, root))
go func() {
defer close(wait)
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}()
_, err = s.HeadState(t.Context())
_, err = s.HeadState(context.Background())
require.NoError(t, err)
<-wait
}

View File

@@ -84,7 +84,7 @@ func prepareForkchoiceState(
func TestHeadRoot_Nil(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
headRoot, err := c.HeadRoot(t.Context())
headRoot, err := c.HeadRoot(context.Background())
require.NoError(t, err)
assert.DeepEqual(t, params.BeaconConfig().ZeroHash[:], headRoot, "Incorrect pre chain start value")
}
@@ -137,7 +137,7 @@ func TestFinalizedBlockHash(t *testing.T) {
}
func TestUnrealizedJustifiedBlockHash(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
@@ -203,7 +203,7 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
c := &Service{}
c.head = &head{block: wsb, state: s}
received, err := c.HeadBlock(t.Context())
received, err := c.HeadBlock(context.Background())
require.NoError(t, err)
pb, err := received.Proto()
require.NoError(t, err)
@@ -215,7 +215,7 @@ func TestHeadState_CanRetrieve(t *testing.T) {
require.NoError(t, err)
c := &Service{}
c.head = &head{state: s}
headState, err := c.HeadState(t.Context())
headState, err := c.HeadState(context.Background())
require.NoError(t, err)
assert.DeepEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Incorrect head state received")
}
@@ -277,7 +277,7 @@ func TestHeadETH1Data_CanRetrieve(t *testing.T) {
}
func TestIsCanonical_Ok(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
@@ -301,12 +301,12 @@ func TestService_HeadValidatorsIndices(t *testing.T) {
c := &Service{}
c.head = &head{}
indices, err := c.HeadValidatorsIndices(t.Context(), 0)
indices, err := c.HeadValidatorsIndices(context.Background(), 0)
require.NoError(t, err)
require.Equal(t, 0, len(indices))
c.head = &head{state: s}
indices, err = c.HeadValidatorsIndices(t.Context(), 0)
indices, err = c.HeadValidatorsIndices(context.Background(), 0)
require.NoError(t, err)
require.Equal(t, 10, len(indices))
}
@@ -331,7 +331,7 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
// ---------- D
func TestService_ChainHeads(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -399,7 +399,7 @@ func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
c := &Service{}
c.head = &head{state: s}
p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
require.NoError(t, err)
v, err := s.ValidatorAtIndex(0)
@@ -412,12 +412,12 @@ func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
c := &Service{}
c.head = nil
p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
require.NoError(t, err)
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
c.head = &head{state: nil}
p, err = c.HeadValidatorIndexToPublicKey(t.Context(), 0)
p, err = c.HeadValidatorIndexToPublicKey(context.Background(), 0)
require.NoError(t, err)
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
}
@@ -428,7 +428,7 @@ func TestService_IsOptimistic(t *testing.T) {
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
ctx := context.Background()
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
@@ -456,7 +456,7 @@ func TestService_IsOptimistic(t *testing.T) {
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
c := &Service{genesisTime: time.Now()}
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
@@ -464,7 +464,7 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
}
func TestService_IsOptimisticForRoot(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -482,27 +482,27 @@ func TestService_IsOptimisticForRoot(t *testing.T) {
func TestService_IsOptimisticForRoot_DB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, optimisticBlock)
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, validatedBlock)
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
@@ -524,10 +524,10 @@ func TestService_IsOptimisticForRoot_DB(t *testing.T) {
// Before the first finalized epoch, finalized root could be zeros.
validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
@@ -535,37 +535,37 @@ func TestService_IsOptimisticForRoot_DB(t *testing.T) {
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, optimisticBlock)
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, validatedBlock)
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, true, validated)
@@ -574,14 +574,14 @@ func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, t.Context(), beaconDB, b)
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, [32]byte{}))
_, err = c.IsOptimisticForRoot(ctx, br)
assert.NoError(t, err)
@@ -594,7 +594,7 @@ func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
func TestService_IsFinalized(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}}
r1 := [32]byte{'a'}
require.NoError(t, c.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
@@ -616,7 +616,7 @@ func TestService_IsFinalized(t *testing.T) {
func Test_hashForGenesisRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
c := setupBeaconChain(t, beaconDB)
st, _ := util.DeterministicGenesisStateElectra(t, 10)
require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))

View File

@@ -40,12 +40,10 @@ var (
errNotGenesisRoot = errors.New("root is not the genesis block root")
// errBlacklistedRoot is returned when a block root is blacklisted as invalid.
errBlacklistedRoot = verification.AsVerificationFailure(errors.New("block root is blacklisted"))
// errMaxBlobsExceeded is returned when the number of blobs in a block exceeds the maximum allowed.
errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
// errMaxDataColumnsExceeded is returned when the number of data columns exceeds the maximum allowed.
errMaxDataColumnsExceeded = verification.AsVerificationFailure(errors.New("expected data columns for node exceeds NUMBER_OF_COLUMNS"))
)
var errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
// An invalid block is a block that fails the state transition under the core protocol rules.
// The beacon node shall neither accept nor build blocks that branch off from an invalid block.
// Some examples of invalid blocks are:

View File

@@ -439,9 +439,6 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
log.WithError(err).Debug("Could not remove blob from blob storage")
}
if err := s.dataColumnStorage.Remove(root); err != nil {
log.WithError(err).Errorf("Could not remove data columns from data column storage for root %#x", root)
}
}
return nil
}

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
"time"
@@ -35,29 +36,29 @@ func TestService_isNewHead(t *testing.T) {
func TestService_getHeadStateAndBlock(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
_, _, err := service.getStateAndBlock(t.Context(), [32]byte{})
_, _, err := service.getStateAndBlock(context.Background(), [32]byte{})
require.ErrorContains(t, "block does not exist", err)
blk, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{Signature: []byte{1}}))
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(t.Context(), blk))
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), blk))
st, _ := util.DeterministicGenesisState(t, 1)
r, err := blk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), st, r))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), st, r))
gotState, err := service.cfg.BeaconDB.State(t.Context(), r)
gotState, err := service.cfg.BeaconDB.State(context.Background(), r)
require.NoError(t, err)
require.DeepEqual(t, st.ToProto(), gotState.ToProto())
gotBlk, err := service.cfg.BeaconDB.Block(t.Context(), r)
gotBlk, err := service.cfg.BeaconDB.Block(context.Background(), r)
require.NoError(t, err)
require.DeepEqual(t, blk, gotBlk)
}
func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
@@ -20,18 +21,18 @@ func TestService_HeadSyncCommitteeIndices(t *testing.T) {
// Current period
slot := 2*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
a, err := c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
a, err := c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
require.NoError(t, err)
// Current period, with the slot 2 short of crossing EPOCHS_PER_SYNC_COMMITTEE_PERIOD
slot = 3*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) - 2
b, err := c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
b, err := c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
require.NoError(t, err)
require.DeepEqual(t, a, b)
// Next period, with the slot 1 short of crossing EPOCHS_PER_SYNC_COMMITTEE_PERIOD
slot = 3*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) - 1
b, err = c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
b, err = c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
require.NoError(t, err)
require.DeepNotEqual(t, a, b)
}
@@ -43,7 +44,7 @@ func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
slot := uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
indices, err := c.headCurrentSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
indices, err := c.headCurrentSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
require.NoError(t, err)
// NextSyncCommittee becomes CurrentSyncCommittee so it should be empty by default.
@@ -57,7 +58,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
slot := uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
indices, err := c.headNextSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
indices, err := c.headNextSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
require.NoError(t, err)
// NextSyncCommittee should be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
@@ -71,7 +72,7 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
// Process slot up to 2 * `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates` twice.
slot := uint64(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
pubkeys, err := c.HeadSyncCommitteePubKeys(t.Context(), primitives.Slot(slot), 0)
pubkeys, err := c.HeadSyncCommitteePubKeys(context.Background(), primitives.Slot(slot), 0)
require.NoError(t, err)
// Any subcommittee should match the subcommittee size.
@@ -87,7 +88,7 @@ func TestService_HeadSyncCommitteeDomain(t *testing.T) {
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())
require.NoError(t, err)
d, err := c.HeadSyncCommitteeDomain(t.Context(), 0)
d, err := c.HeadSyncCommitteeDomain(context.Background(), 0)
require.NoError(t, err)
require.DeepEqual(t, wanted, d)
@@ -101,7 +102,7 @@ func TestService_HeadSyncContributionProofDomain(t *testing.T) {
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainContributionAndProof, s.GenesisValidatorsRoot())
require.NoError(t, err)
d, err := c.HeadSyncContributionProofDomain(t.Context(), 0)
d, err := c.HeadSyncContributionProofDomain(context.Background(), 0)
require.NoError(t, err)
require.DeepEqual(t, wanted, d)
@@ -115,7 +116,7 @@ func TestService_HeadSyncSelectionProofDomain(t *testing.T) {
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommitteeSelectionProof, s.GenesisValidatorsRoot())
require.NoError(t, err)
d, err := c.HeadSyncSelectionProofDomain(t.Context(), 0)
d, err := c.HeadSyncSelectionProofDomain(context.Background(), 0)
require.NoError(t, err)
require.DeepEqual(t, wanted, d)

View File

@@ -34,17 +34,17 @@ func TestSaveHead_Same(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.saveHead(t.Context(), r, b, st))
require.NoError(t, service.saveHead(context.Background(), r, b, st))
assert.Equal(t, primitives.Slot(0), service.headSlot(), "Head did not stay the same")
assert.Equal(t, r, service.headRoot(), "Head did not stay the same")
}
func TestSaveHead_Different(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -61,7 +61,7 @@ func TestSaveHead_Different(t *testing.T) {
newHeadSignedBlock.Block.Slot = 1
newHeadBlock := newHeadSignedBlock.Block
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
@@ -74,13 +74,13 @@ func TestSaveHead_Different(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
assert.Equal(t, primitives.Slot(1), service.HeadSlot(), "Head did not change")
cachedRoot, err := service.HeadRoot(t.Context())
cachedRoot, err := service.HeadRoot(context.Background())
require.NoError(t, err)
assert.DeepEqual(t, cachedRoot, newRoot[:], "Head did not change")
headBlock, err := service.headBlock()
@@ -92,12 +92,12 @@ func TestSaveHead_Different(t *testing.T) {
}
func TestSaveHead_Different_Reorg(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -120,7 +120,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
newHeadSignedBlock.Block.ParentRoot = reorgChainParent[:]
newHeadBlock := newHeadSignedBlock.Block
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
@@ -129,13 +129,13 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
assert.Equal(t, primitives.Slot(1), service.HeadSlot(), "Head did not change")
cachedRoot, err := service.HeadRoot(t.Context())
cachedRoot, err := service.HeadRoot(context.Background())
require.NoError(t, err)
if !bytes.Equal(cachedRoot, newRoot[:]) {
t.Error("Head did not change")
@@ -162,12 +162,12 @@ func Test_notifyNewHeadEvent(t *testing.T) {
},
originBlockRoot: [32]byte{1},
}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
st, blk, err := prepareForkchoiceState(context.Background(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(context.Background(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
require.NoError(t, srv.notifyNewHeadEvent(context.Background(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -194,9 +194,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
},
originBlockRoot: genesisRoot,
}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
st, blk, err := prepareForkchoiceState(context.Background(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(context.Background(), st, blk))
epoch1Start, err := slots.EpochStart(1)
require.NoError(t, err)
epoch2Start, err := slots.EpochStart(1)
@@ -205,7 +205,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
err = srv.notifyNewHeadEvent(context.Background(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -225,11 +225,11 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
func TestRetrieveHead_ReadOnly(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
service.head = &head{
@@ -243,7 +243,7 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
@@ -256,9 +256,9 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
rOnlyState, err := service.HeadStateReadOnly(ctx)
require.NoError(t, err)
@@ -267,7 +267,7 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
}
func TestSaveOrphanedAtts(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -333,7 +333,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
}
func TestSaveOrphanedAttsElectra(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -404,7 +404,7 @@ func TestSaveOrphanedOps(t *testing.T) {
config.ShardCommitteePeriod = 0
params.OverrideBeaconConfig(config)
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -481,7 +481,7 @@ func TestSaveOrphanedOps(t *testing.T) {
}
func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.cfg.BLSToExecPool = blstoexec.NewPool()
@@ -539,7 +539,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
}
func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -604,7 +604,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
}
func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+2)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -10,7 +11,7 @@ import (
)
func TestService_getBlock(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := setupBeaconChain(t, beaconDB)
b1 := util.NewBeaconBlock()
@@ -41,7 +42,7 @@ func TestService_getBlock(t *testing.T) {
}
func TestService_hasBlockInInitSyncOrDB(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := setupBeaconChain(t, beaconDB)
b1 := util.NewBeaconBlock()

View File

@@ -29,8 +29,9 @@ go_test(
embed = [":go_default_library"],
deps = [
"//consensus-types/blocks:go_default_library",
"//crypto/random:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,12 +1,16 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/crypto/random"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/sirupsen/logrus"
)
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
@@ -37,7 +41,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := random.GetRandBlob(123)
blob := getRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
@@ -45,3 +49,36 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}

View File

@@ -26,6 +26,9 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().Deposits()) > 0 {
log = log.WithField("deposits", len(b.Body().Deposits()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
@@ -108,6 +111,7 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
log.WithFields(lf).Debug("Synced new block")
} else {
@@ -155,9 +159,7 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
fields["blsToExecutionChanges"] = len(changes)
}
log.WithFields(fields).Debug("Synced new payload")
return nil

View File

@@ -53,7 +53,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 deposits=1 prefix=blockchain slot=0",
},
{name: "has attester slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -93,7 +93,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 deposits=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
},
{name: "has payload",
b: func() interfaces.ReadOnlyBeaconBlock { return wrappedPayloadBlk },

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -14,7 +15,7 @@ func TestReportEpochMetrics_BadHeadState(t *testing.T) {
h, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, h.SetValidators(nil))
err = reportEpochMetrics(t.Context(), s, h)
err = reportEpochMetrics(context.Background(), s, h)
require.ErrorContains(t, "failed to initialize precompute: state has nil validator slice", err)
}
@@ -24,7 +25,7 @@ func TestReportEpochMetrics_BadAttestation(t *testing.T) {
h, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 0}))
err = reportEpochMetrics(t.Context(), s, h)
err = reportEpochMetrics(context.Background(), s, h)
require.ErrorContains(t, "attestation with inclusion delay of 0", err)
}
@@ -35,6 +36,6 @@ func TestReportEpochMetrics_SlashedValidatorOutOfBound(t *testing.T) {
v.Slashed = true
require.NoError(t, h.UpdateValidatorAtIndex(0, v))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.HydrateAttestationData(&eth.AttestationData{})}))
err = reportEpochMetrics(t.Context(), h, h)
err = reportEpochMetrics(context.Background(), h, h)
require.ErrorContains(t, "slot 0 out of bounds", err)
}
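These hunks swap t.Context() for context.Background(). t.Context(), added in Go 1.24, returns a context that is canceled shortly before the test finishes; context.Background() builds on older toolchains but is never canceled for you. A small sketch of the trade-off:
func TestWithTestContext(t *testing.T) {
	ctx := t.Context() // Go 1.24+: canceled automatically when the test ends
	_ = ctx
}
func TestWithBackground(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background()) // portable, manual cleanup
	defer cancel()
	_ = ctx
}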

View File

@@ -1,13 +1,10 @@
package blockchain
import (
"time"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -130,9 +127,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Accessor) Option {
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
return func(s *Service) error {
s.cfg.P2P = p
s.cfg.P2p = p
return nil
}
}
@@ -211,15 +208,6 @@ func WithBlobStorage(b *filesystem.BlobStorage) Option {
}
}
// WithDataColumnStorage sets the data column storage backend for the blockchain service.
func WithDataColumnStorage(b *filesystem.DataColumnStorage) Option {
return func(s *Service) error {
s.dataColumnStorage = b
return nil
}
}
// WithSyncChecker sets the sync checker for the blockchain service.
func WithSyncChecker(checker Checker) Option {
return func(s *Service) error {
s.cfg.SyncChecker = checker
@@ -227,15 +215,6 @@ func WithSyncChecker(checker Checker) Option {
}
}
// WithCustodyInfo sets the custody info for the blockchain service.
func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
return func(s *Service) error {
s.cfg.CustodyInfo = custodyInfo
return nil
}
}
// WithSlasherEnabled sets whether the slasher is enabled or not.
func WithSlasherEnabled(enabled bool) Option {
return func(s *Service) error {
s.slasherEnabled = enabled
@@ -243,27 +222,9 @@ func WithSlasherEnabled(enabled bool) Option {
}
}
// WithGenesisTime sets the genesis time for the blockchain service.
func WithGenesisTime(genesisTime time.Time) Option {
return func(s *Service) error {
s.genesisTime = genesisTime
return nil
}
}
// WithLightClientStore sets the light client store for the blockchain service.
func WithLightClientStore(lcs *lightclient.Store) Option {
return func(s *Service) error {
s.lcStore = lcs
return nil
}
}
// WithStartWaitingDataColumnSidecars sets a channel that the `areDataColumnsAvailable` function will send
// to when it starts waiting for additional data columns.
func WithStartWaitingDataColumnSidecars(c chan bool) Option {
return func(s *Service) error {
s.startWaitingDataColumnSidecars = c
return nil
}
}
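Every option in this file, whether added or removed, follows Go's functional-options pattern: the constructor walks a variadic slice of closures, each of which mutates the service and may report an error. A self-contained sketch of the shape, with illustrative names rather than the actual Prysm types:
type service struct {
	slasherEnabled bool
}
type option func(*service) error
func withSlasherEnabled(enabled bool) option {
	return func(s *service) error {
		s.slasherEnabled = enabled
		return nil
	}
}
func newService(opts ...option) (*service, error) {
	s := &service{}
	for _, o := range opts {
		if err := o(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}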

View File

@@ -5,7 +5,6 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -89,7 +88,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
if err != nil {
return err
}
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
return err
}

View File

@@ -161,7 +161,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
func TestService_GetRecentPreState(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
ctx := context.Background()
s, err := util.NewBeaconState()
require.NoError(t, err)
@@ -183,7 +183,7 @@ func TestService_GetRecentPreState(t *testing.T) {
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
ctx := context.Background()
s, err := util.NewBeaconState()
require.NoError(t, err)
@@ -353,21 +353,21 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
}
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
}
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
}
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
err := verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
@@ -375,7 +375,7 @@ func TestAttEpoch_NotMatch(t *testing.T) {
}
func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -385,7 +385,7 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
}
func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
@@ -402,7 +402,7 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
}
func TestVerifyBeaconBlock_OK(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)

View File

@@ -3,12 +3,10 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
@@ -72,6 +70,8 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
defer s.saveLightClientUpdate(cfg)
defer s.saveLightClientBootstrap(cfg)
}
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
@@ -239,9 +239,8 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
JustifiedCheckpoint: jCheckpoints[i],
@@ -579,12 +578,12 @@ func (s *Service) runLateBlockTasks() {
}
}
// missingBlobIndices uses the expected commitments from the block to determine
// missingIndices uses the expected commitments from the block to determine
// which BlobSidecar indices would need to be in the database for DA success.
// It returns a map where each key represents a missing BlobSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// BlobSidecars against the set of known missing sidecars.
func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte, slot primitives.Slot) (map[uint64]struct{}, error) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(expected) == 0 {
return nil, nil
@@ -593,235 +592,29 @@ func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength
return nil, errMaxBlobsExceeded
}
indices := bs.Summary(root)
missing := make(map[uint64]bool, len(expected))
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
missing[uint64(i)] = true
missing[uint64(i)] = struct{}{}
}
}
return missing, nil
}
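This hunk also swaps the missing-index set between map[uint64]bool and map[uint64]struct{}. The empty struct occupies zero bytes and cannot encode a present-but-false entry, so membership is expressed by key presence alone. A tiny sketch of the two idioms:
func membershipDemo() {
	missingBool := map[uint64]bool{3: true}
	missingSet := map[uint64]struct{}{3: {}}
	v := missingBool[7]    // false: absent, or stored as false? ambiguous by type alone
	_, ok := missingSet[7] // ok == false: unambiguously absent, zero bytes per value
	_, _ = v, ok
}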
// missingDataColumnIndices uses the expected data columns from the block to determine
// which DataColumnSidecar indices would need to be in the database for DA success.
// It returns a map where each key represents a missing DataColumnSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// DataColumns against the set of known missing sidecars.
func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(expected)) > numberOfColumns {
return nil, errMaxDataColumnsExceeded
}
// Get a summary of the data columns stored in the database.
summary := bs.Summary(root)
// Check all expected data columns against the summary.
missing := make(map[uint64]bool)
for column := range expected {
if !summary.HasIndex(column) {
missing[column] = true
}
}
return missing, nil
}
// isDataAvailable blocks until all sidecars committed to in the block are available,
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the sidecar notifier channel for the given root until the channel is
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
signedBlock interfaces.ReadOnlySignedBeaconBlock,
) error {
block := signedBlock.Block()
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
return nil
}
block := signed.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
blockVersion := block.Version()
if blockVersion >= version.Fulu {
return s.areDataColumnsAvailable(ctx, root, block)
}
if blockVersion >= version.Deneb {
return s.areBlobsAvailable(ctx, root, block)
}
return nil
}
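The dispatch removed here gates the DA check on fork version: Fulu and later blocks wait on data columns, Deneb and later on blobs, and anything older carries no sidecars at all. A compressed sketch of that control flow, using illustrative ordinals rather than the real runtime/version constants:
const (
	deneb = 4 // illustrative only
	fulu  = 6 // illustrative only
)
func daCheckKind(blockVersion int) string {
	switch {
	case blockVersion >= fulu:
		return "data columns" // areDataColumnsAvailable
	case blockVersion >= deneb:
		return "blobs" // areBlobsAvailable
	default:
		return "none" // pre-Deneb blocks have nothing to wait for
	}
}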
// areDataColumnsAvailable blocks until all data columns committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
func (s *Service) areDataColumnsAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "blob KZG commitments")
}
// If the block has no commitments, there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
// All columns to sample need to be available for the block to be considered available.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
nodeID := s.cfg.P2P.NodeID()
// Prevent the custody group count from changing during the rest of the function.
s.cfg.CustodyInfo.Mut.RLock()
defer s.cfg.CustodyInfo.Mut.RUnlock()
// Get the custody group sampling size for the node.
custodyGroupSamplingSize := s.cfg.CustodyInfo.CustodyGroupSamplingSize(peerdas.Actual)
peerInfo, _, err := peerdas.Info(nodeID, custodyGroupSamplingSize)
if err != nil {
return errors.Wrap(err, "peer info")
}
// Subscribe to data columns newly stored in the database.
subscription, identsChan := s.dataColumnStorage.Subscribe()
defer subscription.Unsubscribe()
// Get the count of data columns we already have in the store.
summary := s.dataColumnStorage.Summary(root)
storedDataColumnsCount := summary.Count()
minimumColumnCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
// As soon as we have enough data column sidecars, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if storedDataColumnsCount >= minimumColumnCountToReconstruct {
return nil
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
if err != nil {
return errors.Wrap(err, "missing data columns")
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missingMap) == 0 {
return nil
}
if s.startWaitingDataColumnSidecars != nil {
s.startWaitingDataColumnSidecars <- true
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
timer := time.AfterFunc(time.Until(nextSlot), func() {
missingMapCount := uint64(len(missingMap))
if missingMapCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(peerInfo.CustodyColumns))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": missing,
}).Warning("Data columns still missing at slot end")
})
defer timer.Stop()
}
for {
select {
case idents := <-identsChan:
if idents.Root != root {
// This is not the root we are looking for.
continue
}
for _, index := range idents.Indices {
// This is a data column we are expecting.
if _, ok := missingMap[index]; ok {
storedDataColumnsCount++
}
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if storedDataColumnsCount >= minimumColumnCountToReconstruct {
return nil
}
// Remove the index from the missing map.
delete(missingMap, index)
// Return if there are no more missing data columns.
if len(missingMap) == 0 {
return nil
}
}
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missingMap))
if missingIndicesCount < numberOfColumns {
missingIndices = uint64MapToSortedSlice(missingMap)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
}
}
}
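The select loop above is a standard notify-or-cancel shape: consume storage notifications, shrink the missing set, and bail out with a wrapped error when the context expires. Stripped to its essentials, with an illustrative channel type:
func waitForIndices(ctx context.Context, notif <-chan uint64, missing map[uint64]struct{}) error {
	for len(missing) > 0 {
		select {
		case idx := <-notif:
			delete(missing, idx) // deleting an absent key is a no-op
		case <-ctx.Done():
			return ctx.Err() // cancellation or timeout; the caller reports what is still missing
		}
	}
	return nil // every expected index was observed
}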
// areBlobsAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootLength]byte, block interfaces.ReadOnlyBeaconBlock) error {
blockSlot := block.Slot()
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
return nil
@@ -841,9 +634,9 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
return nil
}
// Get a map of BlobSidecar indices that are not currently available.
missing, err := missingBlobIndices(s.blobStorage, root, kzgCommitments, block.Slot())
missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
if err != nil {
return errors.Wrap(err, "missing indices")
return err
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
@@ -855,20 +648,15 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
nc := s.blobNotifiers.forRoot(root, block.Slot())
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(logrus.Fields{
"slot": blockSlot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": len(missing),
}).Error("Still waiting for blobs DA check at slot end.")
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
})
defer nst.Stop()
}
@@ -890,14 +678,13 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
}
}
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
}
slices.Sort[[]uint64](output)
return output
}
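Extracting daCheckLogFields keeps the slot-end timer and the ctx.Done() branch emitting identical field names. A hypothetical call site, mirroring the one in this hunk:
log.WithFields(daCheckLogFields(root, blockSlot, expected, len(missing))).
	Error("Still waiting for DA check at slot end.")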
// lateBlockTasks is called 4 seconds into the slot and performs tasks
@@ -983,7 +770,7 @@ func (s *Service) waitForSync() error {
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [fieldparams.RootLength]byte) error {
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
}

View File

@@ -131,12 +131,6 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
}
func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientUpdate(cfg); err != nil {
log.WithError(err).Error("Failed to process light client update")
}
if err := s.processLightClientBootstrap(cfg); err != nil {
log.WithError(err).Error("Failed to process light client bootstrap")
}
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
@@ -145,33 +139,38 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
}
}
// processLightClientUpdate saves the light client update for this block
// saveLightClientUpdate saves the light client update for this block
// if it's better than the already saved one, when the feature flag is enabled.
func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
attestedRoot := cfg.roblock.Block().ParentRoot()
attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
log.WithError(err).Errorf("Saving light client update failed: Could not get attested block for root %#x", attestedRoot)
return
}
if attestedBlock == nil || attestedBlock.IsNil() {
return errors.New("attested block is nil")
log.Error("Saving light client update failed: Attested block is nil")
return
}
attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
log.WithError(err).Errorf("Saving light client update failed: Could not get attested state for root %#x", attestedRoot)
return
}
if attestedState == nil || attestedState.IsNil() {
return errors.New("attested state is nil")
log.Error("Saving light client update failed: Attested state is nil")
return
}
finalizedRoot := attestedState.FinalizedCheckpoint().Root
finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
return nil
log.Debugf("Skipping saving light client update: Finalized block is nil for root %#x", finalizedRoot)
} else {
log.WithError(err).Errorf("Saving light client update failed: Could not get finalized block for root %#x", finalizedRoot)
}
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
return
}
update, err := lightclient.NewLightClientUpdateFromBeaconState(
@@ -184,52 +183,57 @@ func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
finalizedBlock,
)
if err != nil {
return errors.Wrapf(err, "could not create light client update")
log.WithError(err).Error("Saving light client update failed: Could not create light client update")
return
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get current light client update")
log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
return
}
if oldUpdate == nil {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
return
}
isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
return
}
if isNewUpdateBetter {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
} else {
log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
// processLightClientBootstrap saves a light client bootstrap for this block
// saveLightClientBootstrap saves a light client bootstrap for this block
// when the feature flag is enabled.
func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error {
func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
blockRoot := cfg.roblock.Root()
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
if err != nil {
return errors.Wrapf(err, "could not create light client bootstrap")
log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
return
}
if err := s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap); err != nil {
return errors.Wrapf(err, "could not save light client bootstrap")
err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
}
return nil
}
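The recurring change in this file is mechanical: the save helpers are now invoked via defer from postBlockProcess, and a deferred call cannot return an error to anyone, so every failure path logs and falls through instead of propagating. A minimal sketch of the constraint, with illustrative names:
func postProcess() {
	// The deferred call's return values (if any) would be discarded,
	// so errors must be handled inside the deferred function itself.
	defer saveUpdate()
}
func saveUpdate() {
	if err := trySave(); err != nil { // trySave stands in for the DB write
		log.WithError(err).Error("Saving light client update failed")
	}
}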
func (s *Service) processLightClientFinalityUpdate(
@@ -306,7 +310,7 @@ func (s *Service) processLightClientFinalityUpdate(
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
if err = s.cfg.P2p.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client finality update")
}
@@ -359,7 +363,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
if err = s.cfg.P2p.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client optimistic update")
}

View File

@@ -13,7 +13,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
@@ -51,7 +50,7 @@ import (
)
func Test_pruneAttsFromPool_Electra(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
logHook := logTest.NewGlobal()
params.SetupTestConfigCleanup(t)
@@ -241,7 +240,7 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fcp2))
err = service.fillInForkChoiceMissingBlocks(
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
@@ -284,7 +283,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fcp2))
err = service.fillInForkChoiceMissingBlocks(
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
@@ -294,7 +293,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
for i, rt := range wantedRoots {
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(rt)), fmt.Sprintf("Didn't save node: %d", i))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(t.Context(), bytesutil.ToBytes32(rt)))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(context.Background(), bytesutil.ToBytes32(rt)))
}
}
@@ -340,7 +339,7 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
// Set finalized epoch to 2.
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 2, Root: r64}))
err = service.fillInForkChoiceMissingBlocks(
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// There should be 1 node: block 65
@@ -373,7 +372,7 @@ func TestFillForkChoiceMissingBlocks_FinalizedSibling(t *testing.T) {
require.NoError(t, err)
err = service.fillInForkChoiceMissingBlocks(
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.Equal(t, ErrNotDescendantOfFinalized.Error(), err.Error())
}
@@ -451,20 +450,20 @@ func blockTree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][]byt
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
wsb, err := consensusblocks.NewSignedBeaconBlock(beaconBlock)
require.NoError(t, err)
if err := beaconDB.SaveBlock(t.Context(), wsb); err != nil {
if err := beaconDB.SaveBlock(context.Background(), wsb); err != nil {
return nil, err
}
if err := beaconDB.SaveState(t.Context(), st.Copy(), bytesutil.ToBytes32(beaconBlock.Block.ParentRoot)); err != nil {
if err := beaconDB.SaveState(context.Background(), st.Copy(), bytesutil.ToBytes32(beaconBlock.Block.ParentRoot)); err != nil {
return nil, errors.Wrap(err, "could not save state")
}
}
if err := beaconDB.SaveState(t.Context(), st.Copy(), r1); err != nil {
if err := beaconDB.SaveState(context.Background(), st.Copy(), r1); err != nil {
return nil, err
}
if err := beaconDB.SaveState(t.Context(), st.Copy(), r7); err != nil {
if err := beaconDB.SaveState(context.Background(), st.Copy(), r7); err != nil {
return nil, err
}
if err := beaconDB.SaveState(t.Context(), st.Copy(), r8); err != nil {
if err := beaconDB.SaveState(context.Background(), st.Copy(), r8); err != nil {
return nil, err
}
return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
@@ -477,7 +476,7 @@ func TestCurrentSlot_HandlesOverflow(t *testing.T) {
require.Equal(t, primitives.Slot(0), slot, "Unexpected slot")
}
func TestAncestorByDB_CtxErr(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -510,18 +509,18 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
util.SaveBlock(t, t.Context(), beaconDB, beaconBlock)
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
}
// Slots 100 to 200 are skip slots. Requesting root at 150 will yield root at 100. The last physical block.
r, err := service.Ancestor(t.Context(), r200[:], 150)
r, err := service.Ancestor(context.Background(), r200[:], 150)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r100 {
t.Error("Did not get correct root")
}
// Slots 1 to 100 are skip slots. Requesting root at 50 will yield root at 1. The last physical block.
r, err = service.Ancestor(t.Context(), r200[:], 50)
r, err = service.Ancestor(context.Background(), r200[:], 50)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r1 {
t.Error("Did not get correct root")
@@ -529,7 +528,7 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
}
func TestAncestor_CanUseForkchoice(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -557,12 +556,12 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
st, blkRoot, err := prepareForkchoiceState(t.Context(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
}
r, err := service.Ancestor(t.Context(), r200[:], 150)
r, err := service.Ancestor(context.Background(), r200[:], 150)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r100 {
t.Error("Did not get correct root")
@@ -594,14 +593,14 @@ func TestAncestor_CanUseDB(t *testing.T) {
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
util.SaveBlock(t, t.Context(), beaconDB, beaconBlock)
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
}
st, blkRoot, err := prepareForkchoiceState(t.Context(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
r, err := service.Ancestor(t.Context(), r200[:], 150)
r, err := service.Ancestor(context.Background(), r200[:], 150)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r100 {
t.Error("Did not get correct root")
@@ -609,7 +608,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
}
func TestEnsureRootNotZeroHashes(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -623,7 +622,7 @@ func TestEnsureRootNotZeroHashes(t *testing.T) {
}
func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -922,7 +921,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: r[:]}))
@@ -935,7 +934,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
require.NoError(t, service.cfg.AttPool.SaveAggregatedAttestations(atts))
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
service.pruneAttsFromPool(t.Context(), nil /* state not needed pre-Electra */, wsb)
service.pruneAttsFromPool(context.Background(), nil /* state not needed pre-Electra */, wsb)
require.LogsDoNotContain(t, logHook, "Could not prune attestations")
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
@@ -2332,13 +2331,13 @@ func driftGenesisTime(s *Service, slot, delay int64) {
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(newTime.Unix()))
}
func TestMissingBlobIndices(t *testing.T) {
func TestMissingIndices(t *testing.T) {
cases := []struct {
name string
expected [][]byte
present []uint64
result map[uint64]struct{}
root [fieldparams.RootLength]byte
root [32]byte
err error
}{
{
@@ -2396,7 +2395,7 @@ func TestMissingBlobIndices(t *testing.T) {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
missing, err := missingBlobIndices(bs, c.root, c.expected, 0)
missing, err := missingIndices(bs, c.root, c.expected, 0)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
@@ -2404,70 +2403,9 @@ func TestMissingBlobIndices(t *testing.T) {
require.NoError(t, err)
require.Equal(t, len(c.result), len(missing))
for key := range c.result {
require.Equal(t, true, missing[key])
}
})
}
}
func TestMissingDataColumnIndices(t *testing.T) {
countPlusOne := params.BeaconConfig().NumberOfColumns + 1
tooManyColumns := make(map[uint64]bool, countPlusOne)
for i := range countPlusOne {
tooManyColumns[uint64(i)] = true
}
testCases := []struct {
name string
storedIndices []uint64
input map[uint64]bool
expected map[uint64]bool
err error
}{
{
name: "zero len expected",
input: map[uint64]bool{},
},
{
name: "expected exceeds max",
input: tooManyColumns,
err: errMaxDataColumnsExceeded,
},
{
name: "all missing",
storedIndices: []uint64{},
input: map[uint64]bool{0: true, 1: true, 2: true},
expected: map[uint64]bool{0: true, 1: true, 2: true},
},
{
name: "none missing",
input: map[uint64]bool{0: true, 1: true, 2: true},
expected: map[uint64]bool{},
storedIndices: []uint64{0, 1, 2, 3, 4}, // Extra columns stored but not expected
},
{
name: "some missing",
storedIndices: []uint64{0, 20},
input: map[uint64]bool{0: true, 10: true, 20: true, 30: true},
expected: map[uint64]bool{10: true, 30: true},
},
}
var emptyRoot [fieldparams.RootLength]byte
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
dcm, dcs := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
err := dcm.CreateFakeIndices(emptyRoot, 0, tc.storedIndices...)
require.NoError(t, err)
// Test the function
actual, err := missingDataColumnIndices(dcs, emptyRoot, tc.input)
require.ErrorIs(t, err, tc.err)
require.Equal(t, len(tc.expected), len(actual))
for key := range tc.expected {
require.Equal(t, true, actual[key])
m, ok := missing[key]
require.Equal(t, true, ok)
require.Equal(t, c.result[key], m)
}
})
}
@@ -2667,7 +2605,7 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.Equal(t, true, hasState)
// Set deadlined context when processing the block
cancCtx, canc := context.WithCancel(t.Context())
cancCtx, canc := context.WithCancel(context.Background())
canc()
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
@@ -2706,7 +2644,7 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
return r
}
func TestProcessLightClientUpdate(t *testing.T) {
func TestSaveLightClientUpdate(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
@@ -2747,7 +2685,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2796,13 +2734,13 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2848,7 +2786,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -2863,7 +2801,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2906,7 +2844,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2954,13 +2892,13 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3006,7 +2944,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -3021,7 +2959,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3064,7 +3002,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -3112,13 +3050,13 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3164,7 +3102,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -3179,7 +3117,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3192,7 +3130,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
reset()
}
func TestProcessLightClientBootstrap(t *testing.T) {
func TestSaveLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
@@ -3222,7 +3160,7 @@ func TestProcessLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientBootstrap(cfg))
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3257,7 +3195,7 @@ func TestProcessLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientBootstrap(cfg))
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3292,7 +3230,7 @@ func TestProcessLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}
require.NoError(t, s.processLightClientBootstrap(cfg))
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3308,235 +3246,6 @@ func TestProcessLightClientBootstrap(t *testing.T) {
reset()
}
type testIsAvailableParams struct {
options []Option
blobKzgCommitmentsCount uint64
columnsToSave []uint64
}
func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
ctx, cancel := context.WithCancel(t.Context())
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
options := append(params.options, WithDataColumnStorage(dataColumnStorage))
service, _ := minimalTestService(t, options...)
genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32 /*validator count*/)
err := service.saveGenesisData(ctx, genesisState)
require.NoError(t, err)
conf := util.DefaultBlockGenConfig()
conf.NumBlobKzgCommitments = params.blobKzgCommitmentsCount
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
require.NoError(t, err)
block := signedBeaconBlock.Block
bodyRoot, err := block.Body.HashTreeRoot()
require.NoError(t, err)
root, err := block.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParam, 0, len(params.columnsToSave))
for _, i := range params.columnsToSave {
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: block.Slot,
ProposerIndex: block.ProposerIndex,
ParentRoot: block.ParentRoot,
StateRoot: block.StateRoot,
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParam)
}
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
err = dataColumnStorage.Save(verifiedRODataColumns)
require.NoError(t, err)
signed, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
return ctx, cancel, service, root, signed
}
func TestIsDataAvailable(t *testing.T) {
t.Run("Fulu - out of retention window", func(t *testing.T) {
params := testIsAvailableParams{options: []Option{WithGenesisTime(time.Unix(0, 0))}}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - no commitment in blocks", func(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, testIsAvailableParams{})
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
for i := range minimumColumnsCountToReconstruct {
indices = append(indices, i)
}
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - no missing data columns", func(t *testing.T) {
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: []uint64{1, 17, 19, 42, 75, 87, 102, 117, 119}, // 119 is not needed
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)
err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - some initially missing data columns (no reconstruction)", func(t *testing.T) {
startWaiting := make(chan bool)
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
_, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{
{Index: 42, Slot: slot + 1}, // Needed index, but not for this slot.
})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 87, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
{Index: 1, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Not needed index
{Index: 42, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
})
go func() {
<-startWaiting
err := service.dataColumnStorage.Save(verifiedSidecarsWrongRoot)
require.NoError(t, err)
err = service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
}()
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - some initially missing data columns (reconstruction)", func(t *testing.T) {
const (
missingColumns = uint64(2)
cgc = 128
)
startWaiting := make(chan bool)
var custodyInfo peerdas.CustodyInfo
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)
for i := range minimumColumnsCountToReconstruct - missingColumns {
indices = append(indices, i)
}
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&custodyInfo), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
dataColumnParams := make([]util.DataColumnParam, 0, missingColumns)
for i := minimumColumnsCountToReconstruct - missingColumns; i < minimumColumnsCountToReconstruct; i++ {
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnParams = append(dataColumnParams, dataColumnParam)
}
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParams)
go func() {
<-startWaiting
err := service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
}()
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - some columns are definitively missing", func(t *testing.T) {
startWaiting := make(chan bool)
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
blobKzgCommitmentsCount: 3,
}
ctx, cancel, service, root, signed := testIsAvailableSetup(t, params)
go func() {
<-startWaiting
cancel()
}()
err := service.isDataAvailable(ctx, root, signed)
require.NotNil(t, err)
})
}
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
var l *util.TestLightClient
switch v {
@@ -3601,7 +3310,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
params.OverrideBeaconConfig(beaconCfg)
s, tr := minimalTestService(t)
s.cfg.P2P = &mockp2p.FakeP2P{}
s.cfg.P2p = &mockp2p.FakeP2P{}
ctx := tr.ctx
testCases := []struct {
@@ -3647,7 +3356,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
expectedVersion = version.Bellatrix
expectedVersion = version.Altair
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella
@@ -3656,7 +3365,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Deneb
case 5:
forkEpoch = uint64(params.BeaconConfig().ElectraForkEpoch)
expectedVersion = version.Electra
expectedVersion = version.Deneb
default:
t.Errorf("Unsupported fork version %s", version.String(testVersion))
}
@@ -3737,7 +3446,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
params.OverrideBeaconConfig(beaconCfg)
s, tr := minimalTestService(t)
s.cfg.P2P = &mockp2p.FakeP2P{}
s.cfg.P2p = &mockp2p.FakeP2P{}
ctx := tr.ctx
testCases := []struct {
@@ -3801,7 +3510,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
expectedVersion = version.Bellatrix
expectedVersion = version.Altair
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
"time"
@@ -31,7 +32,7 @@ func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
service.genesisTime = time.Now()
e := primitives.Epoch(slots.MaxSlotBuffer/uint64(params.BeaconConfig().SlotsPerEpoch) + 1)
_, err := service.AttestationTargetState(t.Context(), &ethpb.Checkpoint{Epoch: e})
_, err := service.AttestationTargetState(context.Background(), &ethpb.Checkpoint{Epoch: e})
require.ErrorContains(t, "exceeds max allowed value relative to the local clock", err)
}
@@ -55,11 +56,11 @@ func TestVerifyLMDFFGConsistent(t *testing.T) {
a.Data.Target.Root = []byte{'c'}
r33Root := r33.Root()
a.Data.BeaconBlockRoot = r33Root[:]
require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(t.Context(), a))
require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(context.Background(), a))
r32Root := r32.Root()
a.Data.Target.Root = r32Root[:]
err = service.VerifyLmdFfgConsistency(t.Context(), a)
err = service.VerifyLmdFfgConsistency(context.Background(), a)
require.NoError(t, err, "Could not verify LMD and FFG votes to be consistent")
}

View File

@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/slasher/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -53,13 +52,6 @@ type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(blocks.VerifiedRODataColumn) error
ReceiveDataColumns([]blocks.VerifiedRODataColumn) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -82,7 +74,6 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
return nil
}
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
defer s.blockBeingSynced.unset(blockRoot)
@@ -91,7 +82,6 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
@@ -107,12 +97,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -239,34 +227,26 @@ func (s *Service) validateExecutionAndConsensus(
func (s *Service) handleDA(
ctx context.Context,
block interfaces.SignedBeaconBlock,
blockRoot [fieldparams.RootLength]byte,
blockRoot [32]byte,
avs das.AvailabilityStore,
) (elapsed time.Duration, err error) {
defer func(start time.Time) {
elapsed = time.Since(start)
if err == nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
) (time.Duration, error) {
daStartTime := time.Now()
if avs != nil {
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return 0, err
}
}(time.Now())
if avs == nil {
if err = s.isDataAvailable(ctx, blockRoot, block); err != nil {
return
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability")
}
return
}
var rob blocks.ROBlock
rob, err = blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return
}
err = avs.IsDataAvailable(ctx, s.CurrentSlot(), rob)
return
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
return daWaitedTime, nil
}
func (s *Service) reportPostBlockProcessing(
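
The handleDA hunk above trades between two timing styles: an explicit start/elapsed pair and a named return value measured in a defer. A minimal, self-contained sketch of the defer variant; observe is a hypothetical stand-in for the Prometheus histogram call:

package main

import (
	"fmt"
	"time"
)

// observe is a hypothetical stand-in for a metrics call such as a
// Prometheus histogram's Observe.
func observe(ms float64) { fmt.Printf("waited %.2f ms\n", ms) }

// doWork measures itself the way the named-return variant of handleDA
// does: the deferred closure runs on every return path, sets the named
// return value elapsed, and records the metric only when err is nil.
func doWork() (elapsed time.Duration, err error) {
	defer func(start time.Time) {
		elapsed = time.Since(start)
		if err == nil {
			observe(float64(elapsed.Milliseconds()))
		}
	}(time.Now())

	time.Sleep(10 * time.Millisecond) // stand-in for the availability checks
	return
}

func main() {
	d, err := doWork()
	if err == nil {
		fmt.Println("elapsed:", d)
	}
}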

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"sync"
"testing"
"time"
@@ -27,7 +28,7 @@ import (
)
func TestService_ReceiveBlock(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
genesis, keys := util.DeterministicGenesisState(t, 64)
copiedGen := genesis.Copy()
@@ -179,19 +180,6 @@ func TestService_ReceiveBlock(t *testing.T) {
}
wg.Wait()
}
func TestHandleDA(t *testing.T) {
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{},
},
})
require.NoError(t, err)
s, _ := minimalTestService(t)
elapsed, err := s.handleDA(t.Context(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
require.NoError(t, err)
require.Equal(t, true, elapsed > 0, "Elapsed time should be greater than 0")
}
func TestService_ReceiveBlockUpdateHead(t *testing.T) {
s, tr := minimalTestService(t,
@@ -227,7 +215,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
}
func TestService_ReceiveBlockBatch(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
genesis, keys := util.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *util.BlockGenConfig, slot primitives.Slot) *ethpb.SignedBeaconBlock {
@@ -292,23 +280,23 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
func TestService_HasBlock(t *testing.T) {
s, _ := minimalTestService(t)
r := [32]byte{'a'}
if s.HasBlock(t.Context(), r) {
if s.HasBlock(context.Background(), r) {
t.Error("Should not have block")
}
wsb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
require.NoError(t, s.saveInitSyncBlock(t.Context(), r, wsb))
if !s.HasBlock(t.Context(), r) {
require.NoError(t, s.saveInitSyncBlock(context.Background(), r, wsb))
if !s.HasBlock(context.Background(), r) {
t.Error("Should have block")
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
util.SaveBlock(t, t.Context(), s.cfg.BeaconDB, b)
util.SaveBlock(t, context.Background(), s.cfg.BeaconDB, b)
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, s.HasBlock(t.Context(), r))
require.Equal(t, true, s.HasBlock(context.Background(), r))
s.blockBeingSynced.set(r)
require.Equal(t, false, s.HasBlock(t.Context(), r))
require.Equal(t, false, s.HasBlock(context.Background(), r))
}
func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
@@ -317,7 +305,7 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
assert.LogsContain(t, hook, "Entering mode to save hot states in DB")
}
@@ -328,10 +316,10 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
s.genesisTime = time.Now()
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
assert.LogsContain(t, hook, "Exiting mode to save hot states in DB")
}
@@ -340,7 +328,7 @@ func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
s, _ := minimalTestService(t)
s.genesisTime = time.Now()
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
assert.LogsDoNotContain(t, hook, "Entering mode to save hot states in DB")
}
@@ -455,7 +443,7 @@ func Test_executePostFinalizationTasks(t *testing.T) {
headState, err := util.NewBeaconStateElectra()
require.NoError(t, err)
finalizedStRoot, err := headState.HashTreeRoot(t.Context())
finalizedStRoot, err := headState.HashTreeRoot(context.Background())
require.NoError(t, err)
genesis := util.NewBeaconBlock()

View File

@@ -1,25 +0,0 @@
package blockchain
import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/pkg/errors"
)
// ReceiveDataColumns receives a batch of data columns.
func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
if err := s.dataColumnStorage.Save(dataColumnSidecars); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
// ReceiveDataColumn receives a single data column.
// (It is only a wrapper around ReceiveDataColumns.)
func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
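
The deleted receivers above are thin wrappers whose only job is to delegate to storage and add error context. A minimal sketch of that wrap-and-return shape, using github.com/pkg/errors as the file does; save is a hypothetical stand-in for dataColumnStorage.Save:

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// save is a hypothetical stand-in for dataColumnStorage.Save.
func save(items []string) error {
	if len(items) == 0 {
		return errors.New("nothing to save")
	}
	return nil
}

// receive mirrors the deleted ReceiveDataColumns: delegate to storage
// and wrap any failure with a short context string for callers' logs.
func receive(items []string) error {
	if err := save(items); err != nil {
		return errors.Wrap(err, "save data column sidecars")
	}
	return nil
}

func main() {
	fmt.Println(receive(nil)) // save data column sidecars: nothing to save
}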

View File

@@ -16,7 +16,6 @@ import (
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -47,28 +46,26 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
lcStore *lightClient.Store
startWaitingDataColumnSidecars chan bool // for testing purposes only
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
slasherEnabled bool
lcStore *lightClient.Store
}
// config options for the service.
@@ -84,7 +81,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2P p2p.Accessor
P2p p2p.Broadcaster
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -96,7 +93,6 @@ type config struct {
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
CustodyInfo *peerdas.CustodyInfo
}
// Checker is an interface used to determine if a node is in initial sync
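
The config field change above swaps between a plain Broadcaster and a wider Accessor. A minimal sketch of the interface-composition pattern this implies; the interface and method names here are illustrative, not Prysm's actual p2p definitions:

package main

import "fmt"

// Illustrative interfaces: the pattern is widening a narrow
// Broadcaster into an Accessor that also exposes peer management.
type Broadcaster interface {
	Broadcast(msg []byte) error
}

type PeerManager interface {
	PeerCount() int
}

type Accessor interface {
	Broadcaster
	PeerManager
}

type mockBroadcaster struct{ called bool }

func (m *mockBroadcaster) Broadcast(_ []byte) error { m.called = true; return nil }

type mockPeerManager struct{}

func (mockPeerManager) PeerCount() int { return 0 }

// Embedding both mocks satisfies the wider interface, the same trick
// the mockAccessor in the test helpers further down uses.
type mockAccessor struct {
	mockBroadcaster
	mockPeerManager
}

func main() {
	var a Accessor = &mockAccessor{}
	_ = a.Broadcast(nil)
	fmt.Println(a.PeerCount())
}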

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"io"
"testing"
@@ -25,7 +26,7 @@ func TestChainService_SaveHead_DataRace(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, err)
go func() {
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}()
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
}

View File

@@ -42,7 +42,7 @@ import (
)
func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
ctx := t.Context()
ctx := context.Background()
var web3Service *execution.Service
var err error
srv, endpoint, err := mockExecution.SetupRPCServer()
@@ -97,7 +97,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithP2PBroadcaster(&mockAccessor{}),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
@@ -115,7 +115,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
func TestChainStartStop_Initialized(t *testing.T) {
hook := logTest.NewGlobal()
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
@@ -152,7 +152,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
hook := logTest.NewGlobal()
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
@@ -184,7 +184,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
func TestChainService_InitializeBeaconChain(t *testing.T) {
helpers.ClearCache()
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
bc := setupBeaconChain(t, beaconDB)
var err error
@@ -226,7 +226,7 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
}
func TestChainService_CorrectGenesisRoots(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
@@ -295,7 +295,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, c.HeadSlot(), headBlock.Block.Slot, "Head slot incorrect")
r, err := c.HeadRoot(t.Context())
r, err := c.HeadRoot(context.Background())
require.NoError(t, err)
if !bytes.Equal(headRoot[:], r) {
t.Error("head slot incorrect")
@@ -346,7 +346,7 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
ctx := context.Background()
fc := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
@@ -370,7 +370,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
}
func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
@@ -391,7 +391,7 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
}
func TestServiceStop_SaveCachedBlocks(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
ctx, cancel := context.WithCancel(context.Background())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
@@ -410,13 +410,13 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
}
func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
mgs := &MockClockSetter{}
service.clockSetter = mgs
gt := time.Now()
service.onExecutionChainStart(t.Context(), gt)
service.onExecutionChainStart(context.Background(), gt)
gs, err := beaconDB.GenesisState(ctx)
require.NoError(t, err)
require.NotEqual(t, nil, gs)
@@ -429,7 +429,7 @@ func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
func BenchmarkHasBlockDB(b *testing.B) {
beaconDB := testDB.SetupDB(b)
ctx := b.Context()
ctx := context.Background()
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
@@ -447,7 +447,7 @@ func BenchmarkHasBlockDB(b *testing.B) {
}
func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := b.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},

View File

@@ -20,10 +20,8 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -49,11 +47,6 @@ type mockBroadcaster struct {
broadcastCalled bool
}
type mockAccessor struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -84,11 +77,6 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar, _ ...chan<- bool) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}
@@ -108,7 +96,7 @@ type testServiceRequirements struct {
}
func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceRequirements) {
ctx := t.Context()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
sg := stategen.New(beaconDB, fcs)
@@ -144,10 +132,8 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
WithSyncChecker(mock.MockChecker{}),
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
WithP2PBroadcaster(&mockAccessor{}),
WithLightClientStore(&lightclient.Store{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
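
The comment above describes the functional-options idiom minimalTestService relies on: defaults are applied first and caller-supplied options afterwards, so the last writer wins. A minimal sketch with illustrative names:

package main

import "fmt"

type service struct{ name string }

type option func(*service)

func withName(n string) option {
	return func(s *service) { s.name = n }
}

func newService(opts ...option) *service {
	defaults := []option{withName("default")}
	s := &service{}
	// Append the variadic opts so they override the defaults by being
	// processed afterwards.
	for _, o := range append(defaults, opts...) {
		o(s)
	}
	return s
}

func main() {
	fmt.Println(newService().name)                   // default
	fmt.Println(newService(withName("custom")).name) // custom
}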

View File

@@ -75,7 +75,6 @@ type ChainService struct {
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
DataColumns []blocks.VerifiedRODataColumn
TargetRoot [32]byte
MockHeadSlot *primitives.Slot
}
@@ -716,17 +715,6 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}
// ReceiveDataColumn implements the same method in chain service
func (c *ChainService) ReceiveDataColumn(dc blocks.VerifiedRODataColumn) error {
c.DataColumns = append(c.DataColumns, dc)
return nil
}
// ReceiveDataColumns implements the same method in chain service
func (*ChainService) ReceiveDataColumns(_ []blocks.VerifiedRODataColumn) error {
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

View File

@@ -1,6 +1,7 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -21,7 +22,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Slot = 1792480
util.SaveBlock(t, t.Context(), beaconDB, b)
util.SaveBlock(t, context.Background(), beaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -78,7 +79,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
}
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
err = s.wsVerifier.VerifyWeakSubjectivity(t.Context(), cp.Epoch)
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), cp.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)
} else {

View File

@@ -1,6 +1,7 @@
package builder
import (
"context"
"testing"
"time"
@@ -14,19 +15,19 @@ import (
)
func Test_NewServiceWithBuilder(t *testing.T) {
s, err := NewService(t.Context(), WithBuilderClient(&buildertesting.MockClient{}))
s, err := NewService(context.Background(), WithBuilderClient(&buildertesting.MockClient{}))
require.NoError(t, err)
assert.Equal(t, true, s.Configured())
}
func Test_NewServiceWithoutBuilder(t *testing.T) {
s, err := NewService(t.Context())
s, err := NewService(context.Background())
require.NoError(t, err)
assert.Equal(t, false, s.Configured())
}
func Test_RegisterValidator(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
db := dbtesting.SetupDB(t)
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
@@ -39,7 +40,7 @@ func Test_RegisterValidator(t *testing.T) {
}
func Test_RegisterValidator_WithCache(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
s, err := NewService(ctx, WithRegistrationCache(), WithHeadFetcher(headFetcher), WithBuilderClient(&builder))
@@ -54,16 +55,16 @@ func Test_RegisterValidator_WithCache(t *testing.T) {
}
func Test_BuilderMethodsWithouClient(t *testing.T) {
s, err := NewService(t.Context())
s, err := NewService(context.Background())
require.NoError(t, err)
assert.Equal(t, false, s.Configured())
_, err = s.GetHeader(t.Context(), 0, [32]byte{}, [48]byte{})
_, err = s.GetHeader(context.Background(), 0, [32]byte{}, [48]byte{})
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
_, _, err = s.SubmitBlindedBlock(t.Context(), nil)
_, _, err = s.SubmitBlindedBlock(context.Background(), nil)
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
err = s.RegisterValidator(t.Context(), nil)
err = s.RegisterValidator(context.Background(), nil)
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
}

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
mathutil "github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -105,16 +104,11 @@ func (c *CommitteeCache) CompressCommitteeCache() {
// Committee fetches the shuffled indices by slot and committee index. Every list of indices
// represents one committee. Returns true if the list exists with slot and committee index. Otherwise returns false, nil.
func (c *CommitteeCache) Committee(ctx context.Context, slot primitives.Slot, seed [32]byte, index primitives.CommitteeIndex) ([]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "committeeCache.Committee")
defer span.End()
span.SetAttributes(trace.Int64Attribute("slot", int64(slot)), trace.Int64Attribute("index", int64(index))) // lint:ignore uintcast -- OK for tracing.
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
span.SetAttributes(trace.BoolAttribute("cache_hit", exists))
if exists {
CommitteeCacheHit.Inc()
} else {
@@ -163,14 +157,11 @@ func (c *CommitteeCache) AddCommitteeShuffledList(ctx context.Context, committee
// ActiveIndices returns the active indices of a given seed stored in cache.
func (c *CommitteeCache) ActiveIndices(ctx context.Context, seed [32]byte) ([]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "committeeCache.ActiveIndices")
defer span.End()
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
span.SetAttributes(trace.BoolAttribute("cache_hit", exists))
if exists {
CommitteeCacheHit.Inc()
} else {
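
The cache hunk above adds or removes span instrumentation around the lookup. A minimal sketch of the pattern; span and StartSpan are stubs that mirror the shape of the trace.StartSpan call shown above but are purely illustrative, not Prysm's tracing package:

package main

import (
	"context"
	"fmt"
)

type span struct{ name string }

func StartSpan(ctx context.Context, name string) (context.Context, *span) {
	return ctx, &span{name: name}
}

func (s *span) SetAttribute(key string, value any) { fmt.Printf("%s: %s=%v\n", s.name, key, value) }

func (s *span) End() { fmt.Printf("%s: end\n", s.name) }

var committees = map[string][]int{"seed": {1, 2, 3}}

// lookup opens a span, records whether the cache hit as an attribute,
// and always closes the span via defer, as the instrumented version of
// Committee does.
func lookup(ctx context.Context, seed string) []int {
	_, sp := StartSpan(ctx, "committeeCache.Committee")
	defer sp.End()

	indices, hit := committees[seed]
	sp.SetAttribute("cache_hit", hit)
	return indices
}

func main() {
	fmt.Println(lookup(context.Background(), "seed"))
}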

View File

@@ -3,6 +3,7 @@
package cache
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -29,8 +30,8 @@ func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))
_, err := cache.Committee(t.Context(), 0, c.Seed, 0)
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
_, err := cache.Committee(context.Background(), 0, c.Seed, 0)
require.NoError(t, err)
}
@@ -44,9 +45,9 @@ func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
indices, err := cache.ActiveIndices(t.Context(), c.Seed)
indices, err := cache.ActiveIndices(context.Background(), c.Seed)
require.NoError(t, err)
assert.DeepEqual(t, c.SortedIndices, indices)
}

View File

@@ -44,15 +44,15 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
slot := params.BeaconConfig().SlotsPerEpoch
committeeIndex := primitives.CommitteeIndex(1)
indices, err := cache.Committee(t.Context(), slot, item.Seed, committeeIndex)
indices, err := cache.Committee(context.Background(), slot, item.Seed, committeeIndex)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
wantedIndex := primitives.CommitteeIndex(0)
indices, err = cache.Committee(t.Context(), slot, item.Seed, wantedIndex)
indices, err = cache.Committee(context.Background(), slot, item.Seed, wantedIndex)
require.NoError(t, err)
start, end := startEndIndices(item, uint64(wantedIndex))
@@ -63,15 +63,15 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
indices, err := cache.ActiveIndices(t.Context(), item.Seed)
indices, err := cache.ActiveIndices(context.Background(), item.Seed)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
indices, err = cache.ActiveIndices(t.Context(), item.Seed)
indices, err = cache.ActiveIndices(context.Background(), item.Seed)
require.NoError(t, err)
assert.DeepEqual(t, item.SortedIndices, indices)
}
@@ -80,13 +80,13 @@ func TestCommitteeCache_ActiveCount(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
count, err := cache.ActiveIndicesCount(t.Context(), item.Seed)
count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
count, err = cache.ActiveIndicesCount(t.Context(), item.Seed)
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, len(item.SortedIndices), count)
}
@@ -100,7 +100,7 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
for i := start; i < end; i++ {
s := []byte(strconv.Itoa(i))
item := &Committees{Seed: bytesutil.ToBytes32(s)}
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
}
k := cache.CommitteeCache.Keys()
@@ -130,7 +130,7 @@ func TestCommitteeCacheOutOfRange(t *testing.T) {
assert.NoError(t, err)
_ = cache.CommitteeCache.Add(key, comms)
_, err = cache.Committee(t.Context(), 0, seed, math.MaxUint64) // Overflow!
_, err = cache.Committee(context.Background(), 0, seed, math.MaxUint64) // Overflow!
require.NotNil(t, err, "Did not fail as expected")
}
@@ -138,15 +138,15 @@ func TestCommitteeCache_DoesNothingWhenCancelledContext(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
count, err := cache.ActiveIndicesCount(t.Context(), item.Seed)
count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
cancelled, cancel := context.WithCancel(t.Context())
cancelled, cancel := context.WithCancel(context.Background())
cancel()
require.ErrorIs(t, cache.AddCommitteeShuffledList(cancelled, item), context.Canceled)
count, err = cache.ActiveIndicesCount(t.Context(), item.Seed)
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count)
}
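
TestCommitteeCacheOutOfRange above passes math.MaxUint64 as a committee index to make sure unchecked index arithmetic fails loudly rather than wrapping around. A minimal sketch of the kind of guard that test implies; mulSafe is illustrative, not Prysm's mathutil API:

package main

import (
	"errors"
	"fmt"
	"math"
)

// mulSafe rejects products that would wrap around uint64 instead of
// silently returning a bogus committee offset.
func mulSafe(a, b uint64) (uint64, error) {
	if a != 0 && b > math.MaxUint64/a {
		return 0, errors.New("uint64 overflow")
	}
	return a * b, nil
}

func main() {
	if _, err := mulSafe(math.MaxUint64, 2); err != nil {
		fmt.Println("caught:", err)
	}
	v, _ := mulSafe(3, 4)
	fmt.Println(v) // 12
}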

View File

@@ -2,6 +2,7 @@ package depositsnapshot
import (
"bytes"
"context"
"fmt"
"math/big"
"testing"
@@ -54,7 +55,7 @@ func TestAllDeposits_ReturnsAllDeposits(t *testing.T) {
}
dc.deposits = deposits
d := dc.AllDeposits(t.Context(), nil)
d := dc.AllDeposits(context.Background(), nil)
assert.Equal(t, len(deposits), len(d))
}
@@ -94,7 +95,7 @@ func TestAllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testing.T) {
}
dc.deposits = deposits
d := dc.AllDeposits(t.Context(), big.NewInt(11))
d := dc.AllDeposits(context.Background(), big.NewInt(11))
assert.Equal(t, 5, len(d))
}
@@ -126,7 +127,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(13))
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(13))
assert.Equal(t, 4, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -142,7 +143,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
assert.Equal(t, 1, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -168,7 +169,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
Deposit: &ethpb.Deposit{},
},
}
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
assert.Equal(t, 2, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -184,7 +185,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(7))
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(7))
assert.Equal(t, 0, int(n))
require.DeepEqual(t, params.BeaconConfig().ZeroHash, root)
})
@@ -200,7 +201,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
assert.Equal(t, 1, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -236,7 +237,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
Deposit: &ethpb.Deposit{},
},
}
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(9))
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(9))
assert.Equal(t, 3, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -287,10 +288,10 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
}
dc.InsertDepositContainers(t.Context(), ctrs)
dc.InsertDepositContainers(context.Background(), ctrs)
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, blkNum := dc.DepositByPubkey(t.Context(), pk1)
dep, blkNum := dc.DepositByPubkey(context.Background(), pk1)
if dep == nil || !bytes.Equal(dep.Data.PublicKey, pk1) {
t.Error("Returned wrong deposit")
@@ -302,7 +303,7 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
func TestInsertDepositContainers_NotNil(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.InsertDepositContainers(t.Context(), nil)
dc.InsertDepositContainers(context.Background(), nil)
assert.DeepEqual(t, []*ethpb.DepositContainer{}, dc.deposits)
}
@@ -358,10 +359,10 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
err = dc.finalizedDeposits.depositTree.pushLeaf(root)
require.NoError(t, err)
}
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex())
@@ -424,15 +425,15 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
err = dc.finalizedDeposits.Deposits().Insert(root[:], 0)
require.NoError(t, err)
}
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
require.NoError(t, err)
dc.deposits = append(dc.deposits, []*ethpb.DepositContainer{newFinalizedDeposit}...)
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
require.Equal(t, int64(1), cachedDeposits.MerkleTrieIndex())
@@ -458,10 +459,10 @@ func TestFinalizedDeposits_HandleZeroDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(-1), cachedDeposits.MerkleTrieIndex())
@@ -508,10 +509,10 @@ func TestFinalizedDeposits_HandleSmallerThanExpectedDeposits(t *testing.T) {
}
dc.deposits = finalizedDeposits
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex())
@@ -591,14 +592,14 @@ func TestFinalizedDeposits_HandleLowerEth1DepositIndex(t *testing.T) {
}
dc.deposits = finalizedDeposits
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
require.NoError(t, err)
// Reinsert finalized deposits with a lower index.
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(5), cachedDeposits.MerkleTrieIndex())
@@ -669,10 +670,10 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
Index: 3,
DepositRoot: rootCreator('D'),
})
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(t.Context(), 1, nil)
deps := dc.NonFinalizedDeposits(context.Background(), 1, nil)
assert.Equal(t, 2, len(deps))
}
@@ -680,7 +681,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits_Nil(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(t.Context(), 0, nil)
deps := dc.NonFinalizedDeposits(context.Background(), 0, nil)
assert.Equal(t, 0, len(deps))
}
@@ -739,10 +740,10 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
Index: 3,
DepositRoot: rootCreator('D'),
})
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(t.Context(), 1, big.NewInt(10))
deps := dc.NonFinalizedDeposits(context.Background(), 1, big.NewInt(10))
assert.Equal(t, 1, len(deps))
}
@@ -784,21 +785,21 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
assert.NoError(t, err)
// Perform this in a nonsensical ordering
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 3, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 3, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 4, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 4, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
require.NoError(t, err)
// Mimic finalized deposit trie fetch.
fd, err := dc.FinalizedDeposits(t.Context())
fd, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), nil)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)
insertIndex := fd.MerkleTrieIndex() + 1
for _, dep := range deps {
@@ -809,24 +810,24 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
}
insertIndex++
}
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 6, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 6, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 9, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 9, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 12, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 12, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 15, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 15, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 15, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 15, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(t.Context(), 14, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(context.Background(), 14, [32]byte{}, 0)
require.NoError(t, err)
fd, err = dc.FinalizedDeposits(t.Context())
fd, err = dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
deps = dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), nil)
deps = dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)
insertIndex = fd.MerkleTrieIndex() + 1
for _, dep := range dc.deposits {
@@ -887,9 +888,9 @@ func TestMin(t *testing.T) {
}
dc.deposits = finalizedDeposits
fd, err := dc.FinalizedDeposits(t.Context())
fd, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), big.NewInt(16))
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), big.NewInt(16))
insertIndex := fd.MerkleTrieIndex() + 1
for _, dep := range deps {
depHash, err := dep.Data.HashTreeRoot()
@@ -907,28 +908,28 @@ func TestDepositMap_WorksCorrectly(t *testing.T) {
require.NoError(t, err)
pk0 := bytesutil.PadTo([]byte("pk0"), 48)
dep, _ := dc.DepositByPubkey(t.Context(), pk0)
dep, _ := dc.DepositByPubkey(context.Background(), pk0)
var nilDep *ethpb.Deposit
assert.DeepEqual(t, nilDep, dep)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 1000}}
assert.NoError(t, dc.InsertDeposit(t.Context(), dep, 1000, 0, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 0, [32]byte{}))
dep, _ = dc.DepositByPubkey(t.Context(), pk0)
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 10000}}
assert.NoError(t, dc.InsertDeposit(t.Context(), dep, 1000, 1, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 1, [32]byte{}))
// Make sure the same deposit is returned here.
dep, _ = dc.DepositByPubkey(t.Context(), pk0)
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
// Make sure another key doesn't work.
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, _ = dc.DepositByPubkey(t.Context(), pk1)
dep, _ = dc.DepositByPubkey(context.Background(), pk1)
assert.DeepEqual(t, nilDep, dep)
}
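
TestFinalizedDeposits_HandleLowerEth1DepositIndex above inserts finalized deposits at index 5 and then at index 2, and still expects the cached index to read 5. A minimal sketch of that forward-only behavior, inferred from the test's expectations rather than taken from Prysm's implementation:

package main

import "fmt"

type finalizedCache struct{ index int64 }

// insertFinalized only ever moves the finalized index forward; a
// lower or duplicate index is a no-op.
func (c *finalizedCache) insertFinalized(index int64) {
	if index <= c.index {
		return
	}
	c.index = index
}

func main() {
	c := &finalizedCache{index: -1}
	for _, i := range []int64{1, 2, 4, 4, 3} {
		c.insertFinalized(i)
	}
	fmt.Println(c.index) // 4
}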

View File

@@ -1,6 +1,7 @@
package depositsnapshot
import (
"context"
"math/big"
"testing"
@@ -12,14 +13,14 @@ var _ PendingDepositsFetcher = (*Cache)(nil)
func TestInsertPendingDeposit_OK(t *testing.T) {
dc := Cache{}
dc.InsertPendingDeposit(t.Context(), &ethpb.Deposit{}, 111, 100, [32]byte{})
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, 111, 100, [32]byte{})
assert.Equal(t, 1, len(dc.pendingDeposits), "deposit not inserted")
}
func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
dc := Cache{}
dc.InsertPendingDeposit(t.Context(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
assert.Equal(t, 0, len(dc.pendingDeposits))
}
@@ -33,13 +34,13 @@ func TestPendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
}
deposits := dc.PendingDeposits(t.Context(), big.NewInt(4))
deposits := dc.PendingDeposits(context.Background(), big.NewInt(4))
expected := []*ethpb.Deposit{
{Proof: [][]byte{[]byte("A")}},
{Proof: [][]byte{[]byte("B")}},
}
assert.DeepSSZEqual(t, expected, deposits)
all := dc.PendingDeposits(t.Context(), nil)
all := dc.PendingDeposits(context.Background(), nil)
assert.Equal(t, len(dc.pendingDeposits), len(all), "PendingDeposits(ctx, nil) did not return all deposits")
}

View File

@@ -1,6 +1,7 @@
package depositsnapshot
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -21,7 +22,7 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(t.Context(), 0)
dc.PrunePendingDeposits(context.Background(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
@@ -45,7 +46,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(t.Context(), 6)
dc.PrunePendingDeposits(context.Background(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
@@ -64,7 +65,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(t.Context(), 10)
dc.PrunePendingDeposits(context.Background(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
@@ -85,7 +86,7 @@ func TestPruneAllPendingDeposits(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PruneAllPendingDeposits(t.Context())
dc.PruneAllPendingDeposits(context.Background())
expected := []*ethpb.DepositContainer{}
assert.DeepEqual(t, expected, dc.pendingDeposits)
@@ -127,10 +128,10 @@ func TestPruneProofs_Ok(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(t.Context(), 1))
require.NoError(t, dc.PruneProofs(context.Background(), 1))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -172,10 +173,10 @@ func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(t.Context(), 2))
require.NoError(t, dc.PruneProofs(context.Background(), 2))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
@@ -216,10 +217,10 @@ func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(t.Context(), 99))
require.NoError(t, dc.PruneProofs(context.Background(), 99))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -263,10 +264,10 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(t.Context(), 4))
require.NoError(t, dc.PruneProofs(context.Background(), 4))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -310,10 +311,10 @@ func TestPruneAllProofs(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
dc.PruneAllProofs(t.Context())
dc.PruneAllProofs(context.Background())
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
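
The pruning tests above pin down two behaviors: pruning pending deposits at merkle index k drops everything below k (with 0 as a no-op), and pruning proofs nils out proofs up to an index. A minimal sketch of the first behavior, with illustrative types:

package main

import "fmt"

type deposit struct{ Index int64 }

// prunePending keeps deposits at or above the given merkle index,
// matching TestPrunePendingDeposits_OK (pruning at 6 leaves indices
// 6, 8, 10, 12).
func prunePending(pending []deposit, merkleIndex int64) []deposit {
	if merkleIndex == 0 {
		return pending // pruning at index zero is a no-op in the tests above
	}
	var kept []deposit
	for _, d := range pending {
		if d.Index >= merkleIndex {
			kept = append(kept, d)
		}
	}
	return kept
}

func main() {
	pending := []deposit{{2}, {4}, {6}, {8}, {10}, {12}}
	fmt.Println(prunePending(pending, 6)) // [{6} {8} {10} {12}]
}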

View File

@@ -1,6 +1,7 @@
package cache
import (
"context"
"testing"
"time"
@@ -23,7 +24,7 @@ func TestRegistrationCache(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(t.Context(), m)
cache.UpdateIndexToRegisteredMap(context.Background(), m)
reg, err := cache.RegistrationByIndex(validatorIndex)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))
@@ -37,7 +38,7 @@ func TestRegistrationCache(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(t.Context(), m)
cache.UpdateIndexToRegisteredMap(context.Background(), m)
reg, err := cache.RegistrationByIndex(validatorIndex2)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))

View File

@@ -1,6 +1,7 @@
package cache_test
import (
"context"
"sync"
"testing"
@@ -13,7 +14,7 @@ import (
)
func TestSkipSlotCache_RoundTrip(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
c := cache.NewSkipSlotCache()
r := [32]byte{'a'}
@@ -37,7 +38,7 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
}
func TestSkipSlotCache_DisabledAndEnabled(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
c := cache.NewSkipSlotCache()
r := [32]byte{'a'}

View File

@@ -81,7 +81,6 @@ go_test(
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"fmt"
"testing"
@@ -18,10 +19,9 @@ import (
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
gofuzz "github.com/google/gofuzz"
fuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/go-bitfield"
)
@@ -50,7 +50,7 @@ func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -81,7 +81,7 @@ func TestProcessAttestations_NeitherCurrentNorPrevEpoch(t *testing.T) {
)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -110,13 +110,13 @@ func TestProcessAttestations_CurrentEpochFFGDataMismatches(t *testing.T) {
want := "source check point not equal to current justified checkpoint"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
b.Block.Body.Attestations[0].Data.Source.Epoch = time.CurrentEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Source.Root = []byte{}
wsb, err = blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -151,14 +151,14 @@ func TestProcessAttestations_PrevEpochFFGDataMismatches(t *testing.T) {
want := "source check point not equal to previous justified checkpoint"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
b.Block.Body.Attestations[0].Data.Source.Epoch = time.PrevEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Target.Epoch = time.PrevEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Source.Root = []byte{}
wsb, err = blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -190,7 +190,7 @@ func TestProcessAttestations_InvalidAggregationBitsLength(t *testing.T) {
expected := "failed to verify aggregation bitfield: wanted participants bitfield length 3, got: 4"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.ErrorContains(t, expected, err)
}
@@ -214,7 +214,7 @@ func TestProcessAttestations_OK(t *testing.T) {
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, 0)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -235,7 +235,7 @@ func TestProcessAttestations_OK(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.NoError(t, err)
})
t.Run("post-Electra", func(t *testing.T) {
@@ -260,7 +260,7 @@ func TestProcessAttestations_OK(t *testing.T) {
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, 0)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -281,7 +281,7 @@ func TestProcessAttestations_OK(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.NoError(t, err)
})
}
@@ -313,13 +313,13 @@ func TestProcessAttestationNoVerify_SourceTargetHead(t *testing.T) {
b, err := helpers.TotalActiveBalance(beaconState)
require.NoError(t, err)
beaconState, err = altair.ProcessAttestationNoVerifySignature(t.Context(), beaconState, att, b)
beaconState, err = altair.ProcessAttestationNoVerifySignature(context.Background(), beaconState, att, b)
require.NoError(t, err)
p, err := beaconState.CurrentEpochParticipation()
require.NoError(t, err)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
indices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -458,7 +458,7 @@ func TestValidatorFlag_Add_ExceedsLength(t *testing.T) {
}
func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
st := &ethpb.BeaconStateAltair{}
b := &ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{}}
for i := 0; i < 10000; i++ {
@@ -474,11 +474,10 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := altair.ProcessAttestationsNoVerifySignature(t.Context(), s, wsb.Block())
r, err := altair.ProcessAttestationsNoVerifySignature(context.Background(), s, wsb.Block())
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, s, b)
}
fuzz.FreeMemory(i)
}
}
@@ -556,10 +555,10 @@ func TestSetParticipationAndRewardProposer(t *testing.T) {
b, err := helpers.TotalActiveBalance(beaconState)
require.NoError(t, err)
st, err := altair.SetParticipationAndRewardProposer(t.Context(), beaconState, test.epoch, test.indices, test.participatedFlags, b)
st, err := altair.SetParticipationAndRewardProposer(context.Background(), beaconState, test.epoch, test.indices, test.participatedFlags, b)
require.NoError(t, err)
i, err := helpers.BeaconProposerIndex(t.Context(), st)
i, err := helpers.BeaconProposerIndex(context.Background(), st)
require.NoError(t, err)
b, err = beaconState.BalanceAtIndex(i)
require.NoError(t, err)
@@ -662,8 +661,8 @@ func TestRewardProposer(t *testing.T) {
{rewardNumerator: 1000000000000, want: 34234377253},
}
for _, test := range tests {
require.NoError(t, altair.RewardProposer(t.Context(), beaconState, test.rewardNumerator))
i, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, altair.RewardProposer(context.Background(), beaconState, test.rewardNumerator))
i, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
b, err := beaconState.BalanceAtIndex(i)
require.NoError(t, err)

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"math"
"testing"
@@ -25,7 +26,7 @@ import (
func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -33,7 +34,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xff
}
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -54,7 +55,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
}
var reward uint64
beaconState, reward, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
beaconState, reward, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
assert.Equal(t, uint64(72192), reward)
@@ -76,7 +77,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
require.Equal(t, true, balances[indices[0]] > balances[nonSyncIndex])
// Proposer should be more profitable than the rest of the sync committee
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
require.Equal(t, true, balances[proposerIndex] > balances[indices[0]])
@@ -101,7 +102,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -109,7 +110,7 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xAA
}
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -129,14 +130,14 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
_, _, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
require.ErrorContains(t, "invalid sync committee signature", err)
}
func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -144,7 +145,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xAA
}
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -166,7 +167,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
_, _, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
}
@@ -174,7 +175,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
committeeKeys := committee.Pubkeys
committeeKeys[1] = committeeKeys[0]
@@ -191,7 +192,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
SyncCommitteeBits: syncBits,
}
require.NoError(t, beaconState.UpdateBalancesAtIndex(idx, 0))
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(t.Context(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
require.Equal(t, 511, len(votedKeys))
require.DeepEqual(t, committeeKeys[0], votedKeys[0].Marshal())
@@ -202,7 +203,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -214,7 +215,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
SyncCommitteeBits: syncBits,
}
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(t.Context(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
votedMap := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
for _, key := range votedKeys {
@@ -227,7 +228,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
committeeKeys := currentSyncCommittee.Pubkeys
balances := st.Balances()
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
for i := 0; i < len(syncBits); i++ {
@@ -253,7 +254,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
func Test_VerifySyncCommitteeSig(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -261,7 +262,7 @@ func Test_VerifySyncCommitteeSig(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xff
}
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
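
The `syncBits` setup in these tests encodes participation one bit per committee member: `0xff` marks all eight validators in a byte as participating, while `0xAA` (binary 10101010) marks every other one, which is what makes the mixed-participation cases mixed. A small self-contained sketch of the counting, assuming the mainnet sync committee size of 512:

package main

import "fmt"

// countBits returns the number of set bits across the bitfield.
func countBits(bits []byte) int {
	n := 0
	for _, b := range bits {
		for ; b != 0; b &= b - 1 { // clear the lowest set bit
			n++
		}
	}
	return n
}

func main() {
	full := make([]byte, 512/8) // one bit per sync committee member
	half := make([]byte, 512/8)
	for i := range full {
		full[i] = 0xff
		half[i] = 0xAA
	}
	fmt.Println(countBits(full), countBits(half)) // 512 256
}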

View File

@@ -1,21 +1,21 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
gofuzz "github.com/google/gofuzz"
fuzz "github.com/google/gofuzz"
)
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposits := make([]*ethpb.Deposit, 100)
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
@@ -27,15 +27,14 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
@@ -46,15 +45,14 @@ func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
@@ -65,12 +63,11 @@ func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
@@ -83,12 +80,11 @@ func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
@@ -101,6 +97,5 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
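
This file shows the other recurring pattern in the compare: one side imports google/gofuzz under the `gofuzz` alias so that Prysm's own `testing/fuzz` package can keep the bare `fuzz` name for its `fuzz.FreeMemory(i)` calls; the other side drops those calls and the internal import with them, letting google/gofuzz reclaim the `fuzz` alias (the matching `//testing/fuzz` Bazel dependency is removed in a go_test hunk further down). A sketch of the aliased shape, assuming `FreeMemory` keeps the signature these hunks show:

package example_test

import (
	"testing"

	prysmfuzz "github.com/OffchainLabs/prysm/v6/testing/fuzz" // alias chosen here for clarity
	gofuzz "github.com/google/gofuzz"
)

func TestFuzzSketch_10000(t *testing.T) {
	fuzzer := gofuzz.NewWithSeed(0) // deterministic pseudo-random inputs
	var v struct{ A, B uint64 }
	for i := 0; i < 10000; i++ {
		fuzzer.Fuzz(&v)
		_ = v // exercise the code under test here
		prysmfuzz.FreeMemory(i) // periodic GC hint, as on the removed side
	}
}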

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -41,7 +42,7 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
require.NoError(t, err, "Expected block deposits to process correctly")
require.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
@@ -69,7 +70,7 @@ func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0]})
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0]})
require.NoError(t, err, "Expected block deposits to process correctly")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
@@ -126,7 +127,7 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{deposit})
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
@@ -255,7 +256,7 @@ func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessPreGenesisDeposits(t.Context(), beaconState, dep)
newState, err := altair.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
@@ -369,6 +370,6 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(t.Context(), beaconState, b.Block.Body.Deposits)
_, err = altair.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}
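
The assertions in this file pin down the deposit-application rule: the first valid deposit for a public key creates a validator, and any later deposit for the same key only tops up its balance. That is why three same-key deposits in one block yield two validators (the genesis one plus one new), and why the repeated deposit leaves a 1050 balance. A toy sketch of the rule (types and names are illustrative, not Prysm's):

package main

import "fmt"

type registry struct {
	indexByPubkey map[string]int
	balances      []uint64
}

// applyDeposit creates a validator on first sight of a pubkey and
// only increases the balance on repeats.
func (r *registry) applyDeposit(pubkey string, amount uint64) {
	if i, ok := r.indexByPubkey[pubkey]; ok {
		r.balances[i] += amount
		return
	}
	r.indexByPubkey[pubkey] = len(r.balances)
	r.balances = append(r.balances, amount)
}

func main() {
	r := &registry{indexByPubkey: map[string]int{}}
	r.applyDeposit("k1", 1000)
	r.applyDeposit("k1", 50)
	fmt.Println(len(r.balances), r.balances[0]) // 1 1050
}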

View File

@@ -1,6 +1,7 @@
package altair
import (
"context"
"math"
"testing"
@@ -31,7 +32,7 @@ func TestInitializeEpochValidators_Ok(t *testing.T) {
InactivityScores: []uint64{0, 1, 2, 3},
})
require.NoError(t, err)
v, b, err := InitializePrecomputeValidators(t.Context(), s)
v, b, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
assert.DeepEqual(t, &precompute.Validator{
IsSlashed: true,
@@ -73,7 +74,7 @@ func TestInitializeEpochValidators_Overflow(t *testing.T) {
InactivityScores: []uint64{0, 1},
})
require.NoError(t, err)
_, _, err = InitializePrecomputeValidators(t.Context(), s)
_, _, err = InitializePrecomputeValidators(context.Background(), s)
require.ErrorContains(t, "could not read every validator: addition overflows", err)
}
@@ -83,16 +84,16 @@ func TestInitializeEpochValidators_BadState(t *testing.T) {
InactivityScores: []uint64{},
})
require.NoError(t, err)
_, _, err = InitializePrecomputeValidators(t.Context(), s)
_, _, err = InitializePrecomputeValidators(context.Background(), s)
require.ErrorContains(t, "num of validators is different than num of inactivity scores", err)
}
func TestProcessEpochParticipation(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: true,
@@ -168,9 +169,9 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
InactivityScores: []uint64{0, 0, 0},
})
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), st)
validators, balance, err := InitializePrecomputeValidators(context.Background(), st)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), st, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), st, balance, validators)
require.NoError(t, err)
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: false,
@@ -208,9 +209,9 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
func TestAttestationsDelta(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
@@ -246,9 +247,9 @@ func TestAttestationsDelta(t *testing.T) {
func TestAttestationsDeltaBellatrix(t *testing.T) {
s, err := testStateBellatrix()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
@@ -284,9 +285,9 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
require.NoError(t, err)
@@ -323,9 +324,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_InactivityLeak(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
sCopy := s.Copy()
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
@@ -351,11 +352,11 @@ func TestProcessInactivityScores_CanProcessInactivityLeak(t *testing.T) {
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().MinEpochsToInactivityPenalty+2)))
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -372,11 +373,11 @@ func TestProcessInactivityScores_GenesisEpoch(t *testing.T) {
defaultScore := uint64(10)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
require.NoError(t, s.SetSlot(params.BeaconConfig().GenesisSlot))
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -391,11 +392,11 @@ func TestProcessInactivityScores_CanProcessNonInactivityLeak(t *testing.T) {
require.NoError(t, err)
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -409,9 +410,9 @@ func TestProcessInactivityScores_CanProcessNonInactivityLeak(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_GenesisEpoch(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
require.NoError(t, s.SetSlot(0))
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
@@ -428,9 +429,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_GenesisEpoch(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_BadState(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
_, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
_, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
_, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, []*precompute.Validator{})
require.ErrorContains(t, "validator registries not the same length as state's validator registries", err)
@@ -441,7 +442,7 @@ func TestProcessInactivityScores_NonEligibleValidator(t *testing.T) {
require.NoError(t, err)
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
require.NoError(t, err)
// v0 is eligible (not active previous epoch, slashed and not withdrawable)
@@ -462,9 +463,9 @@ func TestProcessInactivityScores_NonEligibleValidator(t *testing.T) {
// v3 is eligible (active previous epoch)
validators[3].IsActivePrevEpoch = true
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
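
These cases walk the Altair inactivity-score update through its three regimes: leak vs. non-leak epochs, the genesis epoch, and non-eligible validators. The per-epoch rule, as a sketch using the mainnet spec constants (bias 4, recovery rate 16; hedged as a simplification, not Prysm's code):

package main

import "fmt"

// updateInactivityScore applies one epoch of the Altair rule to an
// eligible validator's score.
func updateInactivityScore(score uint64, participatedTarget, inLeak bool) uint64 {
	if participatedTarget {
		score -= min(1, score) // reward timely target participation
	} else {
		score += 4 // INACTIVITY_SCORE_BIAS
	}
	if !inLeak {
		score -= min(16, score) // INACTIVITY_SCORE_RECOVERY_RATE
	}
	return score
}

func main() {
	fmt.Println(updateInactivityScore(5, true, true))  // 4: leak, but participated
	fmt.Println(updateInactivityScore(5, false, true)) // 9: leak and missed
	fmt.Println(updateInactivityScore(5, true, false)) // 0: recovery outside a leak
}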

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"fmt"
"math"
"testing"
@@ -28,7 +29,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
BodyRoot: bytesutil.PadTo([]byte{'c'}, 32),
}
require.NoError(t, s.SetLatestBlockHeader(h))
postState, err := altair.ProcessSyncCommitteeUpdates(t.Context(), s)
postState, err := altair.ProcessSyncCommitteeUpdates(context.Background(), s)
require.NoError(t, err)
current, err := postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -37,7 +38,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
require.DeepEqual(t, current, next)
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch))
postState, err = altair.ProcessSyncCommitteeUpdates(t.Context(), s)
postState, err = altair.ProcessSyncCommitteeUpdates(context.Background(), s)
require.NoError(t, err)
c, err := postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -47,7 +48,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
require.DeepEqual(t, next, n)
require.NoError(t, s.SetSlot(primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch-1))
postState, err = altair.ProcessSyncCommitteeUpdates(t.Context(), s)
postState, err = altair.ProcessSyncCommitteeUpdates(context.Background(), s)
require.NoError(t, err)
c, err = postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -60,7 +61,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
// Test boundary condition.
slot := params.BeaconConfig().SlotsPerEpoch * primitives.Slot(time.CurrentEpoch(s)+params.BeaconConfig().EpochsPerSyncCommitteePeriod)
require.NoError(t, s.SetSlot(slot))
boundaryCommittee, err := altair.NextSyncCommittee(t.Context(), s)
boundaryCommittee, err := altair.NextSyncCommittee(context.Background(), s)
require.NoError(t, err)
require.DeepNotEqual(t, boundaryCommittee, n)
}

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"testing"
"time"
@@ -96,7 +97,7 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
t.Run(version.String(v), func(t *testing.T) {
helpers.ClearCache()
st := getState(t, tt.args.validatorCount, v)
got, err := altair.NextSyncCommitteeIndices(t.Context(), st)
got, err := altair.NextSyncCommitteeIndices(context.Background(), st)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -128,18 +129,18 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
}
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
got1, err := altair.NextSyncCommitteeIndices(t.Context(), st)
got1, err := altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch))
got2, err := altair.NextSyncCommitteeIndices(t.Context(), st)
got2, err := altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(t.Context(), st)
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(t.Context(), st)
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
}
@@ -205,7 +206,7 @@ func TestSyncCommittee_CanGet(t *testing.T) {
if !tt.wantErr {
require.NoError(t, tt.args.state.SetSlot(primitives.Slot(tt.args.epoch)*params.BeaconConfig().SlotsPerEpoch))
}
got, err := altair.NextSyncCommittee(t.Context(), tt.args.state)
got, err := altair.NextSyncCommittee(context.Background(), tt.args.state)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -269,7 +270,7 @@ func TestValidateNilSyncContribution(t *testing.T) {
func TestSyncSubCommitteePubkeys_CanGet(t *testing.T) {
helpers.ClearCache()
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
com, err := altair.NextSyncCommittee(t.Context(), st)
com, err := altair.NextSyncCommittee(context.Background(), st)
require.NoError(t, err)
sub, err := altair.SyncSubCommitteePubkeys(com, 0)
require.NoError(t, err)

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -12,7 +13,7 @@ import (
func TestProcessEpoch_CanProcess(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(t.Context(), st)
err := altair.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
@@ -44,7 +45,7 @@ func TestProcessEpoch_CanProcess(t *testing.T) {
func TestProcessEpoch_CanProcessBellatrix(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(t.Context(), st)
err := altair.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

View File

@@ -1,6 +1,7 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -16,7 +17,7 @@ import (
)
func TestTranslateParticipation(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
s, _ := util.DeterministicGenesisStateAltair(t, 64)
require.NoError(t, s.SetSlot(s.Slot()+params.BeaconConfig().MinAttestationInclusionDelay))
@@ -72,7 +73,7 @@ func TestTranslateParticipation(t *testing.T) {
func TestUpgradeToAltair(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
preForkState := st.Copy()
aState, err := altair.UpgradeToAltair(t.Context(), st)
aState, err := altair.UpgradeToAltair(context.Background(), st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), aState.GenesisTime())
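
`TranslateParticipation` replays pre-fork pending attestations into Altair's per-validator participation bitfield during `UpgradeToAltair`, so credit earned under phase0 is not lost at the fork. A sketch of the flag encoding it writes into, with flag indices as in the Altair spec:

package main

import "fmt"

const (
	timelySourceFlag = 1 << 0 // TIMELY_SOURCE_FLAG_INDEX
	timelyTargetFlag = 1 << 1 // TIMELY_TARGET_FLAG_INDEX
	timelyHeadFlag   = 1 << 2 // TIMELY_HEAD_FLAG_INDEX
)

// addFlag sets a participation flag; has reports whether it is set.
func addFlag(participation, flag byte) byte { return participation | flag }
func has(participation, flag byte) bool     { return participation&flag != 0 }

func main() {
	var p byte
	p = addFlag(p, timelySourceFlag)
	p = addFlag(p, timelyTargetFlag)
	fmt.Println(has(p, timelyTargetFlag), has(p, timelyHeadFlag)) // true false
}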

View File

@@ -105,7 +105,6 @@ go_test(
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",

View File

@@ -178,7 +178,7 @@ func VerifyAttestationNoVerifySignature(
}
}
return attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot)
return attestation.IsValidAttestationIndices(ctx, indexedAtt)
}
// ProcessAttestationNoVerifySignature processes the attestation without verifying the attestation signature. This
@@ -243,7 +243,7 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBea
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
return err
}
domain, err := signing.Domain(
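
This is one of the few non-mechanical hunks in the compare: `attestation.IsValidAttestationIndices` loses its explicit `MaxValidatorsPerCommittee`/`MaxCommitteesPerSlot` arguments on the new side, presumably resolving those caps inside the helper instead. The check itself keeps the same shape; a simplified stand-in with the cap as an explicit parameter (not Prysm's implementation):

package main

import (
	"errors"
	"fmt"
)

// isValidIndices requires non-empty, strictly increasing (hence
// unique) indices that fit under the committee cap.
func isValidIndices(indices []uint64, maxPerCommittee uint64) error {
	if len(indices) == 0 {
		return errors.New("expected non-empty attesting indices")
	}
	if uint64(len(indices)) > maxPerCommittee {
		return errors.New("validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE")
	}
	for i := 1; i < len(indices); i++ {
		if indices[i] <= indices[i-1] {
			return errors.New("attesting indices are not uniquely sorted")
		}
	}
	return nil
}

func main() {
	fmt.Println(isValidIndices([]uint64{1, 2, 3}, 2048)) // <nil>
	fmt.Println(isValidIndices(nil, 2048))               // expected non-empty attesting indices
}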

View File

@@ -41,7 +41,7 @@ func TestProcessAttestationNoVerifySignature_BeaconFuzzIssue78(t *testing.T) {
t.Fatal(err)
}
ctx := t.Context()
ctx := context.Background()
_, err = blocks.ProcessAttestationNoVerifySignature(ctx, st, att)
require.ErrorContains(t, "committee index 1 >= committee count 1", err)
}

View File

@@ -42,7 +42,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
require.NoError(t, beaconState.AppendCurrentEpochAttestations(&ethpb.PendingAttestation{}))
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices1, err := attestation.AttestingIndices(att1, committee)
require.NoError(t, err)
@@ -64,7 +64,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
AggregationBits: aggBits2,
}
committee, err = helpers.BeaconCommitteeFromState(t.Context(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
committee, err = helpers.BeaconCommitteeFromState(context.Background(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices2, err := attestation.AttestingIndices(att2, committee)
require.NoError(t, err)
@@ -137,7 +137,7 @@ func TestProcessAttestationsNoVerify_OlderThanSlotsPerEpoch(t *testing.T) {
},
AggregationBits: aggBits,
}
ctx := t.Context()
ctx := context.Background()
t.Run("attestation older than slots per epoch", func(t *testing.T) {
beaconState, _ := util.DeterministicGenesisState(t, 100)
@@ -360,9 +360,9 @@ func TestConvertToIndexed_OK(t *testing.T) {
Signature: att.Signature,
}
committee, err := helpers.BeaconCommitteeFromState(t.Context(), state, att.Data.Slot, att.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), state, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
ia, err := attestation.ConvertToIndexed(t.Context(), att, committee)
ia, err := attestation.ConvertToIndexed(context.Background(), att, committee)
require.NoError(t, err)
assert.DeepEqual(t, wanted, ia, "Convert attestation to indexed attestation didn't result as wanted")
}
@@ -448,7 +448,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
tt.attestation.Signature = marshalledSig
err = blocks.VerifyIndexedAttestation(t.Context(), state, tt.attestation)
err = blocks.VerifyIndexedAttestation(context.Background(), state, tt.attestation)
assert.NoError(t, err, "Failed to verify indexed attestation")
}
}
@@ -471,7 +471,7 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
want := "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE"
st, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
err = blocks.VerifyIndexedAttestation(t.Context(), st, indexedAtt1)
err = blocks.VerifyIndexedAttestation(context.Background(), st, indexedAtt1)
assert.ErrorContains(t, want, err)
}
@@ -493,7 +493,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
}
want := "nil or missing indexed attestation data"
_, err := blocks.AttestationSignatureBatch(t.Context(), beaconState, atts)
_, err := blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
assert.ErrorContains(t, want, err)
atts = []ethpb.Att{}
@@ -514,7 +514,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
}
want = "expected non-empty attesting indices"
_, err = blocks.AttestationSignatureBatch(t.Context(), beaconState, atts)
_, err = blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
assert.ErrorContains(t, want, err)
}
@@ -542,7 +542,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
}))
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
@@ -561,7 +561,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
@@ -583,7 +583,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
}
func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
numOfValidators := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(4))
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
@@ -602,7 +602,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
@@ -621,7 +621,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 1 /*committeeIndex*/)
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
@@ -651,7 +651,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
commBits1 := primitives.NewAttestationCommitteeBits()
commBits1.SetBitAt(0, true)
@@ -673,7 +673,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 1 /*committeeIndex*/)
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
commBits2 := primitives.NewAttestationCommitteeBits()
commBits2.SetBitAt(1, true)
@@ -702,7 +702,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
ctx := t.Context()
ctx := context.Background()
numOfValidators := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(4))
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
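
`ConvertToIndexed` and `AttestingIndices`, which `TestConvertToIndexed_OK` exercises, both reduce to the same mapping: each set aggregation bit selects the committee member at that position (the real helpers additionally return the result sorted). A minimal sketch:

package main

import "fmt"

// attestingIndices maps set aggregation bits onto committee members.
func attestingIndices(bits []bool, committee []uint64) []uint64 {
	out := make([]uint64, 0, len(committee))
	for i, set := range bits {
		if set && i < len(committee) {
			out = append(out, committee[i])
		}
	}
	return out
}

func main() {
	committee := []uint64{47, 99, 3, 21}
	bits := []bool{true, false, true, false}
	fmt.Println(attestingIndices(bits, committee)) // [47 3]
}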

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -62,7 +63,7 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
assert.ErrorContains(t, "attestations are not slashable", err)
}
@@ -101,7 +102,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -243,7 +244,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
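
The three outcomes above ("attestations are not slashable", failed index verification, applied slashed status) hinge on the spec's slashability test: two attestations are slashable only as a double vote (same target epoch, different data) or a surround vote. A sketch over epochs, with `dataEqual` standing in for full-data comparison:

package main

import "fmt"

type att struct{ source, target uint64 }

// isSlashable mirrors is_slashable_attestation_data over epochs.
func isSlashable(a, b att, dataEqual bool) bool {
	doubleVote := !dataEqual && a.target == b.target
	surroundVote := a.source < b.source && b.target < a.target
	return doubleVote || surroundVote
}

func main() {
	fmt.Println(isSlashable(att{0, 5}, att{1, 4}, false)) // true: a surrounds b
	fmt.Println(isSlashable(att{0, 5}, att{0, 5}, true))  // false: identical data
}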

View File

@@ -1,6 +1,7 @@
package blocks
import (
"context"
"testing"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
@@ -10,14 +11,13 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
gofuzz "github.com/google/gofuzz"
fuzz "github.com/google/gofuzz"
)
func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
ctx := t.Context()
fuzzer := fuzz.NewWithSeed(0)
ctx := context.Background()
state := &ethpb.BeaconState{}
att := &ethpb.Attestation{}
@@ -28,12 +28,11 @@ func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
require.NoError(t, err)
_, err = ProcessAttestationNoVerifySignature(ctx, s, att)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessBlockHeader_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
block := &ethpb.SignedBeaconBlock{}
@@ -48,14 +47,13 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = ProcessBlockHeader(t.Context(), s, wsb)
_, err = ProcessBlockHeader(context.Background(), s, wsb)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
var ba []byte
var pubkey [fieldparams.BLSPubkeyLength]byte
var sig [96]byte
@@ -79,14 +77,14 @@ func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
}
func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
e := &ethpb.Eth1Data{}
state, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
s, err := ProcessEth1DataInBlock(t.Context(), state, e)
s, err := ProcessEth1DataInBlock(context.Background(), state, e)
if err != nil && s != nil {
t.Fatalf("state should be nil on err. found: %v on error: %v for state: %v and eth1data: %v", s, err, state, e)
}
@@ -94,7 +92,7 @@ func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
}
func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
eth1data := &ethpb.Eth1Data{}
eth1data2 := &ethpb.Eth1Data{}
@@ -107,7 +105,7 @@ func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
}
func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
eth1data := &ethpb.Eth1Data{}
var stateVotes []*ethpb.Eth1Data
for i := 0; i < 100000; i++ {
@@ -124,7 +122,7 @@ func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
}
func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
block := &ethpb.BeaconBlock{}
@@ -133,14 +131,13 @@ func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(block)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
_, err = ProcessBlockHeaderNoVerify(t.Context(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
_, err = ProcessBlockHeaderNoVerify(context.Background(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessRandao_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
b := &ethpb.SignedBeaconBlock{}
@@ -154,16 +151,15 @@ func TestFuzzProcessRandao_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := ProcessRandao(t.Context(), s, wsb)
r, err := ProcessRandao(context.Background(), s, wsb)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
blockBody := &ethpb.BeaconBlockBody{}
@@ -176,15 +172,14 @@ func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
p := &ethpb.ProposerSlashing{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(p)
@@ -194,12 +189,11 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
proposerSlashing := &ethpb.ProposerSlashing{}
for i := 0; i < 10000; i++ {
@@ -209,15 +203,14 @@ func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyProposerSlashing(s, proposerSlashing)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
a := &ethpb.AttesterSlashing{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(a)
@@ -227,15 +220,14 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
attesterSlashing := &ethpb.AttesterSlashing{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attesterSlashing)
@@ -243,12 +235,11 @@ func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyAttesterSlashing(ctx, s, attesterSlashing)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
attestationData := &ethpb.AttestationData{}
attestationData2 := &ethpb.AttestationData{}
@@ -260,7 +251,7 @@ func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
}
func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
attesterSlashing := &ethpb.AttesterSlashing{}
for i := 0; i < 10000; i++ {
@@ -270,10 +261,10 @@ func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
}
func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
b := &ethpb.SignedBeaconBlock{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(b)
@@ -288,15 +279,14 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
idxAttestation := &ethpb.IndexedAttestation{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(idxAttestation)
@@ -304,12 +294,11 @@ func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyIndexedAttestation(ctx, s, idxAttestation)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
@@ -323,10 +312,10 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
}
func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
e := &ethpb.SignedVoluntaryExit{}
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
@@ -336,12 +325,11 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
e := &ethpb.SignedVoluntaryExit{}
for i := 0; i < 10000; i++ {
@@ -349,16 +337,15 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(context.Background(), s, []*ethpb.SignedVoluntaryExit{e})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyExit_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
ve := &ethpb.SignedVoluntaryExit{}
rawVal := &ethpb.Validator{}
fork := &ethpb.Fork{}

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -94,7 +95,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -45,7 +46,7 @@ func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, true, verified)
}
@@ -68,7 +69,7 @@ func TestBatchVerifyDepositsSignatures_InvalidSignature(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}
@@ -163,7 +164,7 @@ func TestBatchVerifyPendingDepositsSignatures_Ok(t *testing.T) {
sig2 := sk2.Sign(sr2[:])
pendingDeposit2.Signature = sig2.Marshal()
verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
require.NoError(t, err)
require.Equal(t, true, verified)
}
@@ -174,7 +175,7 @@ func TestBatchVerifyPendingDepositsSignatures_InvalidSignature(t *testing.T) {
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
}
verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit})
verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}
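
Note the contract these tests pin down: an invalid signature makes `BatchVerifyDepositsSignatures` and `BatchVerifyPendingDepositsSignatures` return `false` with a nil error, rather than an error, so a caller can distinguish "batch failed, recheck items individually" from "could not verify at all". A sketch of that boolean-not-error shape:

package main

import "fmt"

// batchVerify reports false (not an error) when any signature in the
// batch fails, reserving the error for verification machinery faults.
func batchVerify(results []bool) (bool, error) {
	for _, ok := range results {
		if !ok {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := batchVerify([]bool{true, false, true})
	fmt.Println(ok, err) // false <nil>
}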

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"fmt"
"testing"
@@ -176,7 +177,7 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
period := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().EpochsPerEth1VotingPeriod)))
for i := uint64(0); i < period; i++ {
processedState, err := blocks.ProcessEth1DataInBlock(t.Context(), beaconState, b.Block.Body.Eth1Data)
processedState, err := blocks.ProcessEth1DataInBlock(context.Background(), beaconState, b.Block.Body.Eth1Data)
require.NoError(t, err)
require.Equal(t, true, processedState.Version() == version.Phase0)
}
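
The loop runs one full eth1 voting period (`SlotsPerEpoch` times `EpochsPerEth1VotingPeriod`) because a proposed eth1 data vote only takes effect once it holds a strict majority of the period's slots. The majority rule, per the phase0 spec, as a sketch:

package main

import "fmt"

// hasEnoughSupport is the spec's strict-majority test over the eth1
// voting period, counted in slots.
func hasEnoughSupport(votesForData, slotsPerPeriod uint64) bool {
	return votesForData*2 > slotsPerPeriod
}

func main() {
	// Mainnet: 64 epochs x 32 slots = 2048 slots per voting period.
	fmt.Println(hasEnoughSupport(1024, 2048), hasEnoughSupport(1025, 2048)) // false true
}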

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -46,7 +47,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
}
want := "validator has not been active long enough to exit"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
assert.ErrorContains(t, want, err)
}
@@ -76,7 +77,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
}
want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
assert.ErrorContains(t, want, err)
}
@@ -124,7 +125,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
},
}
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
newState, err := blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
require.NoError(t, err, "Could not process exits")
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {
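
Both failure cases above come straight from the voluntary-exit preconditions: a validator must have been active for at least `SHARD_COMMITTEE_PERIOD` (256 epochs on mainnet) and must not already have an exit queued. A sketch of the two checks (a simplification, not Prysm's code):

package main

import (
	"errors"
	"fmt"
)

const farFutureEpoch = ^uint64(0) // FAR_FUTURE_EPOCH sentinel

// checkExit mirrors the two preconditions these tests hit.
func checkExit(currentEpoch, activationEpoch, exitEpoch, shardCommitteePeriod uint64) error {
	if exitEpoch != farFutureEpoch {
		return fmt.Errorf("validator has already submitted an exit, which will take place at epoch: %d", exitEpoch)
	}
	if currentEpoch < activationEpoch+shardCommitteePeriod {
		return errors.New("validator has not been active long enough to exit")
	}
	return nil
}

func main() {
	fmt.Println(checkExit(100, 0, farFutureEpoch, 256)) // not active long enough
	fmt.Println(checkExit(300, 0, farFutureEpoch, 256)) // <nil>
}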

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"io"
"testing"
@@ -50,7 +51,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.ProposerIndex = pID
@@ -60,7 +61,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
@@ -69,7 +70,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
assert.ErrorContains(t, "block.Slot 10 must be greater than state.LatestBlockHeader.Slot 10", err)
}
@@ -84,7 +85,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
lbhdr, err := beaconState.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
block := util.NewBeaconBlock()
@@ -97,7 +98,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(t.Context(), beaconState, wsb)
_, err = blocks.ProcessBlockHeader(context.Background(), beaconState, wsb)
want := "signature did not verify"
assert.ErrorContains(t, want, err)
}
@@ -141,7 +142,7 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
want := "is different than block slot"
assert.ErrorContains(t, want, err)
}
@@ -171,7 +172,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
blockSig, err := signing.ComputeDomainAndSign(state, currentEpoch, &sszBytes, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
validators[5896].PublicKey = priv.PublicKey().Marshal()
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
@@ -182,7 +183,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
want := "does not match"
assert.ErrorContains(t, want, err)
}
@@ -215,7 +216,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
require.NoError(t, err)
validators[12683].PublicKey = priv.PublicKey().Marshal()
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
@@ -226,7 +227,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
want := "was previously slashed"
assert.ErrorContains(t, want, err)
}
@@ -256,7 +257,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.ProposerIndex = pID
@@ -268,7 +269,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
bodyRoot, err := block.Block.Body.HashTreeRoot()
require.NoError(t, err, "Failed to hash block bytes got")
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
@@ -277,7 +278,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
newState, err := blocks.ProcessBlockHeader(t.Context(), state, wsb)
newState, err := blocks.ProcessBlockHeader(context.Background(), state, wsb)
require.NoError(t, err, "Failed to process block header got")
var zeroHash [32]byte
nsh := newState.LatestBlockHeader()
@@ -317,7 +318,7 @@ func TestBlockSignatureSet_OK(t *testing.T) {
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
@@ -326,7 +327,7 @@ func TestBlockSignatureSet_OK(t *testing.T) {
block.Block.ParentRoot = latestBlockSignedRoot[:]
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
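
The recurring change throughout these test hunks is a swap between `t.Context()` and `context.Background()`. As a rough sketch of the difference (assuming Go 1.24+, where `testing.T.Context` was introduced), the two differ only in lifetime: the test context is cancelled just before test cleanup runs, while the background context is never cancelled. This illustrative test is not part of the diff:

```go
package blocks_test

import (
	"context"
	"testing"
)

// TestContextLifetimes is an illustrative sketch, not part of the diff:
// t.Context() (Go 1.24+) is cancelled just before cleanup functions run,
// while context.Background() is never cancelled and has no deadline.
func TestContextLifetimes(t *testing.T) {
	tc := t.Context()
	bg := context.Background()

	if tc.Err() != nil || bg.Err() != nil {
		t.Fatal("both contexts should be live while the test body runs")
	}
	t.Cleanup(func() {
		if tc.Err() == nil {
			t.Error("t.Context() should be cancelled by the time cleanup runs")
		}
	})
}
```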

View File

@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -229,16 +230,26 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
if body.Version() < version.Deneb {
return nil
}
kzgs, err := body.BlobKzgCommitments()
if err != nil {
return err
}
commitmentCount, maxBlobsPerBlock := len(kzgs), params.BeaconConfig().MaxBlobsPerBlock(slot)
if commitmentCount > maxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(kzgs) > maxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
}
return nil
}
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte
if IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
}
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash()), nil
}
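
The only behavioral difference between the two sides of the `verifyBlobCommitmentCount` hunk is how much detail the error carries (the test hunk below asserts the exact strings). A self-contained sketch of the more verbose variant, with the maximum passed in directly instead of read from `params.BeaconConfig()`, purely for illustration:

```go
package main

import "fmt"

// verifyCommitmentCount mirrors the shape of the check above: reject a
// block whose KZG commitment list exceeds the per-slot maximum, and
// report both the actual and allowed counts in the error.
func verifyCommitmentCount(kzgs [][]byte, maxBlobsPerBlock int) error {
	if len(kzgs) > maxBlobsPerBlock {
		return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", len(kzgs), maxBlobsPerBlock)
	}
	return nil
}

func main() {
	// Seven commitments against a max of six fails; six passes.
	fmt.Println(verifyCommitmentCount(make([][]byte, 7), 6)) // error
	fmt.Println(verifyCommitmentCount(make([][]byte, 6), 6)) // <nil>
}
```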

View File

@@ -926,10 +926,8 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
require.NoError(t, err)
require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
maxCommitmentsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, maxCommitmentsPerBlock+1)}}
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
rb, err = consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: actual count %d - max allowed %d", maxCommitmentsPerBlock+1, maxCommitmentsPerBlock), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
}

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"fmt"
"testing"
@@ -50,7 +51,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
}
want := "mismatched header slots"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -83,7 +84,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
want := "expected slashing headers to differ"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -133,7 +134,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
"validator with key %#x is not slashable",
bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err = blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -172,7 +173,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -220,7 +221,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -268,7 +269,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -316,7 +317,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"context"
"encoding/binary"
"testing"
@@ -20,7 +21,7 @@ import (
func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
// We fetch the proposer's index as that is whom the RANDAO will be verified against.
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, err)
epoch := primitives.Epoch(0)
buf := make([]byte, 32)
@@ -41,7 +42,7 @@ func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
want := "block randao: signature did not verify"
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = blocks.ProcessRandao(t.Context(), beaconState, wsb)
_, err = blocks.ProcessRandao(context.Background(), beaconState, wsb)
assert.ErrorContains(t, want, err)
}
@@ -61,7 +62,7 @@ func TestProcessRandao_SignatureVerifiesAndUpdatesLatestStateMixes(t *testing.T)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
newState, err := blocks.ProcessRandao(
t.Context(),
context.Background(),
beaconState,
wsb,
)
@@ -84,7 +85,7 @@ func TestRandaoSignatureSet_OK(t *testing.T) {
},
}
set, err := blocks.RandaoSignatureBatch(t.Context(), beaconState, block.Body.RandaoReveal)
set, err := blocks.RandaoSignatureBatch(context.Background(), beaconState, block.Body.RandaoReveal)
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)

View File

@@ -200,7 +200,7 @@ func createAttestationSignatureBatch(
if err != nil {
return nil, err
}
if err := attestation.IsValidAttestationIndices(ctx, ia, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, ia); err != nil {
return nil, err
}
indices := ia.GetAttestingIndices()
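
This hunk drops the explicit limits from the `IsValidAttestationIndices` call, presumably because the helper reads them from configuration internally on the other side of the compare. For orientation, a hypothetical standalone version of the invariants such a check typically enforces per the consensus spec (non-empty, within the cap, sorted and unique); all names here are illustrative, not Prysm's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// isValidAttestationIndices is a hypothetical sketch of an indexed-
// attestation check: indices must be non-empty, no larger than the
// configured cap, and strictly increasing (i.e. sorted and unique).
func isValidAttestationIndices(indices []uint64, maxIndices uint64) error {
	if len(indices) == 0 {
		return errors.New("expected non-empty attesting indices")
	}
	if uint64(len(indices)) > maxIndices {
		return fmt.Errorf("indices count %d exceeds max %d", len(indices), maxIndices)
	}
	for i := 1; i < len(indices); i++ {
		if indices[i-1] >= indices[i] {
			return errors.New("attesting indices are not sorted and unique")
		}
	}
	return nil
}

func main() {
	fmt.Println(isValidAttestationIndices([]uint64{1, 2, 5}, 2048)) // <nil>
	fmt.Println(isValidAttestationIndices([]uint64{5, 2}, 2048))   // not sorted
}
```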

View File

@@ -85,7 +85,6 @@ go_test(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",

View File

@@ -1,21 +1,21 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
gofuzz "github.com/google/gofuzz"
fuzz "github.com/google/gofuzz"
)
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateElectra{}
deposits := make([]*ethpb.Deposit, 100)
ctx := t.Context()
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
@@ -27,12 +27,11 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateElectra{}
deposit := &ethpb.Deposit{}
@@ -45,6 +44,5 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
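
Both fuzz-test files in this compare make the same two moves: the `github.com/google/gofuzz` import alias flips between `gofuzz` and `fuzz` (freeing or reclaiming the latter name for the project's own `testing/fuzz` package), and the `fuzz.FreeMemory(i)` bookkeeping call comes and goes with that import. A minimal, self-contained sketch of the deterministic-seed pattern these tests rely on, with a hypothetical `deposit` struct standing in for the real protobuf types:

```go
package main

import (
	"fmt"

	fuzz "github.com/google/gofuzz"
)

// deposit is a stand-in struct for illustration only.
type deposit struct {
	PublicKey []byte
	Amount    uint64
}

func main() {
	// NewWithSeed(0) makes the fuzzer deterministic: every run fills
	// the struct with the same pseudo-random values, so a failure in a
	// loop like TestFuzzProcessDeposits_10000 is reproducible.
	fuzzer := fuzz.NewWithSeed(0)
	d := &deposit{}
	fuzzer.Fuzz(d)
	fmt.Printf("fuzzed deposit: %+v\n", d)
}
```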

View File

@@ -319,7 +319,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
})
@@ -335,7 +335,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
deps := []*eth.PendingDeposit{validDep, validDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
require.Equal(t, params.BeaconConfig().MinActivationBalance*2, st.Balances()[0])
@@ -354,7 +354,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
invalidSigDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidSigDep.Signature = make([]byte, 96)
deps := []*eth.PendingDeposit{validDep, invalidSigDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
require.Equal(t, 2*params.BeaconConfig().MinActivationBalance, st.Balances()[0])
@@ -368,12 +368,12 @@ func TestProcessDepositRequests(t *testing.T) {
require.NoError(t, st.SetDepositRequestsStartIndex(1))
t.Run("empty requests continues", func(t *testing.T) {
newSt, err := electra.ProcessDepositRequests(t.Context(), st, []*enginev1.DepositRequest{})
newSt, err := electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{})
require.NoError(t, err)
require.DeepEqual(t, newSt, st)
})
t.Run("nil request errors", func(t *testing.T) {
_, err = electra.ProcessDepositRequests(t.Context(), st, []*enginev1.DepositRequest{nil})
_, err = electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{nil})
require.ErrorContains(t, "nil deposit request", err)
})
@@ -406,7 +406,7 @@ func TestProcessDepositRequests(t *testing.T) {
Signature: sig.Marshal(),
},
}
st, err = electra.ProcessDepositRequests(t.Context(), st, requests)
st, err = electra.ProcessDepositRequests(context.Background(), st, requests)
require.NoError(t, err)
pbd, err := st.PendingDeposits()
@@ -437,7 +437,7 @@ func TestProcessDeposit_Electra_Simple(t *testing.T) {
},
})
require.NoError(t, err)
pdSt, err := electra.ProcessDeposits(t.Context(), st, deps)
pdSt, err := electra.ProcessDeposits(context.Background(), st, deps)
require.NoError(t, err)
pbd, err := pdSt.PendingDeposits()
require.NoError(t, err)
@@ -592,7 +592,7 @@ func TestApplyPendingDeposit_TopUp(t *testing.T) {
dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
require.NoError(t, st.SetValidators(validators))
require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
b, err := st.BalanceAtIndex(0)
require.NoError(t, err)
@@ -608,7 +608,7 @@ func TestApplyPendingDeposit_UnknownKey(t *testing.T) {
wc[31] = byte(0)
dep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
require.Equal(t, 0, len(st.Validators()))
require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
// activates new validator
require.Equal(t, 1, len(st.Validators()))
b, err := st.BalanceAtIndex(0)
@@ -630,7 +630,7 @@ func TestApplyPendingDeposit_InvalidSignature(t *testing.T) {
Amount: 100,
}
require.Equal(t, 0, len(st.Validators()))
require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
// no validator added
require.Equal(t, 0, len(st.Validators()))
// no topup either

View File

@@ -1,6 +1,7 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -53,7 +54,7 @@ func TestProcessOperationsWithNilRequests(t *testing.T) {
require.NoError(t, st.SetSlot(1))
_, err = electra.ProcessOperations(t.Context(), st, b.Block())
_, err = electra.ProcessOperations(context.Background(), st, b.Block())
require.ErrorContains(t, tc.errMsg, err)
})
}

View File

@@ -1,6 +1,7 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -77,7 +78,7 @@ func TestProcessEpoch_CanProcessElectra(t *testing.T) {
TargetIndex: 1,
},
}))
err := electra.ProcessEpoch(t.Context(), st)
err := electra.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

View File

@@ -1,6 +1,7 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -289,7 +290,7 @@ func TestProcessWithdrawRequests(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := electra.ProcessWithdrawalRequests(t.Context(), tt.args.st, tt.args.wrs)
got, err := electra.ProcessWithdrawalRequests(context.Background(), tt.args.st, tt.args.wrs)
if (err != nil) != tt.wantErr {
t.Errorf("ProcessWithdrawalRequests() error = %v, wantErr %v", err, tt.wantErr)
return

View File

@@ -48,7 +48,6 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_google_go_cmp//cmp:go_default_library",

View File

@@ -5,13 +5,12 @@ import (
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
gofuzz "github.com/google/gofuzz"
fuzz "github.com/google/gofuzz"
)
func TestFuzzFinalUpdates_10000(t *testing.T) {
fuzzer := gofuzz.NewWithSeed(0)
fuzzer := fuzz.NewWithSeed(0)
base := &ethpb.BeaconState{}
for i := 0; i < 10000; i++ {
@@ -20,6 +19,5 @@ func TestFuzzFinalUpdates_10000(t *testing.T) {
require.NoError(t, err)
_, err = ProcessFinalUpdates(s)
_ = err
fuzz.FreeMemory(i)
}
}
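
As in the electra fuzz tests above, the `fuzz.FreeMemory(i)` call disappears together with the `testing/fuzz` import. The helper's implementation isn't shown in this compare; a plausible sketch of what such a hook typically does inside a 10,000-iteration fuzz loop (this is an assumption, not the actual Prysm code):

```go
package fuzzutil

import "runtime"

// FreeMemory is a hypothetical reconstruction: periodically force a GC
// cycle so that a long fuzz loop does not accumulate unbounded garbage
// from thousands of large, discarded BeaconState values.
func FreeMemory(i int) {
	if i%1000 == 0 {
		runtime.GC()
	}
}
```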

Some files were not shown because too many files have changed in this diff.