Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 05:47:59 -05:00)
Compare commits: v6.0.3...rm-redunda (68 commits)
| SHA1 |
|---|
| d1652d49ad |
| 8086c3e913 |
| 97f416b3a7 |
| 1c1e0f38bb |
| 121914d0d7 |
| e8625cd89d |
| 667aaf1564 |
| e020907d2a |
| 9927cea35a |
| d4233471d2 |
| d63ae69920 |
| b9fd32dfff |
| 559d02bf4d |
| 62fec4d1f3 |
| 6a13ba9125 |
| 2a3876427f |
| f8b88db1a4 |
| d12da8cbe0 |
| 6087875da5 |
| 214f4a76fb |
| f5a9394c77 |
| 4095da8568 |
| f1288a18ec |
| 543ebe857e |
| e569df5ebc |
| 8c324cc491 |
| 265d84569c |
| 79b064a6cc |
| 182c18a7b2 |
| 8b9c161560 |
| 4a4532f3ba |
| 91b44360fc |
| 472c5da49e |
| a0060fa794 |
| 341c7abd7f |
| 3300866572 |
| 711984d942 |
| 9b626864f0 |
| 3a3bd3902c |
| 2c09bc65a4 |
| ba860fd96b |
| 0d5a52d20d |
| 994565acdd |
| e34313c752 |
| 00204ffa6a |
| f8d895a5ed |
| 58b5aac201 |
| 58f08672c0 |
| ec74bac725 |
| 99cd90f335 |
| 74aca49741 |
| 3dfd3d0416 |
| b20821dd8e |
| e2f0b057b0 |
| 3d4e2c5568 |
| fa744ff78f |
| bb5807fd08 |
| d6bbfff8b7 |
| a8ce85f8de |
| 00bb3ff2b8 |
| edab145001 |
| 7fd3902b75 |
| 6b6370bc59 |
| 17204ca817 |
| 5bbcfe5237 |
| c1b99b74c7 |
| f02955676b |
| 1dea6857d5 |
.github/workflows/changelog.yml (vendored): 2 changed lines
```diff
@@ -18,7 +18,7 @@ jobs:
         uses: dsaltares/fetch-gh-release-asset@aa2ab1243d6e0d5b405b973c89fa4d06a2d0fff7 # 1.1.2
         with:
           repo: OffchainLabs/unclog
-          version: "tags/v0.1.3"
+          version: "tags/v0.1.5"
           file: "unclog"
 
       - name: Get new changelog files
```
CHANGELOG.md: 97 changed lines
@@ -4,6 +4,103 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05

This release continues the work on PeerDAS and light client support. It also ships a few bug fixes:
- The blob cache size is now set correctly at startup.
- Slashing protection history exports no longer fail when the validator database sits in a nested folder.
- The API call for state committees now returns the correct error for an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` in Prysm docker images.

In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.

### Added

- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added `/bin/sh` symlink to docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add Fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated the e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecar verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement the data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- Add random forkchoice spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add the ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)

### Changed

- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - exit early when no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default the Hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)

### Fixed

- Fix cyclical dependency issues when using the testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set the seen-blob cache size correctly, based on the current slot time, at startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/` (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made `/eth/v1/beacon/states/{state_id}/committees` return `400` when the slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355); see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed an eager validator context cancellation that occasionally caused validator builder registrations to fail. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)
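The committees fix boils down to a slot-to-epoch consistency check. A minimal sketch of that validation, assuming mainnet's 32 slots per epoch and hypothetical helper names rather than Prysm's actual handler code:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value

// epochOf returns the epoch that contains the given slot.
func epochOf(slot uint64) uint64 {
	return slot / slotsPerEpoch
}

// validateCommitteeQuery mirrors the spec-aligned behavior: when both
// epoch and slot query parameters are supplied, the slot must fall inside
// that epoch, otherwise the endpoint should answer 400 Bad Request.
func validateCommitteeQuery(epoch, slot uint64) error {
	if epochOf(slot) != epoch {
		return fmt.Errorf("slot %d is not in epoch %d", slot, epoch)
	}
	return nil
}

func main() {
	// Slot 65 lives in epoch 2, so querying it with epoch=1 must fail.
	fmt.Println(validateCommitteeQuery(1, 65)) // slot 65 is not in epoch 1
	fmt.Println(validateCommitteeQuery(2, 65)) // <nil>
}
```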
## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21

This release has important bug fixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:
- Fixed the pending consolidations endpoint to return the correct response.
- Fixed an incorrect field name in the pending partial withdrawals response.
- Fixed attester slashings to return an empty array instead of nil/null (see the sketch after this section).
- Fixed the validator participation and active set changes endpoints to accept a `{state_id}` parameter.

Other improvements include:
- Disabled the deposit log processing routine for Electra and beyond.

Operators are encouraged to update at their own convenience.

### Added

- Added SSZ static spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15279)
- Added finality and merkle proof spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15286)
- Added sanity and rewards spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15285)

### Changed

- Added more tracing spans to various helpers related to GetDuties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15271)
- Disable log processing after deposit requests are activated. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15274)

### Fixed

- Fixed the wrong handler being used for the get pending consolidations endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15290)
- Fixed `/eth/v2/beacon/pool/attester_slashings` to return an empty array instead of nil when there are no slashings. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15291)
- Fix Prysm endpoints `/prysm/v1/validators/{state_id}/participation` and `/prysm/v1/validators/{state_id}/active_set_changes` to properly handle `{state_id}`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15245)
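The attester slashings fix is an instance of a common Go JSON pitfall: a nil slice marshals to `null`, while an empty, non-nil slice marshals to `[]`. A minimal demonstration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlashings []string    // nil slice
	emptySlashings := []string{} // empty, non-nil slice

	a, _ := json.Marshal(nilSlashings)
	b, _ := json.Marshal(emptySlashings)
	fmt.Println(string(a)) // null  <- what clients saw before the fix
	fmt.Println(string(b)) // []    <- what the Beacon API spec expects
}
```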
## [v6.0.2](https://github.com/prysmaticlabs/prysm/compare/v6.0.1...v6.0.2) - 2025-05-12

This is a patch release that fixes a few important bugs. Most importantly, the index limit for field tries in the beacon state has been adjusted to better support Pectra states. This should alleviate the memory issues clients have been seeing since the Pectra mainnet fork.

### Added

- Enable light client gossip for optimistic and finality updates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15220)
- Implement PeerDAS core functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15192)
- Force duties to start on received blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15251)
- Added additional tracing spans for the GetDuties routine. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15258)

### Changed

- Use otelgrpc for tracing the gRPC server and client. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15237)
- Upgraded ristretto to v2.2.0, for RISC-V support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15170)
- Update spec compliance to v1.5.0, which changes the minimal execution requests size. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15256)
- Increase the indices limit in field trie rebuilding. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15252)
- Increase the Sepolia gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15253)

### Fixed

- Fixed a wrong field name returned in the pending partial withdrawals portion of the state JSON representation, described in https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/beacon-chain.md#pendingpartialwithdrawal. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15254)
- Fixed gocognit lint findings on the propose block REST path. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15147)

## [v6.0.1](https://github.com/prysmaticlabs/prysm/compare/v6.0.0...v6.0.1) - 2025-05-02

This release fixes two bugs related to the `payload_attributes` [event stream](https://ethereum.github.io/beacon-APIs/#/Events/eventstream). If you are using or planning to use this endpoint, upgrading to version 6.0.1 is mandatory.
README.md

```diff
@@ -4,7 +4,7 @@ Note: The latest and most up-to-date documentation can be found on our [docs por
 
 Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
 
-You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PR’s after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/CTYGPUJ) drop us a line there if you want to get more involved or have any questions on our implementation!
+You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PR’s after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/prysm) drop us a line there if you want to get more involved or have any questions on our implementation!
 
 > [!IMPORTANT]
 > Please, **do not send pull requests for trivial changes**, such as typos, these will be rejected. These types of pull requests incur a cost to reviewers and do not provide much value to the project. If you are unsure, please open an issue first to discuss the change.
@@ -6,7 +6,7 @@
 [](https://goreportcard.com/report/github.com/OffchainLabs/prysm)
 [](https://github.com/ethereum/consensus-specs/tree/v1.4.0)
 [](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
-[](https://discord.gg/OffchainLabs)
+[](https://discord.gg/prysm)
 [](https://www.gitpoap.io/gh/OffchainLabs/prysm)
 
 </div>
@@ -25,7 +25,7 @@ See the [Changelog](https://github.com/OffchainLabs/prysm/releases) for details
 
 A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the **[official documentation portal](https://docs.prylabs.network)**.
 
-💬 **Need help?** Join our **[Discord Community](https://discord.gg/OffchainLabs)** for support.
+💬 **Need help?** Join our **[Discord Community](https://discord.gg/prysm)** for support.
 
 ---
```
WORKSPACE: 66 changed lines

```diff
@@ -1,7 +1,7 @@
 workspace(name = "prysm")
 
-load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
 load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
+load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
 
 http_archive(
     name = "rules_pkg",
@@ -16,8 +16,6 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
 
 rules_pkg_dependencies()
 
-load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
-
 http_archive(
     name = "toolchains_protoc",
     sha256 = "abb1540f8a9e045422730670ebb2f25b41fa56ca5a7cf795175a110a0a68f4ad",
@@ -255,56 +253,18 @@ filegroup(
     url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
 )
 
-consensus_spec_version = "v1.5.0"
+consensus_spec_version = "v1.6.0-alpha.1"
 
-bls_test_version = "v0.1.1"
+load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
 
-http_archive(
-    name = "consensus_spec_tests_general",
-    build_file_content = """
-filegroup(
-    name = "test_data",
-    srcs = glob([
-        "**/*.ssz_snappy",
-        "**/*.yaml",
-    ]),
-    visibility = ["//visibility:public"],
-)
-""",
-    integrity = "sha256-cI+DJe3BXlZ0lr28w3USi2lnYOUUfdi/YZ3nJuRiiYU=",
-    url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
-)
-
-http_archive(
-    name = "consensus_spec_tests_minimal",
-    build_file_content = """
-filegroup(
-    name = "test_data",
-    srcs = glob([
-        "**/*.ssz_snappy",
-        "**/*.yaml",
-    ]),
-    visibility = ["//visibility:public"],
-)
-""",
-    integrity = "sha256-eBLWqO/RdcqsANmA/rwkJ4kI+LCL+Q0RmIDq6z85lYQ=",
-    url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
-)
-
-http_archive(
-    name = "consensus_spec_tests_mainnet",
-    build_file_content = """
-filegroup(
-    name = "test_data",
-    srcs = glob([
-        "**/*.ssz_snappy",
-        "**/*.yaml",
-    ]),
-    visibility = ["//visibility:public"],
-)
-""",
-    integrity = "sha256-ab0H0WTzhSwYJ2a+GHVbUMoNRActJw18EmX3o5hhDi0",
-    url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
-)
+consensus_spec_tests(
+    name = "consensus_spec_tests",
+    flavors = {
+        "general": "sha256-o4t9p3R+fQHF4KOykGmwlG3zDw5wUdVWprkzId8aIsk=",
+        "minimal": "sha256-sU7ToI8t3MR8x0vVjC8ERmAHZDWpEmnAC9FWIpHi5x4=",
+        "mainnet": "sha256-YKS4wngg0LgI9Upp4MYJ77aG+8+e/G4YeqEIlp06LZw=",
+    },
+    version = consensus_spec_version,
+)
 
 http_archive(
@@ -318,11 +278,13 @@ filegroup(
     visibility = ["//visibility:public"],
 )
 """,
-    integrity = "sha256-Wy3YcJxoXiKQwrGgJecrtjtdokc4X/VUNBmyQXJf0Oc=",
+    integrity = "sha256-Nv4TEuEJPQIM4E6T9J0FOITsmappmXZjGtlhe1HEXnU=",
     strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
     url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
 )
 
+bls_test_version = "v0.1.1"
+
 http_archive(
     name = "bls_spec_tests",
     build_file_content = """
```
```diff
@@ -1,7 +1,6 @@
 package health
 
 import (
-	"context"
 	"sync"
 	"testing"
 
@@ -23,7 +22,7 @@ func TestNodeHealth_IsHealthy(t *testing.T) {
 				isHealthy:  &tt.isHealthy,
 				healthChan: make(chan bool, 1),
 			}
-			if got := n.IsHealthy(context.Background()); got != tt.want {
+			if got := n.IsHealthy(t.Context()); got != tt.want {
 				t.Errorf("IsHealthy() = %v, want %v", got, tt.want)
 			}
 		})
@@ -54,7 +53,7 @@ func TestNodeHealth_UpdateNodeHealth(t *testing.T) {
 				healthChan: make(chan bool, 1),
 			}
 
-			s := n.CheckHealth(context.Background())
+			s := n.CheckHealth(t.Context())
 			// Check if health status was updated
 			if s != tt.newStatus {
 				t.Errorf("UpdateNodeHealth() failed to update isHealthy from %v to %v", tt.initial, tt.newStatus)
@@ -93,9 +92,9 @@ func TestNodeHealth_Concurrency(t *testing.T) {
 		go func() {
 			defer wg.Done()
 			client.EXPECT().IsHealthy(gomock.Any()).Return(false).Times(1)
-			n.CheckHealth(context.Background())
+			n.CheckHealth(t.Context())
 			client.EXPECT().IsHealthy(gomock.Any()).Return(true).Times(1)
-			n.CheckHealth(context.Background())
+			n.CheckHealth(t.Context())
 		}()
 	}
 
@@ -103,7 +102,7 @@ func TestNodeHealth_Concurrency(t *testing.T) {
 	for i := 0; i < numGoroutines; i++ {
 		go func() {
 			defer wg.Done()
-			_ = n.IsHealthy(context.Background()) // Just read the value
+			_ = n.IsHealthy(t.Context()) // Just read the value
 		}()
 	}
 
```
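The recurring `context.Background()` to `t.Context()` swap above (and in the builder client tests further down) uses the test-scoped context added in Go 1.24: `t.Context()` returns a context that is canceled just before the test's cleanup functions run, so goroutines blocked on it unwind without a hand-rolled cancel. A minimal sketch of the pattern, with a hypothetical `waitForShutdown` helper:

```go
package health_test

import (
	"context"
	"testing"
)

// waitForShutdown blocks until ctx is done, standing in for any
// helper that takes a context (like IsHealthy or CheckHealth above).
func waitForShutdown(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}

func TestContextIsTestScoped(t *testing.T) {
	// t.Context() (Go 1.24+) is canceled automatically as the test
	// ends, so this goroutine cannot outlive the test.
	ctx := t.Context()
	done := make(chan error, 1)
	go func() { done <- waitForShutdown(ctx) }()

	t.Cleanup(func() {
		// By the time cleanup functions run, ctx is already canceled.
		if err := <-done; err == nil {
			t.Error("expected a canceled context")
		}
	})
}
```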
```diff
@@ -13,7 +13,6 @@ go_library(
     deps = [
         "//api:go_default_library",
         "//api/client:go_default_library",
         "//api/server:go_default_library",
         "//api/server/structs:go_default_library",
         "//config/fieldparams:go_default_library",
         "//config/params:go_default_library",
@@ -28,7 +27,6 @@ go_library(
         "//proto/engine/v1:go_default_library",
         "//proto/prysm/v1alpha1:go_default_library",
         "//runtime/version:go_default_library",
         "@com_github_ethereum_go_ethereum//common:go_default_library",
         "@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
         "@com_github_pkg_errors//:go_default_library",
         "@com_github_prysmaticlabs_fastssz//:go_default_library",
```
```diff
@@ -241,7 +241,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
 		return nil, errors.Wrap(err, "error getting header from builder server")
 	}
 
-	bid, err := c.parseHeaderResponse(data, header)
+	bid, err := c.parseHeaderResponse(data, header, slot)
 	if err != nil {
 		return nil, errors.Wrapf(
 			err,
@@ -254,7 +254,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
 	return bid, nil
 }
 
-func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid, error) {
+func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primitives.Slot) (SignedBid, error) {
 	var versionHeader string
 	if c.sszEnabled || header.Get(api.VersionHeader) != "" {
 		versionHeader = header.Get(api.VersionHeader)
@@ -276,7 +276,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid
 	}
 
 	if ver >= version.Electra {
-		return c.parseHeaderElectra(data)
+		return c.parseHeaderElectra(data, slot)
 	}
 	if ver >= version.Deneb {
 		return c.parseHeaderDeneb(data)
@@ -291,7 +291,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid
 	return nil, fmt.Errorf("unsupported header version %s", versionHeader)
 }
 
-func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
+func (c *Client) parseHeaderElectra(data []byte, slot primitives.Slot) (SignedBid, error) {
 	if c.sszEnabled {
 		sb := &ethpb.SignedBuilderBidElectra{}
 		if err := sb.UnmarshalSSZ(data); err != nil {
@@ -303,7 +303,7 @@ func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
 		if err := json.Unmarshal(data, hr); err != nil {
 			return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseElectra JSON")
 		}
-		p, err := hr.ToProto()
+		p, err := hr.ToProto(slot)
 		if err != nil {
 			return nil, errors.Wrap(err, "could not convert ExecHeaderResponseElectra to proto")
 		}
```
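One plausible reading of this refactor, given the blob schedule entry in the changelog above (consensus-specs #4277), is that limits such as the blob commitment cap are no longer derivable from the fork version alone; they can vary by epoch, so the bid parser needs to know which slot the bid is for. The sketch below shows that general shape under that assumption, with hypothetical names (`Bid`, `maxCommitmentsAt`), not Prysm's actual types:

```go
package builder

import "fmt"

// Slot stands in for primitives.Slot.
type Slot uint64

// Bid is a hypothetical decoded builder bid; only the field relevant
// to the sketch is shown.
type Bid struct {
	BlobKzgCommitments [][]byte
}

// maxCommitmentsAt is a hypothetical stand-in for a blob-schedule
// lookup: the cap can change per epoch, so it needs the slot.
func maxCommitmentsAt(slot Slot) int {
	if slot >= 1000 { // assumed schedule boundary, for illustration only
		return 9
	}
	return 6
}

// parseBid shows why the slot has to ride along with the raw bytes:
// validation of the decoded bid is slot-dependent.
func parseBid(data []byte, slot Slot) (*Bid, error) {
	bid, err := decode(data) // assume some JSON/SSZ decoding step
	if err != nil {
		return nil, err
	}
	if got, max := len(bid.BlobKzgCommitments), maxCommitmentsAt(slot); got > max {
		return nil, fmt.Errorf("bid has %d blob commitments, max %d at slot %d", got, max, slot)
	}
	return bid, nil
}

func decode(data []byte) (*Bid, error) { return &Bid{}, nil } // placeholder decoder
```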
```diff
@@ -2,7 +2,6 @@ package builder
 
 import (
 	"bytes"
-	"context"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -31,7 +30,7 @@ func (fn roundtrip) RoundTrip(r *http.Request) (*http.Response, error) {
 }
 
 func TestClient_Status(t *testing.T) {
-	ctx := context.Background()
+	ctx := t.Context()
 	statusPath := "/eth/v1/builder/status"
 	hc := &http.Client{
 		Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -84,7 +83,7 @@ func TestClient_Status(t *testing.T) {
 }
 
 func TestClient_RegisterValidator(t *testing.T) {
-	ctx := context.Background()
+	ctx := t.Context()
 	expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]`
 	expectedPath := "/eth/v1/builder/validators"
 	t.Run("JSON success", func(t *testing.T) {
@@ -168,7 +167,7 @@ func TestClient_RegisterValidator(t *testing.T) {
 }
 
 func TestClient_GetHeader(t *testing.T) {
-	ctx := context.Background()
+	ctx := t.Context()
 	expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
 	var slot primitives.Slot = 23
 	parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
@@ -532,7 +531,7 @@ func TestClient_GetHeader(t *testing.T) {
 			require.Equal(t, expectedPath, r.URL.Path)
 			epr := &ExecHeaderResponseElectra{}
 			require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
-			pro, err := epr.ToProto()
+			pro, err := epr.ToProto(100)
 			require.NoError(t, err)
 			ssz, err := pro.MarshalSSZ()
 			require.NoError(t, err)
@@ -601,7 +600,7 @@ func TestClient_GetHeader(t *testing.T) {
 }
 
 func TestSubmitBlindedBlock(t *testing.T) {
-	ctx := context.Background()
+	ctx := t.Context()
 
 	t.Run("bellatrix", func(t *testing.T) {
 		hc := &http.Client{
@@ -640,9 +639,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
 			require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
 			epr := &ExecutionPayloadResponse{}
 			require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
-			ep := &ExecutionPayload{}
+			ep := &structs.ExecutionPayload{}
 			require.NoError(t, json.Unmarshal(epr.Data, ep))
-			pro, err := ep.ToProto()
+			pro, err := ep.ToConsensus()
 			require.NoError(t, err)
 			ssz, err := pro.MarshalSSZ()
 			require.NoError(t, err)
@@ -710,9 +709,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
 			require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
 			epr := &ExecutionPayloadResponse{}
 			require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadCapella), epr))
-			ep := &ExecutionPayloadCapella{}
+			ep := &structs.ExecutionPayloadCapella{}
 			require.NoError(t, json.Unmarshal(epr.Data, ep))
-			pro, err := ep.ToProto()
+			pro, err := ep.ToConsensus()
 			require.NoError(t, err)
 			ssz, err := pro.MarshalSSZ()
 			require.NoError(t, err)
@@ -1559,7 +1558,7 @@ func TestRequestLogger(t *testing.T) {
 	c, err := NewClient("localhost:3500", wo)
 	require.NoError(t, err)
 
-	ctx := context.Background()
+	ctx := t.Context()
 	hc := &http.Client{
 		Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
 			require.Equal(t, getStatus, r.URL.Path)
```
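The Bellatrix and Capella cases above share one test pattern: decode the canned JSON response into an `api/server/structs` payload type, convert it to the consensus proto with `ToConsensus()`, and SSZ-marshal that to produce the mock server's octet-stream body. A stripped-down, self-contained sketch of the same flow, using hypothetical interfaces in place of the concrete Prysm types:

```go
package builder

import "encoding/json"

// sszMarshaler is the behavior the consensus proto types provide.
type sszMarshaler interface {
	MarshalSSZ() ([]byte, error)
}

// jsonPayload stands in for a structs payload type: a JSON-facing
// struct whose ToConsensus() yields the SSZ-capable representation.
type jsonPayload interface {
	ToConsensus() (sszMarshaler, error)
}

// jsonToSSZBody replays the round trip the tests perform: canned JSON
// in, SSZ bytes out, ready to serve as an octet-stream response body.
func jsonToSSZBody(raw []byte, into jsonPayload) ([]byte, error) {
	if err := json.Unmarshal(raw, into); err != nil {
		return nil, err
	}
	pro, err := into.ToConsensus()
	if err != nil {
		return nil, err
	}
	return pro.MarshalSSZ()
}
```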
(One file's diff is suppressed because it is too large.)
```diff
@@ -328,72 +328,72 @@ func TestExecutionHeaderResponseUnmarshal(t *testing.T) {
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.ParentHash),
+			actual:   hr.Data.Message.Header.ParentHash,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
 		},
 		{
 			expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
-			actual:   hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
+			actual:   hr.Data.Message.Header.FeeRecipient,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.StateRoot),
+			actual:   hr.Data.Message.Header.StateRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
+			actual:   hr.Data.Message.Header.ReceiptsRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
 		},
 		{
 			expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-			actual:   hexutil.Encode(hr.Data.Message.Header.LogsBloom),
+			actual:   hr.Data.Message.Header.LogsBloom,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.PrevRandao),
+			actual:   hr.Data.Message.Header.PrevRandao,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
+			actual:   hr.Data.Message.Header.BlockNumber,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
+			actual:   hr.Data.Message.Header.GasLimit,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
+			actual:   hr.Data.Message.Header.GasUsed,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
+			actual:   hr.Data.Message.Header.Timestamp,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.ExtraData),
+			actual:   hr.Data.Message.Header.ExtraData,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
 		},
 		{
 			expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
+			actual:   hr.Data.Message.Header.BaseFeePerGas,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.BlockHash),
+			actual:   hr.Data.Message.Header.BlockHash,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
+			actual:   hr.Data.Message.Header.TransactionsRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
 		},
 	}
@@ -427,77 +427,77 @@ func TestExecutionHeaderResponseCapellaUnmarshal(t *testing.T) {
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.ParentHash),
+			actual:   hr.Data.Message.Header.ParentHash,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.ParentHash",
 		},
 		{
 			expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
-			actual:   hexutil.Encode(hr.Data.Message.Header.FeeRecipient),
+			actual:   hr.Data.Message.Header.FeeRecipient,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.FeeRecipient",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.StateRoot),
+			actual:   hr.Data.Message.Header.StateRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.StateRoot",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.ReceiptsRoot),
+			actual:   hr.Data.Message.Header.ReceiptsRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.ReceiptsRoot",
 		},
 		{
 			expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-			actual:   hexutil.Encode(hr.Data.Message.Header.LogsBloom),
+			actual:   hr.Data.Message.Header.LogsBloom,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.LogsBloom",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.PrevRandao),
+			actual:   hr.Data.Message.Header.PrevRandao,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.PrevRandao",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.BlockNumber),
+			actual:   hr.Data.Message.Header.BlockNumber,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.BlockNumber",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.GasLimit),
+			actual:   hr.Data.Message.Header.GasLimit,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.GasLimit",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.GasUsed),
+			actual:   hr.Data.Message.Header.GasUsed,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.GasUsed",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.Timestamp),
+			actual:   hr.Data.Message.Header.Timestamp,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.Timestamp",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.ExtraData),
+			actual:   hr.Data.Message.Header.ExtraData,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.ExtraData",
 		},
 		{
 			expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
-			actual:   fmt.Sprintf("%d", hr.Data.Message.Header.BaseFeePerGas),
+			actual:   hr.Data.Message.Header.BaseFeePerGas,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.BaseFeePerGas",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.BlockHash),
+			actual:   hr.Data.Message.Header.BlockHash,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.BlockHash",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.TransactionsRoot),
+			actual:   hr.Data.Message.Header.TransactionsRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.TransactionsRoot",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(hr.Data.Message.Header.WithdrawalsRoot),
+			actual:   hr.Data.Message.Header.WithdrawalsRoot,
 			name:     "ExecHeaderResponse.ExecutionPayloadHeader.WithdrawalsRoot",
 		},
 	}
```
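The mechanical `hexutil.Encode(...)` to plain-field changes above reflect a data-model shift: the API structs now carry hex-encoded strings exactly as they appear on the wire, instead of decoded `[]byte`, so the tests compare the JSON values directly. A minimal sketch of the two shapes, using a hypothetical header type:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// Before: bytes in memory, so tests had to re-encode to compare.
type headerBytes struct {
	ParentHash hexutil.Bytes `json:"parent_hash"`
}

// After: the string stays exactly as it appeared in the JSON.
type headerString struct {
	ParentHash string `json:"parent_hash"`
}

func main() {
	raw := []byte(`{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`)

	var hb headerBytes
	_ = json.Unmarshal(raw, &hb)
	fmt.Println(hexutil.Encode(hb.ParentHash)) // re-encode just to compare

	var hs headerString
	_ = json.Unmarshal(raw, &hs)
	fmt.Println(hs.ParentHash) // compare directly
}
```

Keeping the string form also avoids a lossy decode/encode round trip in the test assertions themselves.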
```diff
@@ -867,88 +867,6 @@ var testExampleExecutionPayloadDenebDifferentProofCount = fmt.Sprintf(`{
 	}
 }`, hexutil.Encode(make([]byte, fieldparams.BlobLength)))
 
-func TestExecutionPayloadResponseUnmarshal(t *testing.T) {
-	epr := &ExecPayloadResponse{}
-	require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
-	cases := []struct {
-		expected string
-		actual   string
-		name     string
-	}{
-		{
-			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ParentHash),
-			name:     "ExecPayloadResponse.ExecutionPayload.ParentHash",
-		},
-		{
-			expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
-			actual:   hexutil.Encode(epr.Data.FeeRecipient),
-			name:     "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
-		},
-		{
-			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.StateRoot),
-			name:     "ExecPayloadResponse.ExecutionPayload.StateRoot",
-		},
-		{
-			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ReceiptsRoot),
-			name:     "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
-		},
-		{
-			expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-			actual:   hexutil.Encode(epr.Data.LogsBloom),
-			name:     "ExecPayloadResponse.ExecutionPayload.LogsBloom",
-		},
-		{
-			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.PrevRandao),
-			name:     "ExecPayloadResponse.ExecutionPayload.PrevRandao",
-		},
-		{
-			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.BlockNumber),
-			name:     "ExecPayloadResponse.ExecutionPayload.BlockNumber",
-		},
-		{
-			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.GasLimit),
-			name:     "ExecPayloadResponse.ExecutionPayload.GasLimit",
-		},
-		{
-			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.GasUsed),
-			name:     "ExecPayloadResponse.ExecutionPayload.GasUsed",
-		},
-		{
-			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.Timestamp),
-			name:     "ExecPayloadResponse.ExecutionPayload.Timestamp",
-		},
-		{
-			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExtraData),
-			name:     "ExecPayloadResponse.ExecutionPayload.ExtraData",
-		},
-		{
-			expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
-			actual:   fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
-			name:     "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
-		},
-		{
-			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.BlockHash),
-			name:     "ExecPayloadResponse.ExecutionPayload.BlockHash",
-		},
-	}
-	for _, c := range cases {
-		require.Equal(t, c.expected, c.actual, fmt.Sprintf("unexpected value for field %s", c.name))
-	}
-	require.Equal(t, 1, len(epr.Data.Transactions))
-	txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
-	require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
-}
-
 func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
 	epr := &ExecPayloadResponseCapella{}
 	require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadCapella), epr))
```
```diff
@@ -959,67 +877,67 @@ func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
 	}{
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ParentHash),
+			actual:   epr.Data.ParentHash,
 			name:     "ExecPayloadResponse.ExecutionPayload.ParentHash",
 		},
 		{
 			expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
-			actual:   hexutil.Encode(epr.Data.FeeRecipient),
+			actual:   epr.Data.FeeRecipient,
 			name:     "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.StateRoot),
+			actual:   epr.Data.StateRoot,
 			name:     "ExecPayloadResponse.ExecutionPayload.StateRoot",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ReceiptsRoot),
+			actual:   epr.Data.ReceiptsRoot,
 			name:     "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
 		},
 		{
 			expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-			actual:   hexutil.Encode(epr.Data.LogsBloom),
+			actual:   epr.Data.LogsBloom,
 			name:     "ExecPayloadResponse.ExecutionPayload.LogsBloom",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.PrevRandao),
+			actual:   epr.Data.PrevRandao,
 			name:     "ExecPayloadResponse.ExecutionPayload.PrevRandao",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.BlockNumber),
+			actual:   epr.Data.BlockNumber,
 			name:     "ExecPayloadResponse.ExecutionPayload.BlockNumber",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.GasLimit),
+			actual:   epr.Data.GasLimit,
 			name:     "ExecPayloadResponse.ExecutionPayload.GasLimit",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.GasUsed),
+			actual:   epr.Data.GasUsed,
 			name:     "ExecPayloadResponse.ExecutionPayload.GasUsed",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.Timestamp),
+			actual:   epr.Data.Timestamp,
 			name:     "ExecPayloadResponse.ExecutionPayload.Timestamp",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExtraData),
+			actual:   epr.Data.ExtraData,
 			name:     "ExecPayloadResponse.ExecutionPayload.ExtraData",
 		},
 		{
 			expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
-			actual:   fmt.Sprintf("%d", epr.Data.BaseFeePerGas),
+			actual:   epr.Data.BaseFeePerGas,
 			name:     "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.BlockHash),
+			actual:   epr.Data.BlockHash,
 			name:     "ExecPayloadResponse.ExecutionPayload.BlockHash",
 		},
 	}
@@ -1028,14 +946,14 @@ func TestExecutionPayloadResponseCapellaUnmarshal(t *testing.T) {
 	}
 	require.Equal(t, 1, len(epr.Data.Transactions))
 	txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
-	require.Equal(t, txHash, hexutil.Encode(epr.Data.Transactions[0]))
+	require.Equal(t, txHash, epr.Data.Transactions[0])
 
 	require.Equal(t, 1, len(epr.Data.Withdrawals))
 	w := epr.Data.Withdrawals[0]
-	assert.Equal(t, uint64(1), w.Index.Uint64())
-	assert.Equal(t, uint64(1), w.ValidatorIndex.Uint64())
-	assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.Address.String())
-	assert.Equal(t, uint64(1), w.Amount.Uint64())
+	assert.Equal(t, "1", w.WithdrawalIndex)
+	assert.Equal(t, "1", w.ValidatorIndex)
+	assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.ExecutionAddress)
+	assert.Equal(t, "1", w.Amount)
 }
 
 func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
```
```diff
@@ -1048,77 +966,77 @@ func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
 	}{
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.ParentHash),
+			actual:   epr.Data.ExecutionPayload.ParentHash,
 			name:     "ExecPayloadResponse.ExecutionPayload.ParentHash",
 		},
 		{
 			expected: "0xabcf8e0d4e9587369b2301d0790347320302cc09",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.FeeRecipient),
+			actual:   epr.Data.ExecutionPayload.FeeRecipient,
 			name:     "ExecPayloadResponse.ExecutionPayload.FeeRecipient",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.StateRoot),
+			actual:   epr.Data.ExecutionPayload.StateRoot,
 			name:     "ExecPayloadResponse.ExecutionPayload.StateRoot",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.ReceiptsRoot),
+			actual:   epr.Data.ExecutionPayload.ReceiptsRoot,
 			name:     "ExecPayloadResponse.ExecutionPayload.ReceiptsRoot",
 		},
 		{
 			expected: "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.LogsBloom),
+			actual:   epr.Data.ExecutionPayload.LogsBloom,
 			name:     "ExecPayloadResponse.ExecutionPayload.LogsBloom",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.PrevRandao),
+			actual:   epr.Data.ExecutionPayload.PrevRandao,
 			name:     "ExecPayloadResponse.ExecutionPayload.PrevRandao",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.BlockNumber),
+			actual:   epr.Data.ExecutionPayload.BlockNumber,
 			name:     "ExecPayloadResponse.ExecutionPayload.BlockNumber",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.GasLimit),
+			actual:   epr.Data.ExecutionPayload.GasLimit,
 			name:     "ExecPayloadResponse.ExecutionPayload.GasLimit",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.GasUsed),
+			actual:   epr.Data.ExecutionPayload.GasUsed,
 			name:     "ExecPayloadResponse.ExecutionPayload.GasUsed",
 		},
 		{
 			expected: "1",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.Timestamp),
+			actual:   epr.Data.ExecutionPayload.Timestamp,
 			name:     "ExecPayloadResponse.ExecutionPayload.Timestamp",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.ExtraData),
+			actual:   epr.Data.ExecutionPayload.ExtraData,
 			name:     "ExecPayloadResponse.ExecutionPayload.ExtraData",
 		},
 		{
 			expected: "452312848583266388373324160190187140051835877600158453279131187530910662656",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.BaseFeePerGas),
+			actual:   epr.Data.ExecutionPayload.BaseFeePerGas,
 			name:     "ExecPayloadResponse.ExecutionPayload.BaseFeePerGas",
 		},
 		{
 			expected: "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
-			actual:   hexutil.Encode(epr.Data.ExecutionPayload.BlockHash),
+			actual:   epr.Data.ExecutionPayload.BlockHash,
 			name:     "ExecPayloadResponse.ExecutionPayload.BlockHash",
 		},
 		{
 			expected: "2",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.BlobGasUsed),
+			actual:   epr.Data.ExecutionPayload.BlobGasUsed,
 			name:     "ExecPayloadResponse.ExecutionPayload.BlobGasUsed",
 		},
 		{
 			expected: "3",
-			actual:   fmt.Sprintf("%d", epr.Data.ExecutionPayload.ExcessBlobGas),
+			actual:   epr.Data.ExecutionPayload.ExcessBlobGas,
 			name:     "ExecPayloadResponse.ExecutionPayload.ExcessBlobGas",
 		},
 	}
@@ -1127,64 +1045,16 @@ func TestExecutionPayloadResponseDenebUnmarshal(t *testing.T) {
 	}
 	require.Equal(t, 1, len(epr.Data.ExecutionPayload.Transactions))
 	txHash := "0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
-	require.Equal(t, txHash, hexutil.Encode(epr.Data.ExecutionPayload.Transactions[0]))
+	require.Equal(t, txHash, epr.Data.ExecutionPayload.Transactions[0])
 
 	require.Equal(t, 1, len(epr.Data.ExecutionPayload.Withdrawals))
 	w := epr.Data.ExecutionPayload.Withdrawals[0]
-	assert.Equal(t, uint64(1), w.Index.Uint64())
-	assert.Equal(t, uint64(1), w.ValidatorIndex.Uint64())
-	assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.Address.String())
-	assert.Equal(t, uint64(1), w.Amount.Uint64())
-	assert.Equal(t, uint64(2), uint64(epr.Data.ExecutionPayload.BlobGasUsed))
-	assert.Equal(t, uint64(3), uint64(epr.Data.ExecutionPayload.ExcessBlobGas))
-}
-
-func TestExecutionPayloadResponseToProto(t *testing.T) {
-	hr := &ExecPayloadResponse{}
-	require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), hr))
-	p, err := hr.ToProto()
-	require.NoError(t, err)
-
-	parentHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
-	require.NoError(t, err)
-	feeRecipient, err := hexutil.Decode("0xabcf8e0d4e9587369b2301d0790347320302cc09")
-	require.NoError(t, err)
-	stateRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
-	require.NoError(t, err)
-	receiptsRoot, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
-	require.NoError(t, err)
-	logsBloom, err := hexutil.Decode("0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
-	require.NoError(t, err)
-	prevRandao, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
-	require.NoError(t, err)
-	extraData, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
-	require.NoError(t, err)
-	blockHash, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
-	require.NoError(t, err)
-
-	tx, err := hexutil.Decode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")
-	require.NoError(t, err)
-	txList := [][]byte{tx}
-
-	bfpg, err := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
-	require.NoError(t, err)
-	expected := &v1.ExecutionPayload{
-		ParentHash:    parentHash,
-		FeeRecipient:  feeRecipient,
-		StateRoot:     stateRoot,
-		ReceiptsRoot:  receiptsRoot,
-		LogsBloom:     logsBloom,
-		PrevRandao:    prevRandao,
-		BlockNumber:   1,
-		GasLimit:      1,
-		GasUsed:       1,
-		Timestamp:     1,
-		ExtraData:     extraData,
-		BaseFeePerGas: bfpg.SSZBytes(),
-		BlockHash:     blockHash,
-		Transactions:  txList,
-	}
-	require.DeepEqual(t, expected, p)
+	assert.Equal(t, "1", w.WithdrawalIndex)
+	assert.Equal(t, "1", w.ValidatorIndex)
+	assert.DeepEqual(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943", w.ExecutionAddress)
+	assert.Equal(t, "1", w.Amount)
+	assert.Equal(t, "2", epr.Data.ExecutionPayload.BlobGasUsed)
+	assert.Equal(t, "3", epr.Data.ExecutionPayload.ExcessBlobGas)
 }
 
 func TestExecutionPayloadResponseCapellaToProto(t *testing.T) {
@@ -1352,16 +1222,6 @@ func pbEth1Data() *eth.Eth1Data {
 	}
 }
 
-func TestEth1DataMarshal(t *testing.T) {
-	ed := &Eth1Data{
-		Eth1Data: pbEth1Data(),
-	}
-	b, err := json.Marshal(ed)
-	require.NoError(t, err)
-	expected := `{"deposit_root":"0x0000000000000000000000000000000000000000000000000000000000000000","deposit_count":"23","block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000"}`
-	require.Equal(t, expected, string(b))
-}
-
 func pbSyncAggregate() *eth.SyncAggregate {
 	return &eth.SyncAggregate{
 		SyncCommitteeSignature: make([]byte, 48),
@@ -1369,14 +1229,6 @@ func pbSyncAggregate() *eth.SyncAggregate {
 	}
 }
 
-func TestSyncAggregate_MarshalJSON(t *testing.T) {
-	sa := &SyncAggregate{pbSyncAggregate()}
-	b, err := json.Marshal(sa)
-	require.NoError(t, err)
-	expected := `{"sync_committee_bits":"0x01","sync_committee_signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}`
-	require.Equal(t, expected, string(b))
-}
-
 func pbDeposit(t *testing.T) *eth.Deposit {
 	return &eth.Deposit{
 		Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
@@ -1389,16 +1241,6 @@ func pbDeposit(t *testing.T) *eth.Deposit {
 	}
 }
 
-func TestDeposit_MarshalJSON(t *testing.T) {
-	d := &Deposit{
-		Deposit: pbDeposit(t),
-	}
-	b, err := json.Marshal(d)
-	require.NoError(t, err)
-	expected := `{"proof":["0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"],"data":{"pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a","withdrawal_credentials":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","amount":"1","signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
-	require.Equal(t, expected, string(b))
-}
-
 func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
 	return &eth.SignedVoluntaryExit{
 		Exit: &eth.VoluntaryExit{
@@ -1409,16 +1251,6 @@ func pbSignedVoluntaryExit(t *testing.T) *eth.SignedVoluntaryExit {
 	}
 }
 
-func TestVoluntaryExit(t *testing.T) {
-	ve := &SignedVoluntaryExit{
-		SignedVoluntaryExit: pbSignedVoluntaryExit(t),
-	}
-	b, err := json.Marshal(ve)
-	require.NoError(t, err)
-	expected := `{"message":{"epoch":"1","validator_index":"1"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
-	require.Equal(t, expected, string(b))
-}
-
 func pbAttestation(t *testing.T) *eth.Attestation {
 	return &eth.Attestation{
 		AggregationBits: bitfield.Bitlist{0x01},
@@ -1439,16 +1271,6 @@ func pbAttestation(t *testing.T) *eth.Attestation {
 	}
 }
 
-func TestAttestationMarshal(t *testing.T) {
-	a := &Attestation{
-		Attestation: pbAttestation(t),
-	}
-	b, err := json.Marshal(a)
-	require.NoError(t, err)
-	expected := `{"aggregation_bits":"0x01","data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}`
-	require.Equal(t, expected, string(b))
-}
-
 func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
 	return &eth.AttesterSlashing{
 		Attestation_1: &eth.IndexedAttestation{
@@ -1489,9 +1311,7 @@ func pbAttesterSlashing(t *testing.T) *eth.AttesterSlashing {
 	}
 
 func TestAttesterSlashing_MarshalJSON(t *testing.T) {
-	as := &AttesterSlashing{
-		AttesterSlashing: pbAttesterSlashing(t),
-	}
+	as := structs.AttesterSlashingFromConsensus(pbAttesterSlashing(t))
 	b, err := json.Marshal(as)
 	require.NoError(t, err)
 	expected := `{"attestation_1":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"},"attestation_2":{"attesting_indices":["1"],"data":{"slot":"1","index":"1","beacon_block_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","source":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"},"target":{"epoch":"1","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}}`
```
@@ -1599,9 +1419,8 @@ func pbExecutionPayloadHeaderDeneb(t *testing.T) *v1.ExecutionPayloadHeaderDeneb
|
||||
}
|
||||
|
||||
func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
|
||||
h := &ExecutionPayloadHeader{
|
||||
ExecutionPayloadHeader: pbExecutionPayloadHeader(t),
|
||||
}
|
||||
h, err := structs.ExecutionPayloadHeaderFromConsensus(pbExecutionPayloadHeader(t))
|
||||
require.NoError(t, err)
|
||||
b, err := json.Marshal(h)
|
||||
require.NoError(t, err)
|
||||
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
|
||||
@@ -1609,9 +1428,9 @@ func TestExecutionPayloadHeader_MarshalJSON(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestExecutionPayloadHeaderCapella_MarshalJSON(t *testing.T) {
|
||||
h := &ExecutionPayloadHeaderCapella{
|
||||
ExecutionPayloadHeaderCapella: pbExecutionPayloadHeaderCapella(t),
|
||||
}
|
||||
h, err := structs.ExecutionPayloadHeaderCapellaFromConsensus(pbExecutionPayloadHeaderCapella(t))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, err)
|
||||
b, err := json.Marshal(h)
|
||||
require.NoError(t, err)
|
||||
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`
|
||||
@@ -1619,9 +1438,8 @@ func TestExecutionPayloadHeaderCapella_MarshalJSON(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestExecutionPayloadHeaderDeneb_MarshalJSON(t *testing.T) {
|
||||
h := &ExecutionPayloadHeaderDeneb{
|
||||
ExecutionPayloadHeaderDeneb: pbExecutionPayloadHeaderDeneb(t),
|
||||
}
|
||||
h, err := structs.ExecutionPayloadHeaderDenebFromConsensus(pbExecutionPayloadHeaderDeneb(t))
|
||||
require.NoError(t, err)
|
||||
b, err := json.Marshal(h)
|
||||
require.NoError(t, err)
|
||||
expected := `{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"452312848583266388373324160190187140051835877600158453279131187530910662656","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","blob_gas_used":"1","excess_blob_gas":"2"}`
|
||||
@@ -1850,12 +1668,13 @@ func TestRoundTripUint256(t *testing.T) {
|
||||
func TestRoundTripProtoUint256(t *testing.T) {
|
||||
h := pbExecutionPayloadHeader(t)
|
||||
h.BaseFeePerGas = []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
|
||||
hm := &ExecutionPayloadHeader{ExecutionPayloadHeader: h}
|
||||
hm, err := structs.ExecutionPayloadHeaderFromConsensus(h)
|
||||
require.NoError(t, err)
|
||||
m, err := json.Marshal(hm)
|
||||
require.NoError(t, err)
|
||||
hu := &ExecutionPayloadHeader{}
|
||||
hu := &structs.ExecutionPayloadHeader{}
|
||||
require.NoError(t, json.Unmarshal(m, hu))
|
||||
hp, err := hu.ToProto()
|
||||
hp, err := hu.ToConsensus()
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, h.BaseFeePerGas, hp.BaseFeePerGas)
|
||||
}
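The round trip above works because `BaseFeePerGas` is stored as 32 little-endian SSZ bytes but exposed by the API types as a decimal string. A minimal sketch of that conversion using only the standard library; the helper `sszBytesToUint256String` seen in the structs diff below presumably does the equivalent:

```go
package main

import (
	"fmt"
	"math/big"
)

// reverse returns a copy of b in the opposite byte order, mirroring the
// little-endian SSZ encoding used for BaseFeePerGas.
func reverse(b []byte) []byte {
	out := make([]byte, len(b))
	for i := range b {
		out[i] = b[len(b)-1-i]
	}
	return out
}

func main() {
	le := make([]byte, 32)
	for i := range le {
		le[i] = byte(i) // 0, 1, 2, ... 31, as in the test above
	}
	// big.Int expects big-endian input, so flip the slice first.
	v := new(big.Int).SetBytes(reverse(le))
	fmt.Println(v.String()) // the decimal string form emitted in the JSON
}
```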

@@ -1863,7 +1682,7 @@ func TestRoundTripProtoUint256(t *testing.T) {
func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
    expected, err := os.ReadFile("testdata/execution-payload.json")
    require.NoError(t, err)
    hu := &ExecutionPayloadHeader{}
    hu := &structs.ExecutionPayloadHeader{}
    require.NoError(t, json.Unmarshal(expected, hu))
    m, err := json.Marshal(hu)
    require.NoError(t, err)
@@ -1873,7 +1692,7 @@ func TestExecutionPayloadHeaderRoundtrip(t *testing.T) {
func TestExecutionPayloadHeaderCapellaRoundtrip(t *testing.T) {
    expected, err := os.ReadFile("testdata/execution-payload-capella.json")
    require.NoError(t, err)
    hu := &ExecutionPayloadHeaderCapella{}
    hu := &structs.ExecutionPayloadHeaderCapella{}
    require.NoError(t, json.Unmarshal(expected, hu))
    m, err := json.Marshal(hu)
    require.NoError(t, err)
@@ -1994,11 +1813,9 @@ func TestEmptyResponseBody(t *testing.T) {
    epr := &ExecutionPayloadResponse{}
    require.NoError(t, json.Unmarshal(encoded, epr))
    pp, err := epr.ParsePayload()
    require.NoError(t, err)
    pb, err := pp.PayloadProto()
    if err == nil {
        require.NoError(t, err)
        require.Equal(t, false, pb == nil)
        require.Equal(t, false, pp == nil)
    } else {
        require.ErrorIs(t, err, consensusblocks.ErrNilObject)
}
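The `pb == nil` assertion in the success branch guards against a classic Go pitfall: `PayloadProto` returns a `proto.Message` interface, and an interface holding a typed nil pointer does not compare equal to nil. A self-contained illustration:

```go
package main

import "fmt"

type msg struct{}

func main() {
	var p *msg            // typed nil pointer
	var i interface{} = p // the interface now holds the pair (type *msg, value nil)
	fmt.Println(p == nil) // true
	fmt.Println(i == nil) // false: a non-nil interface wrapping a nil pointer
}
```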

@@ -1,7 +1,6 @@
package event

import (
    "context"
    "fmt"
    "net/http"
    "net/http/httptest"
@@ -30,7 +29,7 @@ func TestNewEventStream(t *testing.T) {

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            _, err := NewEventStream(context.Background(), &http.Client{}, tt.host, tt.topics)
            _, err := NewEventStream(t.Context(), &http.Client{}, tt.host, tt.topics)
            if (err != nil) != tt.wantErr {
                t.Errorf("NewEventStream() error = %v, wantErr %v", err, tt.wantErr)
            }
@@ -56,7 +55,7 @@ func TestEventStream(t *testing.T) {

    topics := []string{"head"}
    eventsChannel := make(chan *Event, 1)
    stream, err := NewEventStream(context.Background(), http.DefaultClient, server.URL, topics)
    stream, err := NewEventStream(t.Context(), http.DefaultClient, server.URL, topics)
    require.NoError(t, err)
    go stream.Subscribe(eventsChannel)

@@ -83,8 +82,7 @@ func TestEventStream(t *testing.T) {
func TestEventStreamRequestError(t *testing.T) {
    topics := []string{"head"}
    eventsChannel := make(chan *Event, 1)
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    ctx := t.Context()

    // use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)
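A pattern repeated across these test diffs is replacing `context.Background()`, often with manual `WithCancel`/`defer cancel()` plumbing, by `t.Context()`, introduced in Go 1.24. The returned context is canceled automatically just before the test's cleanup functions run, so no cancel function needs to be carried around. A minimal sketch:

```go
package example

import (
	"net/http"
	"testing"
)

func TestContextLifetime(t *testing.T) {
	// Canceled automatically just before Cleanup-registered functions run.
	ctx := t.Context()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://example.invalid", nil)
	if err != nil {
		t.Fatal(err)
	}
	_ = req // the request inherits the test's lifetime; no defer cancel() needed
}
```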

@@ -1,7 +1,6 @@
package grpc

import (
    "context"
    "testing"

    "github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -16,7 +15,7 @@ type customErrorData struct {

func TestAppendHeaders(t *testing.T) {
    t.Run("one_header", func(t *testing.T) {
        ctx := AppendHeaders(context.Background(), []string{"first=value1"})
        ctx := AppendHeaders(t.Context(), []string{"first=value1"})
        md, ok := metadata.FromOutgoingContext(ctx)
        require.Equal(t, true, ok, "Failed to read context metadata")
        require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -24,7 +23,7 @@ func TestAppendHeaders(t *testing.T) {
    })

    t.Run("multiple_headers", func(t *testing.T) {
        ctx := AppendHeaders(context.Background(), []string{"first=value1", "second=value2"})
        ctx := AppendHeaders(t.Context(), []string{"first=value1", "second=value2"})
        md, ok := metadata.FromOutgoingContext(ctx)
        require.Equal(t, true, ok, "Failed to read context metadata")
        require.Equal(t, 2, md.Len(), "MetadataV0 contains wrong number of values")
@@ -33,7 +32,7 @@ func TestAppendHeaders(t *testing.T) {
    })

    t.Run("one_empty_header", func(t *testing.T) {
        ctx := AppendHeaders(context.Background(), []string{"first=value1", ""})
        ctx := AppendHeaders(t.Context(), []string{"first=value1", ""})
        md, ok := metadata.FromOutgoingContext(ctx)
        require.Equal(t, true, ok, "Failed to read context metadata")
        require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -42,7 +41,7 @@

    t.Run("incorrect_header", func(t *testing.T) {
        logHook := logTest.NewGlobal()
        ctx := AppendHeaders(context.Background(), []string{"first=value1", "second"})
        ctx := AppendHeaders(t.Context(), []string{"first=value1", "second"})
        md, ok := metadata.FromOutgoingContext(ctx)
        require.Equal(t, true, ok, "Failed to read context metadata")
        require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -51,7 +50,7 @@
    })

    t.Run("header_value_with_equal_sign", func(t *testing.T) {
        ctx := AppendHeaders(context.Background(), []string{"first=value=1"})
        ctx := AppendHeaders(t.Context(), []string{"first=value=1"})
        md, ok := metadata.FromOutgoingContext(ctx)
        require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
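The `header_value_with_equal_sign` case above implies the parser splits each entry on the first `=` only, so values may themselves contain `=`. A sketch of that rule, not Prysm's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// SplitN with n=2 cuts at the first '=' and leaves the rest intact,
	// which is the behavior the test case exercises.
	kv := strings.SplitN("first=value=1", "=", 2)
	fmt.Println(kv[0], "->", kv[1]) // first -> value=1
}
```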

@@ -1,7 +1,6 @@
package httprest

import (
    "context"
    "flag"
    "fmt"
    "net"
@@ -34,7 +33,7 @@ func TestServer_StartStop(t *testing.T) {
        WithRouter(handler),
    }

    g, err := New(context.Background(), opts...)
    g, err := New(t.Context(), opts...)
    require.NoError(t, err)

    g.Start()
@@ -62,7 +61,7 @@ func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
        WithRouter(handler),
    }

    g, err := New(context.Background(), opts...)
    g, err := New(t.Context(), opts...)
    require.NoError(t, err)

    writer := httptest.NewRecorder()

@@ -31,6 +31,7 @@ go_library(
        "//beacon-chain/state:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/interfaces:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//consensus-types/validator:go_default_library",
@@ -44,6 +45,7 @@ go_library(
        "@com_github_ethereum_go_ethereum//common:go_default_library",
        "@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
        "@org_golang_google_protobuf//proto:go_default_library",
    ],
)

@@ -7,12 +7,15 @@ import (
    "github.com/OffchainLabs/prysm/v6/api/server"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/config/params"
    consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/container/slice"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/pkg/errors"
    "google.golang.org/protobuf/proto"
)

// ----------------------------------------------------------------------------
@@ -132,6 +135,13 @@ func (e *ExecutionPayload) ToConsensus() (*enginev1.ExecutionPayload, error) {
    }, nil
}

func (r *ExecutionPayload) PayloadProto() (proto.Message, error) {
    if r == nil {
        return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil execution payload")
    }
    return r.ToConsensus()
}

func ExecutionPayloadHeaderFromConsensus(payload *enginev1.ExecutionPayloadHeader) (*ExecutionPayloadHeader, error) {
    baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
    if err != nil {
@@ -383,6 +393,13 @@ func (e *ExecutionPayloadCapella) ToConsensus() (*enginev1.ExecutionPayloadCapel
    }, nil
}

func (p *ExecutionPayloadCapella) PayloadProto() (proto.Message, error) {
    if p == nil {
        return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil capella execution payload")
    }
    return p.ToConsensus()
}

func ExecutionPayloadHeaderCapellaFromConsensus(payload *enginev1.ExecutionPayloadHeaderCapella) (*ExecutionPayloadHeaderCapella, error) {
    baseFeePerGas, err := sszBytesToUint256String(payload.BaseFeePerGas)
    if err != nil {

@@ -923,7 +923,14 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
    if err != nil {
        return nil, err
    }

    srcLookahead, err := st.ProposerLookahead()
    if err != nil {
        return nil, err
    }
    lookahead := make([]string, len(srcLookahead))
    for i, v := range srcLookahead {
        lookahead[i] = fmt.Sprintf("%d", uint64(v))
    }
    return &BeaconStateFulu{
        GenesisTime:           fmt.Sprintf("%d", st.GenesisTime()),
        GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
@@ -962,5 +969,6 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
        PendingDeposits:           PendingDepositsFromConsensus(pbd),
        PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
        PendingConsolidations:     PendingConsolidationsFromConsensus(pc),
        ProposerLookahead:         lookahead,
    }, nil
}
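Note that the proposer lookahead indices, like every other `uint64` in these API types, are rendered as decimal strings. The beacon API does this because JSON numbers are commonly decoded as IEEE-754 doubles, which cannot represent all 64-bit values exactly; a quick demonstration:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Values above 2^53-1 cannot survive a trip through a double.
	v := uint64(1) << 60
	fmt.Println(strconv.FormatUint(v, 10))  // "1152921504606846976", exact
	fmt.Println(float64(v) == float64(v+1)) // true: the double cannot tell them apart
}
```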

@@ -25,6 +25,13 @@ type BlockGossipEvent struct {
    Block string `json:"block"`
}

type DataColumnGossipEvent struct {
    Slot           string   `json:"slot"`
    Index          string   `json:"index"`
    BlockRoot      string   `json:"block_root"`
    KzgCommitments []string `json:"kzg_commitments"`
}
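For reference, the JSON shape this new event type produces looks roughly like the sketch below. All field values are invented placeholders; a real event carries a 32-byte root and 48-byte commitments in hex:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local copy of the event type above, just to show its wire shape.
type DataColumnGossipEvent struct {
	Slot           string   `json:"slot"`
	Index          string   `json:"index"`
	BlockRoot      string   `json:"block_root"`
	KzgCommitments []string `json:"kzg_commitments"`
}

func main() {
	e := DataColumnGossipEvent{
		Slot:           "12345",
		Index:          "7",
		BlockRoot:      "0x00", // placeholder; real roots are 32 bytes
		KzgCommitments: []string{"0x00"},
	}
	b, err := json.Marshal(e)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"slot":"12345","index":"7","block_root":"0x00","kzg_commitments":["0x00"]}
}
```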

type AggregatedAttEventSource struct {
    Aggregate *Attestation `json:"aggregate"`
}

@@ -33,8 +33,14 @@ type GetPeerResponse struct {
    Data *Peer `json:"data"`
}

// Added Meta to align with beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
type Meta struct {
    Count int `json:"count"`
}

type GetPeersResponse struct {
    Data []*Peer `json:"data"`
    Meta Meta    `json:"meta"`
}

type Peer struct {

@@ -219,4 +219,5 @@ type BeaconStateFulu struct {
    PendingDeposits           []*PendingDeposit           `json:"pending_deposits"`
    PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
    PendingConsolidations     []*PendingConsolidation     `json:"pending_consolidations"`
    ProposerLookahead         []string                    `json:"proposer_lookahead"`
}

@@ -15,7 +15,7 @@ import (

func TestDebounce_NoEvents(t *testing.T) {
    eventsChan := make(chan interface{}, 100)
    ctx, cancel := context.WithCancel(context.Background())
    ctx, cancel := context.WithCancel(t.Context())
    interval := time.Second
    timesHandled := int32(0)
    wg := &sync.WaitGroup{}
@@ -39,7 +39,7 @@ func TestDebounce_NoEvents(t *testing.T) {

func TestDebounce_CtxClosing(t *testing.T) {
    eventsChan := make(chan interface{}, 100)
    ctx, cancel := context.WithCancel(context.Background())
    ctx, cancel := context.WithCancel(t.Context())
    interval := time.Second
    timesHandled := int32(0)
    wg := &sync.WaitGroup{}
@@ -75,7 +75,7 @@ func TestDebounce_CtxClosing(t *testing.T) {

func TestDebounce_SingleHandlerInvocation(t *testing.T) {
    eventsChan := make(chan interface{}, 100)
    ctx, cancel := context.WithCancel(context.Background())
    ctx, cancel := context.WithCancel(t.Context())
    interval := time.Second
    timesHandled := int32(0)
    go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
@@ -93,7 +93,7 @@ func TestDebounce_SingleHandlerInvocation(t *testing.T) {

func TestDebounce_MultipleHandlerInvocation(t *testing.T) {
    eventsChan := make(chan interface{}, 100)
    ctx, cancel := context.WithCancel(context.Background())
    ctx, cancel := context.WithCancel(t.Context())
    interval := time.Second
    timesHandled := int32(0)
    go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {

@@ -10,7 +10,7 @@ import (
)

func TestEveryRuns(t *testing.T) {
    ctx, cancel := context.WithCancel(context.Background())
    ctx, cancel := context.WithCancel(t.Context())

    i := int32(0)
    async.RunEvery(ctx, 100*time.Millisecond, func() {

@@ -4,6 +4,6 @@ This is the main project folder for the beacon chain implementation of Ethereum

You can also read our main [README](https://github.com/prysmaticlabs/prysm/blob/master/README.md) and join our active chat room on Discord.

[](https://discord.gg/CTYGPUJ)
[](https://discord.gg/prysm)

Also read the official beacon chain [specification](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md); this design spec serves as a source of truth for the beacon chain implementation we follow at Prysmatic Labs.

@@ -25,6 +25,7 @@ go_library(
        "receive_attestation.go",
        "receive_blob.go",
        "receive_block.go",
        "receive_data_column.go",
        "service.go",
        "setup_forchoice.go",
        "tracked_proposer.go",
@@ -50,6 +51,7 @@ go_library(
        "//beacon-chain/core/feed/state:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/light-client:go_default_library",
        "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/core/signing:go_default_library",
        "//beacon-chain/core/time:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
@@ -146,6 +148,7 @@ go_test(
        "//beacon-chain/core/feed/state:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/light-client:go_default_library",
        "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/core/signing:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
        "//beacon-chain/das:go_default_library",

@@ -1,7 +1,6 @@
package blockchain

import (
    "context"
    "testing"

    testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -25,7 +24,7 @@ func TestHeadSlot_DataRace(t *testing.T) {
    wait := make(chan struct{})
    go func() {
        defer close(wait)
        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
        require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
    }()
    s.HeadSlot()
    <-wait
@@ -43,10 +42,10 @@ func TestHeadRoot_DataRace(t *testing.T) {
    st, _ := util.DeterministicGenesisState(t, 1)
    go func() {
        defer close(wait)
        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
        require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))

    }()
    _, err = s.HeadRoot(context.Background())
    _, err = s.HeadRoot(t.Context())
    require.NoError(t, err)
    <-wait
}
@@ -65,10 +64,10 @@ func TestHeadBlock_DataRace(t *testing.T) {
    st, _ := util.DeterministicGenesisState(t, 1)
    go func() {
        defer close(wait)
        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
        require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))

    }()
    _, err = s.HeadBlock(context.Background())
    _, err = s.HeadBlock(t.Context())
    require.NoError(t, err)
    <-wait
}
@@ -83,14 +82,14 @@ func TestHeadState_DataRace(t *testing.T) {
    wait := make(chan struct{})
    st, _ := util.DeterministicGenesisState(t, 1)
    root := bytesutil.ToBytes32(bytesutil.PadTo([]byte{'s'}, 32))
    require.NoError(t, beaconDB.SaveGenesisBlockRoot(context.Background(), root))
    require.NoError(t, beaconDB.SaveState(context.Background(), st, root))
    require.NoError(t, beaconDB.SaveGenesisBlockRoot(t.Context(), root))
    require.NoError(t, beaconDB.SaveState(t.Context(), st, root))
    go func() {
        defer close(wait)
        require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
        require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))

    }()
    _, err = s.HeadState(context.Background())
    _, err = s.HeadState(t.Context())
    require.NoError(t, err)
    <-wait
}

@@ -84,7 +84,7 @@ func prepareForkchoiceState(
func TestHeadRoot_Nil(t *testing.T) {
    beaconDB := testDB.SetupDB(t)
    c := setupBeaconChain(t, beaconDB)
    headRoot, err := c.HeadRoot(context.Background())
    headRoot, err := c.HeadRoot(t.Context())
    require.NoError(t, err)
    assert.DeepEqual(t, params.BeaconConfig().ZeroHash[:], headRoot, "Incorrect pre chain start value")
}
@@ -137,7 +137,7 @@ func TestFinalizedBlockHash(t *testing.T) {
}

func TestUnrealizedJustifiedBlockHash(t *testing.T) {
    ctx := context.Background()
    ctx := t.Context()
    service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
    ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
    ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
@@ -203,7 +203,7 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
    c := &Service{}
    c.head = &head{block: wsb, state: s}

    received, err := c.HeadBlock(context.Background())
    received, err := c.HeadBlock(t.Context())
    require.NoError(t, err)
    pb, err := received.Proto()
    require.NoError(t, err)
@@ -215,7 +215,7 @@ func TestHeadState_CanRetrieve(t *testing.T) {
    require.NoError(t, err)
    c := &Service{}
    c.head = &head{state: s}
    headState, err := c.HeadState(context.Background())
    headState, err := c.HeadState(t.Context())
    require.NoError(t, err)
    assert.DeepEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Incorrect head state received")
}
@@ -277,7 +277,7 @@ func TestHeadETH1Data_CanRetrieve(t *testing.T) {
}

func TestIsCanonical_Ok(t *testing.T) {
    ctx := context.Background()
    ctx := t.Context()
    beaconDB := testDB.SetupDB(t)
    c := setupBeaconChain(t, beaconDB)

@@ -301,12 +301,12 @@ func TestService_HeadValidatorsIndices(t *testing.T) {
    c := &Service{}

    c.head = &head{}
    indices, err := c.HeadValidatorsIndices(context.Background(), 0)
    indices, err := c.HeadValidatorsIndices(t.Context(), 0)
    require.NoError(t, err)
    require.Equal(t, 0, len(indices))

    c.head = &head{state: s}
    indices, err = c.HeadValidatorsIndices(context.Background(), 0)
    indices, err = c.HeadValidatorsIndices(t.Context(), 0)
    require.NoError(t, err)
    require.Equal(t, 10, len(indices))
}
@@ -331,7 +331,7 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
// ---------- D

func TestService_ChainHeads(t *testing.T) {
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
    ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
    ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -399,7 +399,7 @@ func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
    c := &Service{}
    c.head = &head{state: s}

    p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
    p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
    require.NoError(t, err)

    v, err := s.ValidatorAtIndex(0)
@@ -412,12 +412,12 @@ func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
    c := &Service{}
    c.head = nil

    p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
    p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
    require.NoError(t, err)
    require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)

    c.head = &head{state: nil}
    p, err = c.HeadValidatorIndexToPublicKey(context.Background(), 0)
    p, err = c.HeadValidatorIndexToPublicKey(t.Context(), 0)
    require.NoError(t, err)
    require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
}
@@ -428,7 +428,7 @@ func TestService_IsOptimistic(t *testing.T) {
    cfg.BellatrixForkEpoch = 0
    params.OverrideBeaconConfig(cfg)

    ctx := context.Background()
    ctx := t.Context()
    ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
    ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
    c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
@@ -456,7 +456,7 @@ func TestService_IsOptimistic(t *testing.T) {
}

func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{genesisTime: time.Now()}
    opt, err := c.IsOptimistic(ctx)
    require.NoError(t, err)
@@ -464,7 +464,7 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
}

func TestService_IsOptimisticForRoot(t *testing.T) {
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
    ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
    ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -482,27 +482,27 @@ func TestService_IsOptimisticForRoot(t *testing.T) {

func TestService_IsOptimisticForRoot_DB(t *testing.T) {
    beaconDB := testDB.SetupDB(t)
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
    c.head = &head{root: params.BeaconConfig().ZeroHash}
    b := util.NewBeaconBlock()
    b.Block.Slot = 10
    br, err := b.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, b)
    require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
    util.SaveBlock(t, t.Context(), beaconDB, b)
    require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: br[:], Slot: 10}))

    optimisticBlock := util.NewBeaconBlock()
    optimisticBlock.Block.Slot = 97
    optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
    util.SaveBlock(t, t.Context(), beaconDB, optimisticBlock)

    validatedBlock := util.NewBeaconBlock()
    validatedBlock.Block.Slot = 9
    validatedRoot, err := validatedBlock.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
    util.SaveBlock(t, t.Context(), beaconDB, validatedBlock)

    validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
    require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
@@ -524,10 +524,10 @@ func TestService_IsOptimisticForRoot_DB(t *testing.T) {
    // Before the first finalized epoch, finalized root could be zeros.
    validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
    require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
    require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
    require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
    require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))

    require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
    require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
    optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
    require.NoError(t, err)
    require.Equal(t, true, optimistic)
@@ -535,37 +535,37 @@ func TestService_IsOptimisticForRoot_DB(t *testing.T) {

func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
    beaconDB := testDB.SetupDB(t)
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
    c.head = &head{root: params.BeaconConfig().ZeroHash}
    b := util.NewBeaconBlock()
    b.Block.Slot = 10
    br, err := b.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, b)
    require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
    util.SaveBlock(t, t.Context(), beaconDB, b)
    require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: br[:], Slot: 10}))

    optimisticBlock := util.NewBeaconBlock()
    optimisticBlock.Block.Slot = 97
    optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
    util.SaveBlock(t, t.Context(), beaconDB, optimisticBlock)

    validatedBlock := util.NewBeaconBlock()
    validatedBlock.Block.Slot = 9
    validatedRoot, err := validatedBlock.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
    util.SaveBlock(t, t.Context(), beaconDB, validatedBlock)

    validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
    require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))

    require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
    require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
    optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
    require.NoError(t, err)
    require.Equal(t, true, optimistic)

    require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
    require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
    validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
    require.NoError(t, err)
    require.Equal(t, true, validated)
@@ -574,14 +574,14 @@ func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {

func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
    beaconDB := testDB.SetupDB(t)
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
    c.head = &head{root: params.BeaconConfig().ZeroHash}
    b := util.NewBeaconBlock()
    b.Block.Slot = 10
    br, err := b.Block.HashTreeRoot()
    require.NoError(t, err)
    util.SaveBlock(t, context.Background(), beaconDB, b)
    util.SaveBlock(t, t.Context(), beaconDB, b)
    require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, [32]byte{}))
    _, err = c.IsOptimisticForRoot(ctx, br)
    assert.NoError(t, err)
@@ -594,7 +594,7 @@ func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {

func TestService_IsFinalized(t *testing.T) {
    beaconDB := testDB.SetupDB(t)
    ctx := context.Background()
    ctx := t.Context()
    c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}}
    r1 := [32]byte{'a'}
    require.NoError(t, c.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
@@ -616,7 +616,7 @@ func TestService_IsFinalized(t *testing.T) {

func Test_hashForGenesisRoot(t *testing.T) {
    beaconDB := testDB.SetupDB(t)
    ctx := context.Background()
    ctx := t.Context()
    c := setupBeaconChain(t, beaconDB)
    st, _ := util.DeterministicGenesisStateElectra(t, 10)
    require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))

@@ -40,10 +40,12 @@ var (
    errNotGenesisRoot = errors.New("root is not the genesis block root")
    // errBlacklistedRoot is returned when a block root is blacklisted as invalid.
    errBlacklistedRoot = verification.AsVerificationFailure(errors.New("block root is blacklisted"))
    // errMaxBlobsExceeded is returned when the number of blobs in a block exceeds the maximum allowed.
    errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
    // errMaxDataColumnsExceeded is returned when the number of data columns exceeds the maximum allowed.
    errMaxDataColumnsExceeded = verification.AsVerificationFailure(errors.New("expected data columns for node exceeds NUMBER_OF_COLUMNS"))
)

var errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
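The rewritten block above groups the sentinel errors and documents each one. Assuming `verification.AsVerificationFailure` wraps rather than replaces the underlying error, callers can still match the sentinel with `errors.Is`, the same way standard-library wrapping behaves:

```go
package main

import (
	"errors"
	"fmt"
)

var errMaxBlobsExceeded = errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")

// markVerificationFailure stands in for verification.AsVerificationFailure;
// its wrapping behavior here is an assumption, not Prysm's actual code.
func markVerificationFailure(err error) error {
	return fmt.Errorf("verification failure: %w", err)
}

func main() {
	err := markVerificationFailure(errMaxBlobsExceeded)
	fmt.Println(errors.Is(err, errMaxBlobsExceeded)) // true: the sentinel survives wrapping
}
```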
|
||||
|
||||
// An invalid block is the block that fails state transition based on the core protocol rules.
|
||||
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.
|
||||
// Some examples of invalid blocks are:
|
||||
|
||||
@@ -439,6 +439,9 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
|
||||
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
|
||||
log.WithError(err).Debug("Could not remove blob from blob storage")
|
||||
}
|
||||
if err := s.dataColumnStorage.Remove(root); err != nil {
|
||||
log.WithError(err).Errorf("Could not remove data columns from data column storage for root %#x", root)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -36,29 +35,29 @@ func TestService_isNewHead(t *testing.T) {
|
||||
func TestService_getHeadStateAndBlock(t *testing.T) {
|
||||
beaconDB := testDB.SetupDB(t)
|
||||
service := setupBeaconChain(t, beaconDB)
|
||||
_, _, err := service.getStateAndBlock(context.Background(), [32]byte{})
|
||||
_, _, err := service.getStateAndBlock(t.Context(), [32]byte{})
|
||||
require.ErrorContains(t, "block does not exist", err)
|
||||
|
||||
blk, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlock(ðpb.SignedBeaconBlock{Signature: []byte{1}}))
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), blk))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveBlock(t.Context(), blk))
|
||||
|
||||
st, _ := util.DeterministicGenesisState(t, 1)
|
||||
r, err := blk.Block().HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), st, r))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), st, r))
|
||||
|
||||
gotState, err := service.cfg.BeaconDB.State(context.Background(), r)
|
||||
gotState, err := service.cfg.BeaconDB.State(t.Context(), r)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, st.ToProto(), gotState.ToProto())
|
||||
|
||||
gotBlk, err := service.cfg.BeaconDB.Block(context.Background(), r)
|
||||
gotBlk, err := service.cfg.BeaconDB.Block(t.Context(), r)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, blk, gotBlk)
|
||||
}
|
||||
|
||||
func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
ctx := t.Context()
|
||||
opts := testServiceOptsWithDB(t)
|
||||
|
||||
service, err := NewService(ctx, opts...)
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
package blockchain
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
|
||||
@@ -21,18 +20,18 @@ func TestService_HeadSyncCommitteeIndices(t *testing.T) {
|
||||
|
||||
// Current period
|
||||
slot := 2*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
|
||||
a, err := c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
|
||||
a, err := c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
|
||||
require.NoError(t, err)
|
||||
|
||||
// Current period where slot-2 across EPOCHS_PER_SYNC_COMMITTEE_PERIOD
|
||||
slot = 3*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) - 2
|
||||
b, err := c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
|
||||
b, err := c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, a, b)
|
||||
|
||||
// Next period where slot-1 across EPOCHS_PER_SYNC_COMMITTEE_PERIOD
|
||||
slot = 3*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) - 1
|
||||
b, err = c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
|
||||
b, err = c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
|
||||
require.NoError(t, err)
|
||||
require.DeepNotEqual(t, a, b)
|
||||
}
|
||||
@@ -44,7 +43,7 @@ func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
|
||||
|
||||
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
|
||||
slot := uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
|
||||
indices, err := c.headCurrentSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
|
||||
indices, err := c.headCurrentSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
|
||||
require.NoError(t, err)
|
||||
|
||||
// NextSyncCommittee becomes CurrentSyncCommittee so it should be empty by default.
|
||||
@@ -58,7 +57,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
|
||||
|
||||
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
|
||||
slot := uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
|
||||
indices, err := c.headNextSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
|
||||
indices, err := c.headNextSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
|
||||
require.NoError(t, err)
|
||||
|
||||
// NextSyncCommittee should be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
|
||||
@@ -72,7 +71,7 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
|
||||
|
||||
// Process slot up to 2 * `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates` twice.
|
||||
slot := uint64(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
|
||||
pubkeys, err := c.HeadSyncCommitteePubKeys(context.Background(), primitives.Slot(slot), 0)
|
||||
pubkeys, err := c.HeadSyncCommitteePubKeys(t.Context(), primitives.Slot(slot), 0)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Any subcommittee should match the subcommittee size.
|
||||
@@ -88,7 +87,7 @@ func TestService_HeadSyncCommitteeDomain(t *testing.T) {
|
||||
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())
|
||||
require.NoError(t, err)
|
||||
|
||||
d, err := c.HeadSyncCommitteeDomain(context.Background(), 0)
|
||||
d, err := c.HeadSyncCommitteeDomain(t.Context(), 0)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.DeepEqual(t, wanted, d)
|
||||
@@ -102,7 +101,7 @@ func TestService_HeadSyncContributionProofDomain(t *testing.T) {
|
||||
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainContributionAndProof, s.GenesisValidatorsRoot())
|
||||
require.NoError(t, err)
|
||||
|
||||
d, err := c.HeadSyncContributionProofDomain(context.Background(), 0)
|
||||
d, err := c.HeadSyncContributionProofDomain(t.Context(), 0)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.DeepEqual(t, wanted, d)
|
||||
@@ -116,7 +115,7 @@ func TestService_HeadSyncSelectionProofDomain(t *testing.T) {
|
||||
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommitteeSelectionProof, s.GenesisValidatorsRoot())
|
||||
require.NoError(t, err)
|
||||
|
||||
d, err := c.HeadSyncSelectionProofDomain(context.Background(), 0)
|
||||
d, err := c.HeadSyncSelectionProofDomain(t.Context(), 0)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.DeepEqual(t, wanted, d)
|
||||
|
||||
@@ -34,17 +34,17 @@ func TestSaveHead_Same(t *testing.T) {
|
||||
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
|
||||
require.NoError(t, err)
|
||||
st, _ := util.DeterministicGenesisState(t, 1)
|
||||
require.NoError(t, service.saveHead(context.Background(), r, b, st))
|
||||
require.NoError(t, service.saveHead(t.Context(), r, b, st))
|
||||
assert.Equal(t, primitives.Slot(0), service.headSlot(), "Head did not stay the same")
|
||||
assert.Equal(t, r, service.headRoot(), "Head did not stay the same")
|
||||
}
|
||||
|
||||
func TestSaveHead_Different(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
ctx := t.Context()
|
||||
beaconDB := testDB.SetupDB(t)
|
||||
service := setupBeaconChain(t, beaconDB)
|
||||
|
||||
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
|
||||
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
|
||||
oldRoot, err := oldBlock.Block().HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
ojc := ðpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
|
||||
@@ -61,7 +61,7 @@ func TestSaveHead_Different(t *testing.T) {
|
||||
newHeadSignedBlock.Block.Slot = 1
|
||||
newHeadBlock := newHeadSignedBlock.Block
|
||||
|
||||
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
|
||||
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
|
||||
newRoot, err := newHeadBlock.HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
|
||||
@@ -74,13 +74,13 @@ func TestSaveHead_Different(t *testing.T) {
|
||||
headState, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, headState.SetSlot(1))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), ðpb.StateSummary{Slot: 1, Root: newRoot[:]}))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
|
||||
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), ðpb.StateSummary{Slot: 1, Root: newRoot[:]}))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
|
||||
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
|
||||
|
||||
assert.Equal(t, primitives.Slot(1), service.HeadSlot(), "Head did not change")
|
||||
|
||||
cachedRoot, err := service.HeadRoot(context.Background())
|
||||
cachedRoot, err := service.HeadRoot(t.Context())
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, cachedRoot, newRoot[:], "Head did not change")
|
||||
headBlock, err := service.headBlock()
|
||||
@@ -92,12 +92,12 @@ func TestSaveHead_Different(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestSaveHead_Different_Reorg(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
ctx := t.Context()
|
||||
hook := logTest.NewGlobal()
|
||||
beaconDB := testDB.SetupDB(t)
|
||||
service := setupBeaconChain(t, beaconDB)
|
||||
|
||||
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
|
||||
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
|
||||
oldRoot, err := oldBlock.Block().HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
ojc := ðpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
|
||||
@@ -120,7 +120,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
|
||||
newHeadSignedBlock.Block.ParentRoot = reorgChainParent[:]
|
||||
newHeadBlock := newHeadSignedBlock.Block
|
||||
|
||||
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
|
||||
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
|
||||
newRoot, err := newHeadBlock.HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
|
||||
@@ -129,13 +129,13 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
|
||||
headState, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, headState.SetSlot(1))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), ðpb.StateSummary{Slot: 1, Root: newRoot[:]}))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
|
||||
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), ðpb.StateSummary{Slot: 1, Root: newRoot[:]}))
|
||||
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
|
||||
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
|
||||
|
||||
assert.Equal(t, primitives.Slot(1), service.HeadSlot(), "Head did not change")
|
||||
|
||||
cachedRoot, err := service.HeadRoot(context.Background())
|
||||
cachedRoot, err := service.HeadRoot(t.Context())
|
||||
require.NoError(t, err)
|
||||
if !bytes.Equal(cachedRoot, newRoot[:]) {
|
||||
t.Error("Head did not change")
|
||||
@@ -162,12 +162,12 @@ func Test_notifyNewHeadEvent(t *testing.T) {
|
||||
},
|
||||
originBlockRoot: [32]byte{1},
|
||||
}
|
||||
st, blk, err := prepareForkchoiceState(context.Background(), 0, [32]byte{}, [32]byte{}, [32]byte{}, ðpb.Checkpoint{}, ðpb.Checkpoint{})
|
||||
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, ðpb.Checkpoint{}, ðpb.Checkpoint{})
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(context.Background(), st, blk))
|
||||
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
|
||||
newHeadStateRoot := [32]byte{2}
|
||||
newHeadRoot := [32]byte{3}
|
||||
require.NoError(t, srv.notifyNewHeadEvent(context.Background(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
|
||||
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
|
||||
events := notifier.ReceivedEvents()
|
||||
require.Equal(t, 1, len(events))
|
||||
|
||||
@@ -194,9 +194,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
|
||||
},
|
||||
originBlockRoot: genesisRoot,
|
||||
}
|
||||
st, blk, err := prepareForkchoiceState(context.Background(), 0, [32]byte{}, [32]byte{}, [32]byte{}, ðpb.Checkpoint{}, ðpb.Checkpoint{})
|
||||
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, ðpb.Checkpoint{}, ðpb.Checkpoint{})
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(context.Background(), st, blk))
|
||||
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
|
||||
epoch1Start, err := slots.EpochStart(1)
|
||||
require.NoError(t, err)
|
||||
epoch2Start, err := slots.EpochStart(1)
|
||||
@@ -205,7 +205,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
|
||||
|
||||
newHeadStateRoot := [32]byte{2}
|
||||
newHeadRoot := [32]byte{3}
|
||||
err = srv.notifyNewHeadEvent(context.Background(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
|
||||
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
|
||||
require.NoError(t, err)
|
||||
events := notifier.ReceivedEvents()
|
||||
require.Equal(t, 1, len(events))
|
||||
@@ -225,11 +225,11 @@ func Test_notifyNewHeadEvent(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestRetrieveHead_ReadOnly(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
ctx := t.Context()
|
||||
beaconDB := testDB.SetupDB(t)
|
||||
service := setupBeaconChain(t, beaconDB)
|
||||
|
||||
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
|
||||
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
|
||||
oldRoot, err := oldBlock.Block().HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
service.head = &head{
|
||||
@@ -243,7 +243,7 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
|
||||
ojc := ðpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
|
||||
 ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}

-    wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
+    wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
     newRoot, err := newHeadBlock.HashTreeRoot()
     require.NoError(t, err)
     state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
@@ -256,9 +256,9 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
     headState, err := util.NewBeaconState()
     require.NoError(t, err)
     require.NoError(t, headState.SetSlot(1))
-    require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
-    require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
-    require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
+    require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
+    require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
+    require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))

     rOnlyState, err := service.HeadStateReadOnly(ctx)
     require.NoError(t, err)
@@ -267,7 +267,7 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
 }

 func TestSaveOrphanedAtts(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -333,7 +333,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
 }

 func TestSaveOrphanedAttsElectra(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -404,7 +404,7 @@ func TestSaveOrphanedOps(t *testing.T) {
     config.ShardCommitteePeriod = 0
     params.OverrideBeaconConfig(config)

-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -481,7 +481,7 @@ func TestSaveOrphanedOps(t *testing.T) {
 }

 func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     service.cfg.BLSToExecPool = blstoexec.NewPool()
@@ -539,7 +539,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
 }

 func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -604,7 +604,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
 }

 func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+2)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
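The recurring substitution in the test hunks above is Go 1.24's testing.T.Context(), which replaces context.Background(): it returns a per-test context that the framework cancels automatically just before the test ends, so anything bound to it cannot outlive the test. A minimal sketch of the pattern, independent of Prysm:

    package example

    import "testing"

    func TestUsesTestContext(t *testing.T) {
        ctx := t.Context() // canceled automatically right before Cleanup callbacks run

        select {
        case <-ctx.Done():
            t.Fatal("context must be live while the test body runs")
        default:
            // Pass ctx to DB calls, services, etc., instead of context.Background().
        }
    }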
@@ -1,7 +1,6 @@
 package blockchain

 import (
-    "context"
     "testing"

     testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -11,7 +10,7 @@ import (
 )

 func TestService_getBlock(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     s := setupBeaconChain(t, beaconDB)
     b1 := util.NewBeaconBlock()
@@ -42,7 +41,7 @@ func TestService_getBlock(t *testing.T) {
 }

 func TestService_hasBlockInInitSyncOrDB(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     s := setupBeaconChain(t, beaconDB)
     b1 := util.NewBeaconBlock()
@@ -29,9 +29,8 @@ go_test(
     embed = [":go_default_library"],
     deps = [
         "//consensus-types/blocks:go_default_library",
+        "//crypto/random:go_default_library",
         "//testing/require:go_default_library",
-        "@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
         "@com_github_crate_crypto_go_kzg_4844//:go_default_library",
-        "@com_github_sirupsen_logrus//:go_default_library",
     ],
 )

@@ -1,16 +1,12 @@
 package kzg

 import (
-    "bytes"
-    "crypto/sha256"
-    "encoding/binary"
     "testing"

     "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
+    "github.com/OffchainLabs/prysm/v6/crypto/random"
     "github.com/OffchainLabs/prysm/v6/testing/require"
-    "github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
     GoKZG "github.com/crate-crypto/go-kzg-4844"
-    "github.com/sirupsen/logrus"
 )

 func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
@@ -41,7 +37,7 @@ func TestBytesToAny(t *testing.T) {
 }

 func TestGenerateCommitmentAndProof(t *testing.T) {
-    blob := getRandBlob(123)
+    blob := random.GetRandBlob(123)
     commitment, proof, err := GenerateCommitmentAndProof(blob)
     require.NoError(t, err)
     expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
@@ -49,36 +45,3 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
     require.Equal(t, expectedCommitment, commitment)
     require.Equal(t, expectedProof, proof)
 }
-
-func deterministicRandomness(seed int64) [32]byte {
-    // Converts an int64 to a byte slice
-    buf := new(bytes.Buffer)
-    err := binary.Write(buf, binary.BigEndian, seed)
-    if err != nil {
-        logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
-        return [32]byte{}
-    }
-    bytes := buf.Bytes()
-
-    return sha256.Sum256(bytes)
-}
-
-// Returns a serialized random field element in big-endian
-func getRandFieldElement(seed int64) [32]byte {
-    bytes := deterministicRandomness(seed)
-    var r fr.Element
-    r.SetBytes(bytes[:])
-
-    return GoKZG.SerializeScalar(r)
-}
-
-// Returns a random blob using the passed seed as entropy
-func getRandBlob(seed int64) GoKZG.Blob {
-    var blob GoKZG.Blob
-    bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
-    for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
-        fieldElementBytes := getRandFieldElement(seed + int64(i))
-        copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
-    }
-    return blob
-}
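The three helpers deleted above were not dropped outright: the call site now uses random.GetRandBlob, and the BUILD hunk adds //crypto/random as a dependency, so the deterministic blob generation evidently moved into a shared crypto/random package. Assuming that package exports GetRandBlob with the same seed semantics as the deleted getRandBlob, the updated usage is simply:

    // Same seed, same blob on every run, now via the shared crypto/random package.
    blob := random.GetRandBlob(123)
    commitment, proof, err := GenerateCommitmentAndProof(blob)
    require.NoError(t, err)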
@@ -26,9 +26,6 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
     if len(b.Body().Attestations()) > 0 {
         log = log.WithField("attestations", len(b.Body().Attestations()))
     }
-    if len(b.Body().Deposits()) > 0 {
-        log = log.WithField("deposits", len(b.Body().Deposits()))
-    }
     if len(b.Body().AttesterSlashings()) > 0 {
         log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
     }
@@ -111,7 +108,6 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
         "sinceSlotStartTime":         prysmTime.Now().Sub(startTime),
         "chainServiceProcessedTime":  prysmTime.Now().Sub(receivedTime) - daWaitedTime,
         "dataAvailabilityWaitedTime": daWaitedTime,
-        "deposits":                   len(block.Body().Deposits()),
     }
     log.WithFields(lf).Debug("Synced new block")
 } else {
@@ -159,7 +155,9 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
     if err != nil {
         return errors.Wrap(err, "could not get BLSToExecutionChanges")
     }
-    fields["blsToExecutionChanges"] = len(changes)
+    if len(changes) > 0 {
+        fields["blsToExecutionChanges"] = len(changes)
+    }
 }
 log.WithFields(fields).Debug("Synced new payload")
 return nil
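The logPayload hunk replaces an unconditional field assignment with a guarded one, matching how logStateTransitionData only attaches fields for non-empty slices; zero-valued fields are omitted so log lines carry only information. A generic sketch of the pattern (field names are illustrative, not Prysm's):

    package main

    import "github.com/sirupsen/logrus"

    func logBlockSummary(attestations, exits int) {
        log := logrus.WithField("prefix", "blockchain")
        // Attach a field only when it is non-zero; absent fields keep the line short.
        if attestations > 0 {
            log = log.WithField("attestations", attestations)
        }
        if exits > 0 {
            log = log.WithField("voluntaryExits", exits)
        }
        log.Debug("Finished applying state transition")
    }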
@@ -53,7 +53,7 @@ func Test_logStateTransitionData(t *testing.T) {
     require.NoError(t, err)
     return wb
 },
-    want: "\"Finished applying state transition\" attestations=1 deposits=1 prefix=blockchain slot=0",
+    want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
 },
 {name: "has attester slashing",
     b: func() interfaces.ReadOnlyBeaconBlock {
@@ -93,7 +93,7 @@ func Test_logStateTransitionData(t *testing.T) {
     require.NoError(t, err)
     return wb
 },
-    want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 deposits=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
+    want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
 },
 {name: "has payload",
     b: func() interfaces.ReadOnlyBeaconBlock { return wrappedPayloadBlk },
@@ -1,7 +1,6 @@
 package blockchain

 import (
-    "context"
     "testing"

     eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -15,7 +14,7 @@ func TestReportEpochMetrics_BadHeadState(t *testing.T) {
     h, err := util.NewBeaconState()
     require.NoError(t, err)
     require.NoError(t, h.SetValidators(nil))
-    err = reportEpochMetrics(context.Background(), s, h)
+    err = reportEpochMetrics(t.Context(), s, h)
     require.ErrorContains(t, "failed to initialize precompute: state has nil validator slice", err)
 }

@@ -25,7 +24,7 @@ func TestReportEpochMetrics_BadAttestation(t *testing.T) {
     h, err := util.NewBeaconState()
     require.NoError(t, err)
     require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 0}))
-    err = reportEpochMetrics(context.Background(), s, h)
+    err = reportEpochMetrics(t.Context(), s, h)
     require.ErrorContains(t, "attestation with inclusion delay of 0", err)
 }

@@ -36,6 +35,6 @@ func TestReportEpochMetrics_SlashedValidatorOutOfBound(t *testing.T) {
     v.Slashed = true
     require.NoError(t, h.UpdateValidatorAtIndex(0, v))
     require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.HydrateAttestationData(&eth.AttestationData{})}))
-    err = reportEpochMetrics(context.Background(), h, h)
+    err = reportEpochMetrics(t.Context(), h, h)
     require.ErrorContains(t, "slot 0 out of bounds", err)
 }
@@ -1,10 +1,13 @@
 package blockchain

 import (
+    "time"
+
     "github.com/OffchainLabs/prysm/v6/async/event"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
     statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
     lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/db"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -127,9 +130,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
 }

 // WithP2PBroadcaster to broadcast messages after appropriate processing.
-func WithP2PBroadcaster(p p2p.Broadcaster) Option {
+func WithP2PBroadcaster(p p2p.Accessor) Option {
     return func(s *Service) error {
-        s.cfg.P2p = p
+        s.cfg.P2P = p
         return nil
     }
 }
@@ -208,6 +211,15 @@ func WithBlobStorage(b *filesystem.BlobStorage) Option {
     }
 }

+// WithDataColumnStorage sets the data column storage backend for the blockchain service.
+func WithDataColumnStorage(b *filesystem.DataColumnStorage) Option {
+    return func(s *Service) error {
+        s.dataColumnStorage = b
+        return nil
+    }
+}
+
 // WithSyncChecker sets the sync checker for the blockchain service.
 func WithSyncChecker(checker Checker) Option {
     return func(s *Service) error {
         s.cfg.SyncChecker = checker
@@ -215,6 +227,15 @@ func WithSyncChecker(checker Checker) Option {
     }
 }

+// WithCustodyInfo sets the custody info for the blockchain service.
+func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
+    return func(s *Service) error {
+        s.cfg.CustodyInfo = custodyInfo
+        return nil
+    }
+}
+
 // WithSlasherEnabled sets whether the slasher is enabled or not.
 func WithSlasherEnabled(enabled bool) Option {
     return func(s *Service) error {
         s.slasherEnabled = enabled
@@ -222,9 +243,27 @@ func WithSlasherEnabled(enabled bool) Option {
     }
 }

+// WithGenesisTime sets the genesis time for the blockchain service.
+func WithGenesisTime(genesisTime time.Time) Option {
+    return func(s *Service) error {
+        s.genesisTime = genesisTime
+        return nil
+    }
+}
+
+// WithLightClientStore sets the light client store for the blockchain service.
+func WithLightClientStore(lcs *lightclient.Store) Option {
+    return func(s *Service) error {
+        s.lcStore = lcs
+        return nil
+    }
+}
+
 // WithStartWaitingDataColumnSidecars sets a channel that the `areDataColumnsAvailable` function will fill
 // in when starting to wait for additional data columns.
 func WithStartWaitingDataColumnSidecars(c chan bool) Option {
     return func(s *Service) error {
         s.startWaitingDataColumnSidecars = c
         return nil
     }
 }
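options.go gains several new functional options here (data column storage, custody info, genesis time, light client store). Each With* constructor returns an Option closure that mutates the Service during construction, which is why the tests elsewhere in this diff can write NewService(ctx, opts...) with any combination. A self-contained sketch of the idiom, with hypothetical fields rather than Prysm's:

    package main

    import "time"

    type Service struct {
        genesisTime    time.Time
        slasherEnabled bool
    }

    // Option mutates a Service during construction; a non-nil error aborts NewService.
    type Option func(*Service) error

    func WithGenesisTime(genesis time.Time) Option {
        return func(s *Service) error {
            s.genesisTime = genesis
            return nil
        }
    }

    func WithSlasherEnabled(enabled bool) Option {
        return func(s *Service) error {
            s.slasherEnabled = enabled
            return nil
        }
    }

    func NewService(opts ...Option) (*Service, error) {
        s := &Service{}
        for _, opt := range opts {
            if err := opt(s); err != nil {
                return nil, err
            }
        }
        return s, nil
    }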
@@ -5,6 +5,7 @@ import (
     "time"

     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
+    "github.com/OffchainLabs/prysm/v6/config/params"
     "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
     "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
     ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -88,7 +89,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
     if err != nil {
         return err
     }
-    if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
+    if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
         return err
     }
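The OnAttestation change makes IsValidAttestationIndices take its bounds as parameters instead of deriving them internally, presumably because post-Electra aggregates can span every committee in a slot, so the cap on attesting indices is MaxValidatorsPerCommittee * MaxCommitteesPerSlot rather than a single committee's size. A hedged sketch of such a parameterized bound check (hypothetical helper, not Prysm's implementation):

    package main

    import (
        "errors"
        "fmt"
    )

    // validIndicesCount checks an attestation's indices against caps supplied by the
    // caller, so the helper itself stays free of global configuration.
    func validIndicesCount(indices []uint64, maxPerCommittee, maxCommitteesPerSlot uint64) error {
        // An aggregate may cover all committees in the slot post-Electra.
        bound := maxPerCommittee * maxCommitteesPerSlot
        if uint64(len(indices)) > bound {
            return errors.New("validator indices count exceeds bound")
        }
        return nil
    }

    func main() {
        fmt.Println(validIndicesCount(make([]uint64, 5), 2048, 64)) // <nil>
    }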
@@ -161,7 +161,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {

 func TestService_GetRecentPreState(t *testing.T) {
     service, _ := minimalTestService(t)
-    ctx := context.Background()
+    ctx := t.Context()

     s, err := util.NewBeaconState()
     require.NoError(t, err)
@@ -183,7 +183,7 @@ func TestService_GetRecentPreState(t *testing.T) {

 func TestService_GetAttPreState_Concurrency(t *testing.T) {
     service, _ := minimalTestService(t)
-    ctx := context.Background()
+    ctx := t.Context()

     s, err := util.NewBeaconState()
     require.NoError(t, err)
@@ -353,21 +353,21 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
 }

 func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()

     nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
     require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
 }

 func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()

     nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
     require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
 }

 func TestAttEpoch_NotMatch(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()

     nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
     err := verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
@@ -375,7 +375,7 @@ func TestAttEpoch_NotMatch(t *testing.T) {
 }

 func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     opts := testServiceOptsWithDB(t)
     service, err := NewService(ctx, opts...)
     require.NoError(t, err)
@@ -385,7 +385,7 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
 }

 func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()

     opts := testServiceOptsWithDB(t)
     service, err := NewService(ctx, opts...)
@@ -402,7 +402,7 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
 }

 func TestVerifyBeaconBlock_OK(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()

     opts := testServiceOptsWithDB(t)
     service, err := NewService(ctx, opts...)
@@ -3,10 +3,12 @@ package blockchain
 import (
     "context"
     "fmt"
+    "slices"
     "time"

     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
+    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
     coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/das"
@@ -70,8 +72,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
     }
     if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
         defer s.processLightClientUpdates(cfg)
-        defer s.saveLightClientUpdate(cfg)
-        defer s.saveLightClientBootstrap(cfg)
     }
     defer s.sendStateFeedOnBlock(cfg)
     defer reportProcessingTime(startTime)
@@ -239,8 +239,9 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
             return err
         }
     }

     if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
-        return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
+        return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
     }
     args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
         JustifiedCheckpoint: jCheckpoints[i],
@@ -578,12 +579,12 @@ func (s *Service) runLateBlockTasks() {
     }
 }

-// missingIndices uses the expected commitments from the block to determine
+// missingBlobIndices uses the expected commitments from the block to determine
 // which BlobSidecar indices would need to be in the database for DA success.
 // It returns a map where each key represents a missing BlobSidecar index.
 // An empty map means we have all indices; a non-empty map can be used to compare incoming
 // BlobSidecars against the set of known missing sidecars.
-func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte, slot primitives.Slot) (map[uint64]struct{}, error) {
+func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
     maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
     if len(expected) == 0 {
         return nil, nil
@@ -592,29 +593,235 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
         return nil, errMaxBlobsExceeded
     }
     indices := bs.Summary(root)
-    missing := make(map[uint64]struct{}, len(expected))
+    missing := make(map[uint64]bool, len(expected))
     for i := range expected {
         if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
-            missing[uint64(i)] = struct{}{}
+            missing[uint64(i)] = true
         }
     }
     return missing, nil
 }
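missingBlobIndices also changes its result type from map[uint64]struct{} to map[uint64]bool. Both are idiomatic Go sets; the bool form spends one byte per entry but makes a bare lookup the membership test, which is what the updated assertions later in this diff rely on. A small self-contained comparison:

    package main

    import "fmt"

    func main() {
        // Set as map[uint64]struct{}: zero-byte values, membership needs the comma-ok form.
        structSet := map[uint64]struct{}{1: {}, 2: {}}
        _, ok := structSet[1]
        fmt.Println(ok) // true

        // Set as map[uint64]bool: a plain lookup is the membership test;
        // absent keys read as false, so missing[key] can be asserted directly.
        boolSet := map[uint64]bool{1: true, 2: true}
        fmt.Println(boolSet[3]) // false
    }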
-// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
-// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
-// The function will first check the database to see if all sidecars have been persisted. If any
-// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
-// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
-func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
-    if signed.Version() < version.Deneb {
-        return nil
+// missingDataColumnIndices uses the expected data columns from the block to determine
+// which DataColumnSidecar indices would need to be in the database for DA success.
+// It returns a map where each key represents a missing DataColumnSidecar index.
+// An empty map means we have all indices; a non-empty map can be used to compare incoming
+// DataColumns against the set of known missing sidecars.
+func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
+    if len(expected) == 0 {
+        return nil, nil
     }

-    block := signed.Block()
+    numberOfColumns := params.BeaconConfig().NumberOfColumns

+    if uint64(len(expected)) > numberOfColumns {
+        return nil, errMaxDataColumnsExceeded
+    }
+
+    // Get a summary of the data columns stored in the database.
+    summary := bs.Summary(root)
+
+    // Check all expected data columns against the summary.
+    missing := make(map[uint64]bool)
+    for column := range expected {
+        if !summary.HasIndex(column) {
+            missing[column] = true
+        }
+    }
+
+    return missing, nil
+}
+
+// isDataAvailable blocks until all sidecars committed to in the block are available,
+// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
+// The function will first check the database to see if all sidecars have been persisted. If any
+// sidecars are missing, it will then read from the sidecar notifier channel for the given root until the channel is
+// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
+func (s *Service) isDataAvailable(
+    ctx context.Context,
+    root [fieldparams.RootLength]byte,
+    signedBlock interfaces.ReadOnlySignedBeaconBlock,
+) error {
+    block := signedBlock.Block()
+    if block == nil {
+        return errors.New("invalid nil beacon block")
+    }
+
+    blockVersion := block.Version()
+    if blockVersion >= version.Fulu {
+        return s.areDataColumnsAvailable(ctx, root, block)
+    }
+
+    if blockVersion >= version.Deneb {
+        return s.areBlobsAvailable(ctx, root, block)
+    }
+
+    return nil
+}
+
+// areDataColumnsAvailable blocks until all data columns committed to in the block are available,
+// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
+func (s *Service) areDataColumnsAvailable(
+    ctx context.Context,
+    root [fieldparams.RootLength]byte,
+    block interfaces.ReadOnlyBeaconBlock,
+) error {
+    // We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
+    blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
+    blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
+
+    if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
+        return nil
+    }
+
+    body := block.Body()
+    if body == nil {
+        return errors.New("invalid nil beacon block body")
+    }
+
+    kzgCommitments, err := body.BlobKzgCommitments()
+    if err != nil {
+        return errors.Wrap(err, "blob KZG commitments")
+    }
+
+    // If the block has no commitments, there is nothing to wait for.
+    if len(kzgCommitments) == 0 {
+        return nil
+    }
+
+    // All columns to sample need to be available for the block to be considered available.
+    // https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
+    nodeID := s.cfg.P2P.NodeID()
+
+    // Prevent the custody group count from changing during the rest of the function.
+    s.cfg.CustodyInfo.Mut.RLock()
+    defer s.cfg.CustodyInfo.Mut.RUnlock()
+
+    // Get the custody group sampling size for the node.
+    custodyGroupSamplingSize := s.cfg.CustodyInfo.CustodyGroupSamplingSize(peerdas.Actual)
+    peerInfo, _, err := peerdas.Info(nodeID, custodyGroupSamplingSize)
+    if err != nil {
+        return errors.Wrap(err, "peer info")
+    }
+
+    // Subscribe to data columns newly stored in the database.
+    subscription, identsChan := s.dataColumnStorage.Subscribe()
+    defer subscription.Unsubscribe()
+
+    // Get the count of data columns we already have in the store.
+    summary := s.dataColumnStorage.Summary(root)
+    storedDataColumnsCount := summary.Count()
+
+    minimumColumnCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
+
+    // As soon as we have enough data column sidecars, we can reconstruct the missing ones.
+    // We don't need to wait for the rest of the data columns to declare the block as available.
+    if storedDataColumnsCount >= minimumColumnCountToReconstruct {
+        return nil
+    }
+
+    // Get a map of data column indices that are not currently available.
+    missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
+    if err != nil {
+        return errors.Wrap(err, "missing data columns")
+    }
+
+    // If there are no missing indices, all data column sidecars are available.
+    // This is the happy path.
+    if len(missingMap) == 0 {
+        return nil
+    }
+
+    if s.startWaitingDataColumnSidecars != nil {
+        s.startWaitingDataColumnSidecars <- true
+    }
+
+    // Log for DA checks that cross over into the next slot; helpful for debugging.
+    nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
+
+    // Avoid logging if DA check is called after next slot start.
+    if nextSlot.After(time.Now()) {
+        timer := time.AfterFunc(time.Until(nextSlot), func() {
+            missingMapCount := uint64(len(missingMap))
+
+            if missingMapCount == 0 {
+                return
+            }
+
+            var (
+                expected interface{} = "all"
+                missing  interface{} = "all"
+            )
+
+            numberOfColumns := params.BeaconConfig().NumberOfColumns
+            colMapCount := uint64(len(peerInfo.CustodyColumns))
+
+            if colMapCount < numberOfColumns {
+                expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
+            }
+
+            if missingMapCount < numberOfColumns {
+                missing = uint64MapToSortedSlice(missingMap)
+            }
+
+            log.WithFields(logrus.Fields{
+                "slot":            block.Slot(),
+                "root":            fmt.Sprintf("%#x", root),
+                "columnsExpected": expected,
+                "columnsWaiting":  missing,
+            }).Warning("Data columns still missing at slot end")
+        })
+        defer timer.Stop()
+    }
+
+    for {
+        select {
+        case idents := <-identsChan:
+            if idents.Root != root {
+                // This is not the root we are looking for.
+                continue
+            }
+
+            for _, index := range idents.Indices {
+                // This is a data column we are expecting.
+                if _, ok := missingMap[index]; ok {
+                    storedDataColumnsCount++
+                }
+
+                // As soon as we have more than half of the data columns, we can reconstruct the missing ones.
+                // We don't need to wait for the rest of the data columns to declare the block as available.
+                if storedDataColumnsCount >= minimumColumnCountToReconstruct {
+                    return nil
+                }
+
+                // Remove the index from the missing map.
+                delete(missingMap, index)
+
+                // Return if there are no more missing data columns.
+                if len(missingMap) == 0 {
+                    return nil
+                }
+            }
+
+        case <-ctx.Done():
+            var missingIndices interface{} = "all"
+            numberOfColumns := params.BeaconConfig().NumberOfColumns
+            missingIndicesCount := uint64(len(missingMap))
+
+            if missingIndicesCount < numberOfColumns {
+                missingIndices = uint64MapToSortedSlice(missingMap)
+            }
+
+            return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
+        }
+    }
+}
+
+// areBlobsAvailable blocks until all BlobSidecars committed to in the block are available,
+// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
+func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootLength]byte, block interfaces.ReadOnlyBeaconBlock) error {
+    blockSlot := block.Slot()

     // We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
     if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
         return nil
@@ -634,9 +841,9 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
         return nil
     }
     // get a map of BlobSidecar indices that are not currently available.
-    missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
+    missing, err := missingBlobIndices(s.blobStorage, root, kzgCommitments, block.Slot())
     if err != nil {
-        return err
+        return errors.Wrap(err, "missing indices")
     }
     // If there are no missing indices, all BlobSidecars are available.
     if len(missing) == 0 {
@@ -648,15 +855,20 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
     nc := s.blobNotifiers.forRoot(root, block.Slot())

     // Log for DA checks that cross over into the next slot; helpful for debugging.
-    nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
+    nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
     // Avoid logging if DA check is called after next slot start.
     if nextSlot.After(time.Now()) {
         nst := time.AfterFunc(time.Until(nextSlot), func() {
             if len(missing) == 0 {
                 return
             }
-            log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
-                Error("Still waiting for DA check at slot end.")
+
+            log.WithFields(logrus.Fields{
+                "slot":          blockSlot,
+                "root":          fmt.Sprintf("%#x", root),
+                "blobsExpected": expected,
+                "blobsWaiting":  len(missing),
+            }).Error("Still waiting for blobs DA check at slot end.")
         })
         defer nst.Stop()
     }
@@ -678,13 +890,14 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
     }
 }

-func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
-    return logrus.Fields{
-        "slot":          slot,
-        "root":          fmt.Sprintf("%#x", root),
-        "blobsExpected": expected,
-        "blobsWaiting":  missing,
+// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
+func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
+    output := make([]uint64, 0, len(input))
+    for idx := range input {
+        output = append(output, idx)
     }
+    slices.Sort[[]uint64](output)
+    return output
 }

 // lateBlockTasks is called 4 seconds into the slot and performs tasks
@@ -770,7 +983,7 @@ func (s *Service) waitForSync() error {
     }
 }

-func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
+func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [fieldparams.RootLength]byte) error {
     if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
         return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
     }
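areDataColumnsAvailable follows a check-then-wait shape worth calling out: consult the store first, subscribe to new writes, then block in a select with a context escape hatch and a one-shot timer that logs if the wait crosses the slot boundary, returning early once enough columns are stored to reconstruct the rest. A stripped-down, self-contained sketch of that shape (all names hypothetical, not the Prysm API):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitForIndices blocks until every index in missing is observed on notifications,
    // enough indices arrive to reconstruct the remainder, or ctx is done.
    func waitForIndices(ctx context.Context, missing map[uint64]bool, stored, reconstructThreshold int, notifications <-chan uint64, slotEnd time.Time) error {
        if stored >= reconstructThreshold || len(missing) == 0 {
            return nil // happy path: nothing to wait for
        }
        // One-shot warning if the wait crosses the slot boundary.
        // NB: reading missing here races with the deletes below; the sketch keeps it
        // simple, but a production version should snapshot the count or use a mutex.
        if slotEnd.After(time.Now()) {
            timer := time.AfterFunc(time.Until(slotEnd), func() {
                fmt.Printf("still waiting for %d indices at slot end\n", len(missing))
            })
            defer timer.Stop()
        }
        for {
            select {
            case idx := <-notifications:
                if missing[idx] {
                    stored++
                }
                if stored >= reconstructThreshold {
                    return nil // enough columns to reconstruct the rest
                }
                delete(missing, idx)
                if len(missing) == 0 {
                    return nil
                }
            case <-ctx.Done():
                return fmt.Errorf("%d indices still missing: %w", len(missing), ctx.Err())
            }
        }
    }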
@@ -131,6 +131,12 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
 }

 func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
+    if err := s.processLightClientUpdate(cfg); err != nil {
+        log.WithError(err).Error("Failed to process light client update")
+    }
+    if err := s.processLightClientBootstrap(cfg); err != nil {
+        log.WithError(err).Error("Failed to process light client bootstrap")
+    }
     if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
         log.WithError(err).Error("Failed to process light client optimistic update")
     }
@@ -139,38 +145,33 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
     }
 }

-// saveLightClientUpdate saves the light client update for this block
+// processLightClientUpdate saves the light client update for this block
 // if it's better than the already saved one, when feature flag is enabled.
-func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
+func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
     attestedRoot := cfg.roblock.Block().ParentRoot()
     attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
     if err != nil {
-        log.WithError(err).Errorf("Saving light client update failed: Could not get attested block for root %#x", attestedRoot)
-        return
+        return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
     }
     if attestedBlock == nil || attestedBlock.IsNil() {
-        log.Error("Saving light client update failed: Attested block is nil")
-        return
+        return errors.New("attested block is nil")
     }
     attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
     if err != nil {
-        log.WithError(err).Errorf("Saving light client update failed: Could not get attested state for root %#x", attestedRoot)
-        return
+        return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
     }
     if attestedState == nil || attestedState.IsNil() {
-        log.Error("Saving light client update failed: Attested state is nil")
-        return
+        return errors.New("attested state is nil")
     }

     finalizedRoot := attestedState.FinalizedCheckpoint().Root
     finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
     if err != nil {
         if errors.Is(err, errBlockNotFoundInCacheOrDB) {
-            log.Debugf("Skipping saving light client update: Finalized block is nil for root %#x", finalizedRoot)
-        } else {
-            log.WithError(err).Errorf("Saving light client update failed: Could not get finalized block for root %#x", finalizedRoot)
+            log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
+            return nil
         }
-        return
+        return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
     }

     update, err := lightclient.NewLightClientUpdateFromBeaconState(
@@ -183,57 +184,52 @@
         finalizedBlock,
     )
     if err != nil {
-        log.WithError(err).Error("Saving light client update failed: Could not create light client update")
-        return
+        return errors.Wrapf(err, "could not create light client update")
     }

     period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))

     oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
     if err != nil {
-        log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
-        return
+        return errors.Wrapf(err, "could not get current light client update")
     }

     if oldUpdate == nil {
         if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
-            log.WithError(err).Error("Saving light client update failed: Could not save light client update")
-        } else {
-            log.WithField("period", period).Debug("Saving light client update: Saved new update")
+            return errors.Wrapf(err, "could not save light client update")
         }
-        return
+        log.WithField("period", period).Debug("Saved new light client update")
+        return nil
     }

     isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
     if err != nil {
-        log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
-        return
+        return errors.Wrapf(err, "could not compare light client updates")
     }

     if isNewUpdateBetter {
         if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
-            log.WithError(err).Error("Saving light client update failed: Could not save light client update")
-        } else {
-            log.WithField("period", period).Debug("Saving light client update: Saved new update")
+            return errors.Wrapf(err, "could not save light client update")
         }
-    } else {
-        log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
+        log.WithField("period", period).Debug("Saved new light client update")
+        return nil
     }
+    log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
+    return nil
 }

-// saveLightClientBootstrap saves a light client bootstrap for this block
+// processLightClientBootstrap saves a light client bootstrap for this block
 // when feature flag is enabled.
-func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
+func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error {
     blockRoot := cfg.roblock.Root()
     bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
     if err != nil {
-        log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
-        return
+        return errors.Wrapf(err, "could not create light client bootstrap")
     }
-    err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
-    if err != nil {
-        log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
+    if err := s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap); err != nil {
+        return errors.Wrapf(err, "could not save light client bootstrap")
     }
+    return nil
 }

 func (s *Service) processLightClientFinalityUpdate(
@@ -310,7 +306,7 @@ func (s *Service) processLightClientFinalityUpdate(
         Data: newUpdate,
     })

-    if err = s.cfg.P2p.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
+    if err = s.cfg.P2P.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
         return errors.Wrap(err, "could not broadcast light client finality update")
     }

@@ -363,7 +359,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
         Data: newUpdate,
     })

-    if err = s.cfg.P2p.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
+    if err = s.cfg.P2P.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
         return errors.Wrap(err, "could not broadcast light client optimistic update")
     }
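The rewrite above is a systematic error-handling refactor: saveLightClientUpdate and saveLightClientBootstrap logged and swallowed every failure internally, while their successors processLightClientUpdate and processLightClientBootstrap wrap and return errors, leaving the single caller processLightClientUpdates to decide how to log. A minimal before/after sketch of the shape, with hypothetical names:

    package main

    import (
        "errors"
        "fmt"
    )

    func persist() error { return errors.New("disk full") }

    // Before: the helper logs and returns nothing, so callers cannot react.
    func saveThing() {
        if err := persist(); err != nil {
            fmt.Println("saving thing failed:", err)
        }
    }

    // After: the helper wraps and returns; exactly one call site logs.
    func processThing() error {
        if err := persist(); err != nil {
            return fmt.Errorf("could not persist thing: %w", err)
        }
        return nil
    }

    func main() {
        saveThing()
        if err := processThing(); err != nil {
            fmt.Println("failed to process thing:", err)
        }
    }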
@@ -13,6 +13,7 @@ import (
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
     lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
+    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
     "github.com/OffchainLabs/prysm/v6/beacon-chain/das"
@@ -50,7 +51,7 @@ import (
 )

 func Test_pruneAttsFromPool_Electra(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     logHook := logTest.NewGlobal()

     params.SetupTestConfigCleanup(t)
@@ -240,7 +241,7 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
     fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
     require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fcp2))
     err = service.fillInForkChoiceMissingBlocks(
-        context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
+        t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
     require.NoError(t, err)

     // 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
@@ -283,7 +284,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
     require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fcp2))

     err = service.fillInForkChoiceMissingBlocks(
-        context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
+        t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
     require.NoError(t, err)

     // 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
@@ -293,7 +294,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
     wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
     for i, rt := range wantedRoots {
         assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(rt)), fmt.Sprintf("Didn't save node: %d", i))
-        assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(context.Background(), bytesutil.ToBytes32(rt)))
+        assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(t.Context(), bytesutil.ToBytes32(rt)))
     }
 }

@@ -339,7 +340,7 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
     // Set finalized epoch to 2.
     require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 2, Root: r64}))
     err = service.fillInForkChoiceMissingBlocks(
-        context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
+        t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
     require.NoError(t, err)

     // There should be 1 node: block 65
@@ -372,7 +373,7 @@ func TestFillForkChoiceMissingBlocks_FinalizedSibling(t *testing.T) {
     require.NoError(t, err)

     err = service.fillInForkChoiceMissingBlocks(
-        context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
+        t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
     require.Equal(t, ErrNotDescendantOfFinalized.Error(), err.Error())
 }

@@ -450,20 +451,20 @@ func blockTree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][]byt
         beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
         wsb, err := consensusblocks.NewSignedBeaconBlock(beaconBlock)
         require.NoError(t, err)
-        if err := beaconDB.SaveBlock(context.Background(), wsb); err != nil {
+        if err := beaconDB.SaveBlock(t.Context(), wsb); err != nil {
            return nil, err
         }
-        if err := beaconDB.SaveState(context.Background(), st.Copy(), bytesutil.ToBytes32(beaconBlock.Block.ParentRoot)); err != nil {
+        if err := beaconDB.SaveState(t.Context(), st.Copy(), bytesutil.ToBytes32(beaconBlock.Block.ParentRoot)); err != nil {
             return nil, errors.Wrap(err, "could not save state")
         }
     }
-    if err := beaconDB.SaveState(context.Background(), st.Copy(), r1); err != nil {
+    if err := beaconDB.SaveState(t.Context(), st.Copy(), r1); err != nil {
         return nil, err
     }
-    if err := beaconDB.SaveState(context.Background(), st.Copy(), r7); err != nil {
+    if err := beaconDB.SaveState(t.Context(), st.Copy(), r7); err != nil {
         return nil, err
     }
-    if err := beaconDB.SaveState(context.Background(), st.Copy(), r8); err != nil {
+    if err := beaconDB.SaveState(t.Context(), st.Copy(), r8); err != nil {
         return nil, err
     }
     return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
@@ -476,7 +477,7 @@ func TestCurrentSlot_HandlesOverflow(t *testing.T) {
     require.Equal(t, primitives.Slot(0), slot, "Unexpected slot")
 }
 func TestAncestorByDB_CtxErr(t *testing.T) {
-    ctx, cancel := context.WithCancel(context.Background())
+    ctx, cancel := context.WithCancel(t.Context())
     opts := testServiceOptsWithDB(t)
     service, err := NewService(ctx, opts...)
     require.NoError(t, err)
@@ -509,18 +510,18 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
         beaconBlock := util.NewBeaconBlock()
         beaconBlock.Block.Slot = b.Block.Slot
         beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
-        util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
+        util.SaveBlock(t, t.Context(), beaconDB, beaconBlock)
     }

     // Slots 100 to 200 are skip slots. Requesting root at 150 will yield root at 100. The last physical block.
-    r, err := service.Ancestor(context.Background(), r200[:], 150)
+    r, err := service.Ancestor(t.Context(), r200[:], 150)
     require.NoError(t, err)
     if bytesutil.ToBytes32(r) != r100 {
         t.Error("Did not get correct root")
     }

     // Slots 1 to 100 are skip slots. Requesting root at 50 will yield root at 1. The last physical block.
-    r, err = service.Ancestor(context.Background(), r200[:], 50)
+    r, err = service.Ancestor(t.Context(), r200[:], 50)
     require.NoError(t, err)
     if bytesutil.ToBytes32(r) != r1 {
         t.Error("Did not get correct root")
@@ -528,7 +529,7 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
 }

 func TestAncestor_CanUseForkchoice(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     opts := testServiceOptsWithDB(t)
     service, err := NewService(ctx, opts...)
     require.NoError(t, err)
@@ -556,12 +557,12 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
         beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
         r, err := b.Block.HashTreeRoot()
         require.NoError(t, err)
-        st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
+        st, blkRoot, err := prepareForkchoiceState(t.Context(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
         require.NoError(t, err)
         require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
     }

-    r, err := service.Ancestor(context.Background(), r200[:], 150)
+    r, err := service.Ancestor(t.Context(), r200[:], 150)
     require.NoError(t, err)
     if bytesutil.ToBytes32(r) != r100 {
         t.Error("Did not get correct root")
@@ -593,14 +594,14 @@ func TestAncestor_CanUseDB(t *testing.T) {
         beaconBlock := util.NewBeaconBlock()
         beaconBlock.Block.Slot = b.Block.Slot
         beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
-        util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
+        util.SaveBlock(t, t.Context(), beaconDB, beaconBlock)
     }

-    st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
+    st, blkRoot, err := prepareForkchoiceState(t.Context(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
     require.NoError(t, err)
     require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))

-    r, err := service.Ancestor(context.Background(), r200[:], 150)
+    r, err := service.Ancestor(t.Context(), r200[:], 150)
     require.NoError(t, err)
     if bytesutil.ToBytes32(r) != r100 {
         t.Error("Did not get correct root")
@@ -608,7 +609,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
 }

 func TestEnsureRootNotZeroHashes(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     opts := testServiceOptsNoDB()
     service, err := NewService(ctx, opts...)
     require.NoError(t, err)
@@ -622,7 +623,7 @@ func TestEnsureRootNotZeroHashes(t *testing.T) {
 }

 func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
-    ctx := context.Background()
+    ctx := t.Context()
     opts := testServiceOptsNoDB()
     service, err := NewService(ctx, opts...)
     require.NoError(t, err)
@@ -921,7 +922,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
     r, err := b.Block.HashTreeRoot()
     require.NoError(t, err)

-    ctx := context.Background()
+    ctx := t.Context()
     beaconDB := testDB.SetupDB(t)
     service := setupBeaconChain(t, beaconDB)
     require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: r[:]}))
@@ -934,7 +935,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
     require.NoError(t, service.cfg.AttPool.SaveAggregatedAttestations(atts))
     wsb, err := consensusblocks.NewSignedBeaconBlock(b)
     require.NoError(t, err)
-    service.pruneAttsFromPool(context.Background(), nil /* state not needed pre-Electra */, wsb)
+    service.pruneAttsFromPool(t.Context(), nil /* state not needed pre-Electra */, wsb)
     require.LogsDoNotContain(t, logHook, "Could not prune attestations")
     require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
 }
@@ -2331,13 +2332,13 @@ func driftGenesisTime(s *Service, slot, delay int64) {
     s.cfg.ForkChoiceStore.SetGenesisTime(uint64(newTime.Unix()))
 }

-func TestMissingIndices(t *testing.T) {
+func TestMissingBlobIndices(t *testing.T) {
     cases := []struct {
         name     string
         expected [][]byte
         present  []uint64
         result   map[uint64]struct{}
-        root     [32]byte
+        root     [fieldparams.RootLength]byte
         err      error
     }{
         {
@@ -2395,7 +2396,7 @@ func TestMissingIndices(t *testing.T) {
         bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
         t.Run(c.name, func(t *testing.T) {
             require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
-            missing, err := missingIndices(bs, c.root, c.expected, 0)
+            missing, err := missingBlobIndices(bs, c.root, c.expected, 0)
             if c.err != nil {
                 require.ErrorIs(t, err, c.err)
                 return
@@ -2403,9 +2404,70 @@ func TestMissingIndices(t *testing.T) {
             require.NoError(t, err)
             require.Equal(t, len(c.result), len(missing))
             for key := range c.result {
-                m, ok := missing[key]
-                require.Equal(t, true, ok)
-                require.Equal(t, c.result[key], m)
+                require.Equal(t, true, missing[key])
             }
         })
     }
 }

+func TestMissingDataColumnIndices(t *testing.T) {
+    countPlusOne := params.BeaconConfig().NumberOfColumns + 1
+    tooManyColumns := make(map[uint64]bool, countPlusOne)
+    for i := range countPlusOne {
+        tooManyColumns[uint64(i)] = true
+    }
+
+    testCases := []struct {
+        name          string
+        storedIndices []uint64
+        input         map[uint64]bool
+        expected      map[uint64]bool
+        err           error
+    }{
+        {
+            name:  "zero len expected",
+            input: map[uint64]bool{},
+        },
+        {
+            name:  "expected exceeds max",
+            input: tooManyColumns,
+            err:   errMaxDataColumnsExceeded,
+        },
+        {
+            name:          "all missing",
+            storedIndices: []uint64{},
+            input:         map[uint64]bool{0: true, 1: true, 2: true},
+            expected:      map[uint64]bool{0: true, 1: true, 2: true},
+        },
+        {
+            name:          "none missing",
+            input:         map[uint64]bool{0: true, 1: true, 2: true},
+            expected:      map[uint64]bool{},
+            storedIndices: []uint64{0, 1, 2, 3, 4}, // Extra columns stored but not expected
+        },
+        {
+            name:          "some missing",
+            storedIndices: []uint64{0, 20},
+            input:         map[uint64]bool{0: true, 10: true, 20: true, 30: true},
+            expected:      map[uint64]bool{10: true, 30: true},
+        },
+    }
+
+    var emptyRoot [fieldparams.RootLength]byte
+
+    for _, tc := range testCases {
+        t.Run(tc.name, func(t *testing.T) {
+            dcm, dcs := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
+            err := dcm.CreateFakeIndices(emptyRoot, 0, tc.storedIndices...)
+            require.NoError(t, err)
+
+            // Test the function
+            actual, err := missingDataColumnIndices(dcs, emptyRoot, tc.input)
+            require.ErrorIs(t, err, tc.err)
+
+            require.Equal(t, len(tc.expected), len(actual))
+            for key := range tc.expected {
+                require.Equal(t, true, actual[key])
+            }
+        })
+    }
@@ -2605,7 +2667,7 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
|
||||
require.Equal(t, true, hasState)
|
||||
|
||||
// Set deadlined context when processing the block
|
||||
cancCtx, canc := context.WithCancel(context.Background())
|
||||
cancCtx, canc := context.WithCancel(t.Context())
|
||||
canc()
|
||||
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
|
||||
require.NoError(t, err)
|
||||
@@ -2644,7 +2706,7 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
|
||||
return r
|
||||
}
|
||||
|
||||
func TestSaveLightClientUpdate(t *testing.T) {
|
||||
func TestProcessLightClientUpdate(t *testing.T) {
|
||||
featCfg := &features.Flags{}
|
||||
featCfg.EnableLightClient = true
|
||||
reset := features.InitWithReset(featCfg)
|
||||
@@ -2685,7 +2747,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
|
||||
isValidPayload: true,
|
||||
}
|
||||
|
||||
s.saveLightClientUpdate(cfg)
|
||||
require.NoError(t, s.processLightClientUpdate(cfg))
|
||||
|
||||
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2740,7 +2802,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2801,7 +2863,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2844,7 +2906,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
isValidPayload: true,
}

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2898,7 +2960,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2959,7 +3021,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3002,7 +3064,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
isValidPayload: true,
}

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -3056,7 +3118,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3117,7 +3179,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)

-s.saveLightClientUpdate(cfg)
+require.NoError(t, s.processLightClientUpdate(cfg))

u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3130,7 +3192,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
reset()
}

-func TestSaveLightClientBootstrap(t *testing.T) {
+func TestProcessLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
@@ -3160,7 +3222,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}

-s.saveLightClientBootstrap(cfg)
+require.NoError(t, s.processLightClientBootstrap(cfg))

// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3195,7 +3257,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}

-s.saveLightClientBootstrap(cfg)
+require.NoError(t, s.processLightClientBootstrap(cfg))

// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3230,7 +3292,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
isValidPayload: true,
}

-s.saveLightClientBootstrap(cfg)
+require.NoError(t, s.processLightClientBootstrap(cfg))

// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3246,6 +3308,235 @@ func TestSaveLightClientBootstrap(t *testing.T) {
reset()
}

type testIsAvailableParams struct {
options []Option
blobKzgCommitmentsCount uint64
columnsToSave []uint64
}

func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
ctx, cancel := context.WithCancel(t.Context())
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)

options := append(params.options, WithDataColumnStorage(dataColumnStorage))
service, _ := minimalTestService(t, options...)

genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32 /*validator count*/)

err := service.saveGenesisData(ctx, genesisState)
require.NoError(t, err)

conf := util.DefaultBlockGenConfig()
conf.NumBlobKzgCommitments = params.blobKzgCommitmentsCount

signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
require.NoError(t, err)

block := signedBeaconBlock.Block
bodyRoot, err := block.Body.HashTreeRoot()
require.NoError(t, err)

root, err := block.HashTreeRoot()
require.NoError(t, err)

dataColumnsParams := make([]util.DataColumnParam, 0, len(params.columnsToSave))
for _, i := range params.columnsToSave {
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: block.Slot,
ProposerIndex: block.ProposerIndex,
ParentRoot: block.ParentRoot,
StateRoot: block.StateRoot,
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParam)
}

_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)

err = dataColumnStorage.Save(verifiedRODataColumns)
require.NoError(t, err)

signed, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)

return ctx, cancel, service, root, signed
}

func TestIsDataAvailable(t *testing.T) {
t.Run("Fulu - out of retention window", func(t *testing.T) {
params := testIsAvailableParams{options: []Option{WithGenesisTime(time.Unix(0, 0))}}
ctx, _, service, root, signed := testIsAvailableSetup(t, params)

err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})

t.Run("Fulu - no commitment in blocks", func(t *testing.T) {
ctx, _, service, root, signed := testIsAvailableSetup(t, testIsAvailableParams{})

err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})

t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
for i := range minimumColumnsCountToReconstruct {
indices = append(indices, i)
}

params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}

ctx, _, service, root, signed := testIsAvailableSetup(t, params)

err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})

t.Run("Fulu - no missing data columns", func(t *testing.T) {
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: []uint64{1, 17, 19, 42, 75, 87, 102, 117, 119}, // 119 is not needed
blobKzgCommitmentsCount: 3,
}

ctx, _, service, root, signed := testIsAvailableSetup(t, params)

err := service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})

t.Run("Fulu - some initially missing data columns (no reconstruction)", func(t *testing.T) {
startWaiting := make(chan bool)

testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
blobKzgCommitmentsCount: 3,
}

ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)

_, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{
{Index: 42, Slot: slot + 1}, // Needed index, but not for this slot.
})

_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 87, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
{Index: 1, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Not needed index
{Index: 42, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
})

go func() {
<-startWaiting

err := service.dataColumnStorage.Save(verifiedSidecarsWrongRoot)
require.NoError(t, err)

err = service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
}()

err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})

t.Run("Fulu - some initially missing data columns (reconstruction)", func(t *testing.T) {
const (
missingColumns = uint64(2)
cgc = 128
)

startWaiting := make(chan bool)

var custodyInfo peerdas.CustodyInfo
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)

minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)

for i := range minimumColumnsCountToReconstruct - missingColumns {
indices = append(indices, i)
}

testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&custodyInfo), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}

ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)

dataColumnParams := make([]util.DataColumnParam, 0, missingColumns)
for i := minimumColumnsCountToReconstruct - missingColumns; i < minimumColumnsCountToReconstruct; i++ {
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}

dataColumnParams = append(dataColumnParams, dataColumnParam)
}

_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParams)

go func() {
<-startWaiting

err := service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
}()

err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})

t.Run("Fulu - some columns are definitively missing", func(t *testing.T) {
startWaiting := make(chan bool)

params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
blobKzgCommitmentsCount: 3,
}

ctx, cancel, service, root, signed := testIsAvailableSetup(t, params)

go func() {
<-startWaiting
cancel()
}()

err := service.isDataAvailable(ctx, root, signed)
require.NotNil(t, err)
})
}
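The custody subtests above lean on peerdas.MinimumColumnsCountToReconstruct: under PeerDAS the blob data is erasure-coded at rate 1/2 across the column sidecars, so holding at least half of them is enough to rebuild the rest locally instead of waiting for them from the network. A minimal sketch of that rule, assuming the Fulu parameter of 128 columns (the constant and function names here are illustrative, not Prysm's API):

```go
package main

import "fmt"

// Illustrative stand-in for the Fulu column count; not imported from Prysm.
const numberOfColumns = 128

// minimumColumnsToReconstruct mirrors the idea behind
// peerdas.MinimumColumnsCountToReconstruct: the columns extend the blob data
// at rate 1/2, so any half of them determines the other half.
func minimumColumnsToReconstruct() int { return numberOfColumns / 2 }

// canServeAllColumns reports whether the locally stored columns are enough,
// either directly or after local reconstruction.
func canServeAllColumns(stored map[uint64]bool) bool {
	return len(stored) >= minimumColumnsToReconstruct()
}

func main() {
	stored := make(map[uint64]bool)
	for i := uint64(0); i < 64; i++ {
		stored[i] = true
	}
	fmt.Println(canServeAllColumns(stored)) // true: the other 64 can be rebuilt
}
```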
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
var l *util.TestLightClient
switch v {
@@ -3310,7 +3601,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
params.OverrideBeaconConfig(beaconCfg)

s, tr := minimalTestService(t)
-s.cfg.P2p = &mockp2p.FakeP2P{}
+s.cfg.P2P = &mockp2p.FakeP2P{}
ctx := tr.ctx

testCases := []struct {
@@ -3446,7 +3737,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
params.OverrideBeaconConfig(beaconCfg)

s, tr := minimalTestService(t)
-s.cfg.P2p = &mockp2p.FakeP2P{}
+s.cfg.P2P = &mockp2p.FakeP2P{}
ctx := tr.ctx

testCases := []struct {

@@ -1,7 +1,6 @@
package blockchain

import (
-"context"
"testing"
"time"

@@ -32,7 +31,7 @@ func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
service.genesisTime = time.Now()

e := primitives.Epoch(slots.MaxSlotBuffer/uint64(params.BeaconConfig().SlotsPerEpoch) + 1)
-_, err := service.AttestationTargetState(context.Background(), &ethpb.Checkpoint{Epoch: e})
+_, err := service.AttestationTargetState(t.Context(), &ethpb.Checkpoint{Epoch: e})
require.ErrorContains(t, "exceeds max allowed value relative to the local clock", err)
}

@@ -56,11 +55,11 @@ func TestVerifyLMDFFGConsistent(t *testing.T) {
a.Data.Target.Root = []byte{'c'}
r33Root := r33.Root()
a.Data.BeaconBlockRoot = r33Root[:]
-require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(context.Background(), a))
+require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(t.Context(), a))

r32Root := r32.Root()
a.Data.Target.Root = r32Root[:]
-err = service.VerifyLmdFfgConsistency(context.Background(), a)
+err = service.VerifyLmdFfgConsistency(t.Context(), a)
require.NoError(t, err, "Could not verify LMD and FFG votes to be consistent")
}
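Nearly every test hunk in this change set swaps context.Background() for t.Context() (or b.Context() in benchmarks). These helpers landed in Go 1.24: the returned context is canceled shortly before the test or benchmark finishes, so goroutines, RPCs, and DB handles bound to it are torn down rather than leaked. A minimal illustration, not taken from the change set:

```go
package example

import "testing"

// t.Context() (Go 1.24+) is canceled shortly before the test returns, so
// work bound to it is shut down automatically when the test ends.
func TestWithContext(t *testing.T) {
	ctx := t.Context() // replaces context.Background()
	select {
	case <-ctx.Done():
		t.Fatal("context should still be live while the test runs")
	default:
		// ctx.Done() fires once the test (or benchmark, via b.Context()) ends.
	}
}
```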

@@ -16,6 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/slasher/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
+fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -52,6 +53,13 @@ type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}

+// DataColumnReceiver interface defines the methods of chain service for receiving new
+// data columns
+type DataColumnReceiver interface {
+ReceiveDataColumn(blocks.VerifiedRODataColumn) error
+ReceiveDataColumns([]blocks.VerifiedRODataColumn) error
+}
+
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -74,6 +82,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
return nil
}
+
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
defer s.blockBeingSynced.unset(blockRoot)
@@ -82,6 +91,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
+
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
@@ -97,10 +107,12 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
+
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
+
// Defragment the state before continuing block processing.
s.defragmentState(postState)

@@ -227,26 +239,34 @@ func (s *Service) validateExecutionAndConsensus(
func (s *Service) handleDA(
ctx context.Context,
block interfaces.SignedBeaconBlock,
-blockRoot [32]byte,
+blockRoot [fieldparams.RootLength]byte,
avs das.AvailabilityStore,
-) (time.Duration, error) {
-daStartTime := time.Now()
-if avs != nil {
-rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
-if err != nil {
-return 0, err
+) (elapsed time.Duration, err error) {
+defer func(start time.Time) {
+elapsed = time.Since(start)
+
+if err == nil {
+dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
+}
}
-if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
-return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
-}
-} else {
-if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
-return 0, errors.Wrap(err, "could not validate blob data availability")
+}(time.Now())
+
+if avs == nil {
+if err = s.isDataAvailable(ctx, blockRoot, block); err != nil {
+return
}
+
+return
}
-daWaitedTime := time.Since(daStartTime)
-dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
-return daWaitedTime, nil
+
+var rob blocks.ROBlock
+rob, err = blocks.NewROBlockWithRoot(block, blockRoot)
+if err != nil {
+return
+}
+
+err = avs.IsDataAvailable(ctx, s.CurrentSlot(), rob)
+
+return
}
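The rewritten handleDA above replaces the explicit daStartTime bookkeeping with named results and a deferred closure, so every return path is timed and the histogram is updated only when no error occurred. A standalone sketch of that pattern; everything other than the defer/named-return mechanics is illustrative:

```go
package example

import "time"

// timedWork mirrors the structure of the new handleDA: named results plus a
// deferred closure measure elapsed time on every return path, and the metric
// sink is hit only when the work succeeded.
func timedWork(work func() error) (elapsed time.Duration, err error) {
	defer func(start time.Time) {
		elapsed = time.Since(start)
		if err == nil {
			observe(elapsed) // e.g. a histogram Observe call
		}
	}(time.Now())

	err = work()
	return
}

func observe(time.Duration) { /* metrics sink of your choice */ }
```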

func (s *Service) reportPostBlockProcessing(

@@ -1,7 +1,6 @@
package blockchain

import (
-"context"
"sync"
"testing"
"time"
@@ -28,7 +27,7 @@ import (
)

func TestService_ReceiveBlock(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()

genesis, keys := util.DeterministicGenesisState(t, 64)
copiedGen := genesis.Copy()
@@ -180,6 +179,19 @@ func TestService_ReceiveBlock(t *testing.T) {
}
wg.Wait()
}
+func TestHandleDA(t *testing.T) {
+signedBeaconBlock, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{
+Block: &ethpb.BeaconBlock{
+Body: &ethpb.BeaconBlockBody{},
+},
+})
+require.NoError(t, err)
+
+s, _ := minimalTestService(t)
+elapsed, err := s.handleDA(t.Context(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
+require.NoError(t, err)
+require.Equal(t, true, elapsed > 0, "Elapsed time should be greater than 0")
+}

func TestService_ReceiveBlockUpdateHead(t *testing.T) {
s, tr := minimalTestService(t,
@@ -215,7 +227,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
}

func TestService_ReceiveBlockBatch(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()

genesis, keys := util.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *util.BlockGenConfig, slot primitives.Slot) *ethpb.SignedBeaconBlock {
@@ -280,23 +292,23 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
func TestService_HasBlock(t *testing.T) {
s, _ := minimalTestService(t)
r := [32]byte{'a'}
-if s.HasBlock(context.Background(), r) {
+if s.HasBlock(t.Context(), r) {
t.Error("Should not have block")
}
wsb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
-require.NoError(t, s.saveInitSyncBlock(context.Background(), r, wsb))
-if !s.HasBlock(context.Background(), r) {
+require.NoError(t, s.saveInitSyncBlock(t.Context(), r, wsb))
+if !s.HasBlock(t.Context(), r) {
t.Error("Should have block")
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
-util.SaveBlock(t, context.Background(), s.cfg.BeaconDB, b)
+util.SaveBlock(t, t.Context(), s.cfg.BeaconDB, b)
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
-require.Equal(t, true, s.HasBlock(context.Background(), r))
+require.Equal(t, true, s.HasBlock(t.Context(), r))
s.blockBeingSynced.set(r)
-require.Equal(t, false, s.HasBlock(context.Background(), r))
+require.Equal(t, false, s.HasBlock(t.Context(), r))
}

func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
@@ -305,7 +317,7 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)

-require.NoError(t, s.checkSaveHotStateDB(context.Background()))
+require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsContain(t, hook, "Entering mode to save hot states in DB")
}

@@ -316,10 +328,10 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {

st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
-require.NoError(t, s.checkSaveHotStateDB(context.Background()))
+require.NoError(t, s.checkSaveHotStateDB(t.Context()))
s.genesisTime = time.Now()

-require.NoError(t, s.checkSaveHotStateDB(context.Background()))
+require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsContain(t, hook, "Exiting mode to save hot states in DB")
}

@@ -328,7 +340,7 @@ func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
s, _ := minimalTestService(t)
s.genesisTime = time.Now()

-require.NoError(t, s.checkSaveHotStateDB(context.Background()))
+require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsDoNotContain(t, hook, "Entering mode to save hot states in DB")
}

@@ -443,7 +455,7 @@ func Test_executePostFinalizationTasks(t *testing.T) {

headState, err := util.NewBeaconStateElectra()
require.NoError(t, err)
-finalizedStRoot, err := headState.HashTreeRoot(context.Background())
+finalizedStRoot, err := headState.HashTreeRoot(t.Context())
require.NoError(t, err)

genesis := util.NewBeaconBlock()

beacon-chain/blockchain/receive_data_column.go (new file, 25 lines)
@@ -0,0 +1,25 @@
+package blockchain
+
+import (
+"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
+"github.com/pkg/errors"
+)
+
+// ReceiveDataColumns receives a batch of data columns.
+func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
+if err := s.dataColumnStorage.Save(dataColumnSidecars); err != nil {
+return errors.Wrap(err, "save data column sidecars")
+}
+
+return nil
+}
+
+// ReceiveDataColumn receives a single data column.
+// (It is only a wrapper around ReceiveDataColumns.)
+func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
+if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
+return errors.Wrap(err, "save data column sidecars")
+}
+
+return nil
+}
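Both receivers in the new file are thin wrappers that persist verified sidecars through dataColumnStorage.Save. A hypothetical caller-side sketch follows; sidecarHandler and onVerified are invented for illustration, and only the two Receive methods come from this change set:

```go
package example

import (
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
)

// DataColumnReceiver restates the interface added in this change set.
type DataColumnReceiver interface {
	ReceiveDataColumn(blocks.VerifiedRODataColumn) error
	ReceiveDataColumns([]blocks.VerifiedRODataColumn) error
}

// sidecarHandler is hypothetical; it stands in for a sync component that has
// finished verifying a batch of data column sidecars.
type sidecarHandler struct {
	chain DataColumnReceiver
}

func (h *sidecarHandler) onVerified(cols []blocks.VerifiedRODataColumn) error {
	if len(cols) == 1 {
		// Single-sidecar path; in the service it simply wraps the batch call.
		return h.chain.ReceiveDataColumn(cols[0])
	}
	return h.chain.ReceiveDataColumns(cols)
}
```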
@@ -16,6 +16,7 @@ import (
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
+"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -46,26 +47,28 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
+dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
lcStore *lightClient.Store
+startWaitingDataColumnSidecars chan bool // for testing purposes only
}

// config options for the service.
@@ -81,7 +84,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
-P2p p2p.Broadcaster
+P2P p2p.Accessor
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -93,6 +96,7 @@ type config struct {
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
+CustodyInfo *peerdas.CustodyInfo
}

// Checker is an interface used to determine if a node is in initial sync

@@ -1,7 +1,6 @@
package blockchain

import (
-"context"
"io"
"testing"

@@ -26,7 +25,7 @@ func TestChainService_SaveHead_DataRace(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, err)
go func() {
-require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
+require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}()
-require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
+require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}

@@ -42,7 +42,7 @@ import (
)

func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
-ctx := context.Background()
+ctx := t.Context()
var web3Service *execution.Service
var err error
srv, endpoint, err := mockExecution.SetupRPCServer()
@@ -97,7 +97,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
-WithP2PBroadcaster(&mockBroadcaster{}),
+WithP2PBroadcaster(&mockAccessor{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
@@ -115,7 +115,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {

func TestChainStartStop_Initialized(t *testing.T) {
hook := logTest.NewGlobal()
-ctx := context.Background()
+ctx := t.Context()
beaconDB := testDB.SetupDB(t)

chainService := setupBeaconChain(t, beaconDB)
@@ -152,7 +152,7 @@ func TestChainStartStop_Initialized(t *testing.T) {

func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
hook := logTest.NewGlobal()
-ctx := context.Background()
+ctx := t.Context()
beaconDB := testDB.SetupDB(t)

chainService := setupBeaconChain(t, beaconDB)
@@ -184,7 +184,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
func TestChainService_InitializeBeaconChain(t *testing.T) {
helpers.ClearCache()
beaconDB := testDB.SetupDB(t)
-ctx := context.Background()
+ctx := t.Context()

bc := setupBeaconChain(t, beaconDB)
var err error
@@ -226,7 +226,7 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
}

func TestChainService_CorrectGenesisRoots(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()
beaconDB := testDB.SetupDB(t)

chainService := setupBeaconChain(t, beaconDB)
@@ -295,7 +295,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, c.HeadSlot(), headBlock.Block.Slot, "Head slot incorrect")
-r, err := c.HeadRoot(context.Background())
+r, err := c.HeadRoot(t.Context())
require.NoError(t, err)
if !bytes.Equal(headRoot[:], r) {
t.Error("head slot incorrect")
@@ -346,7 +346,7 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {

func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
-ctx := context.Background()
+ctx := t.Context()
fc := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
@@ -370,7 +370,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
}

func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
@@ -391,7 +391,7 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
}

func TestServiceStop_SaveCachedBlocks(t *testing.T) {
-ctx, cancel := context.WithCancel(context.Background())
+ctx, cancel := context.WithCancel(t.Context())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
@@ -410,13 +410,13 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
}

func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
mgs := &MockClockSetter{}
service.clockSetter = mgs
gt := time.Now()
-service.onExecutionChainStart(context.Background(), gt)
+service.onExecutionChainStart(t.Context(), gt)
gs, err := beaconDB.GenesisState(ctx)
require.NoError(t, err)
require.NotEqual(t, nil, gs)
@@ -429,7 +429,7 @@ func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {

func BenchmarkHasBlockDB(b *testing.B) {
beaconDB := testDB.SetupDB(b)
-ctx := context.Background()
+ctx := b.Context()
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
@@ -447,7 +447,7 @@ func BenchmarkHasBlockDB(b *testing.B) {
}

func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
-ctx := context.Background()
+ctx := b.Context()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},

@@ -20,8 +20,10 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
+p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
+fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -47,6 +49,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}

+type mockAccessor struct {
+mockBroadcaster
+p2pTesting.MockPeerManager
+}
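mockAccessor works by struct embedding: it combines mockBroadcaster with p2pTesting.MockPeerManager so that a single test double satisfies the wider p2p.Accessor dependency the config now declares (P2P p2p.Accessor instead of P2p p2p.Broadcaster). A self-contained sketch of the idiom, with illustrative interface names:

```go
package example

type Broadcaster interface{ Broadcast(msg string) error }
type PeerManager interface{ PeerCount() int }

// Accessor is the union the service config asks for.
type Accessor interface {
	Broadcaster
	PeerManager
}

type fakeBroadcaster struct{}

func (fakeBroadcaster) Broadcast(string) error { return nil }

type fakePeerManager struct{}

func (fakePeerManager) PeerCount() int { return 0 }

// fakeAccessor satisfies Accessor through its embedded fields, without
// re-implementing either method set.
type fakeAccessor struct {
	fakeBroadcaster
	fakePeerManager
}

var _ Accessor = fakeAccessor{}
```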

func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -77,6 +84,11 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}

+func (mb *mockBroadcaster) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar, _ ...chan<- bool) error {
+mb.broadcastCalled = true
+return nil
+}
+
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}

@@ -96,7 +108,7 @@ type testServiceRequirements struct {
}

func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceRequirements) {
-ctx := context.Background()
+ctx := t.Context()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
sg := stategen.New(beaconDB, fcs)
@@ -132,8 +144,10 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
+WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
WithSyncChecker(mock.MockChecker{}),
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
WithP2PBroadcaster(&mockAccessor{}),
WithLightClientStore(&lightclient.Store{}),
}
// append the variadic opts so they override the defaults by being processed afterwards

@@ -75,6 +75,7 @@ type ChainService struct {
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
+DataColumns []blocks.VerifiedRODataColumn
TargetRoot [32]byte
MockHeadSlot *primitives.Slot
}
@@ -715,6 +716,17 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}

+// ReceiveDataColumn implements the same method in chain service
+func (c *ChainService) ReceiveDataColumn(dc blocks.VerifiedRODataColumn) error {
+c.DataColumns = append(c.DataColumns, dc)
+return nil
+}
+
+// ReceiveDataColumns implements the same method in chain service
+func (*ChainService) ReceiveDataColumns(_ []blocks.VerifiedRODataColumn) error {
+return nil
+}
+
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

@@ -1,7 +1,6 @@
package blockchain

import (
-"context"
"testing"

testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -22,7 +21,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {

b := util.NewBeaconBlock()
b.Block.Slot = 1792480
-util.SaveBlock(t, context.Background(), beaconDB, b)
+util.SaveBlock(t, t.Context(), beaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)

@@ -79,7 +78,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
}
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
-err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), cp.Epoch)
+err = s.wsVerifier.VerifyWeakSubjectivity(t.Context(), cp.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)
} else {

@@ -1,7 +1,6 @@
package builder

import (
-"context"
"testing"
"time"

@@ -15,19 +14,19 @@ import (
)

func Test_NewServiceWithBuilder(t *testing.T) {
-s, err := NewService(context.Background(), WithBuilderClient(&buildertesting.MockClient{}))
+s, err := NewService(t.Context(), WithBuilderClient(&buildertesting.MockClient{}))
require.NoError(t, err)
assert.Equal(t, true, s.Configured())
}

func Test_NewServiceWithoutBuilder(t *testing.T) {
-s, err := NewService(context.Background())
+s, err := NewService(t.Context())
require.NoError(t, err)
assert.Equal(t, false, s.Configured())
}

func Test_RegisterValidator(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()
db := dbtesting.SetupDB(t)
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
@@ -40,7 +39,7 @@ func Test_RegisterValidator(t *testing.T) {
}

func Test_RegisterValidator_WithCache(t *testing.T) {
-ctx := context.Background()
+ctx := t.Context()
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
s, err := NewService(ctx, WithRegistrationCache(), WithHeadFetcher(headFetcher), WithBuilderClient(&builder))
@@ -55,16 +54,16 @@ func Test_RegisterValidator_WithCache(t *testing.T) {
}

func Test_BuilderMethodsWithouClient(t *testing.T) {
-s, err := NewService(context.Background())
+s, err := NewService(t.Context())
require.NoError(t, err)
assert.Equal(t, false, s.Configured())

-_, err = s.GetHeader(context.Background(), 0, [32]byte{}, [48]byte{})
+_, err = s.GetHeader(t.Context(), 0, [32]byte{}, [48]byte{})
assert.ErrorContains(t, ErrNoBuilder.Error(), err)

-_, _, err = s.SubmitBlindedBlock(context.Background(), nil)
+_, _, err = s.SubmitBlindedBlock(t.Context(), nil)
assert.ErrorContains(t, ErrNoBuilder.Error(), err)

-err = s.RegisterValidator(context.Background(), nil)
+err = s.RegisterValidator(t.Context(), nil)
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
}

beacon-chain/cache/committee_fuzz_test.go (vendored, 9 lines changed)
@@ -3,7 +3,6 @@
package cache

import (
-"context"
"testing"

"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -30,8 +29,8 @@ func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {

for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
-require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
-_, err := cache.Committee(context.Background(), 0, c.Seed, 0)
+require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))
+_, err := cache.Committee(t.Context(), 0, c.Seed, 0)
require.NoError(t, err)
}

@@ -45,9 +44,9 @@ func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {

for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
-require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
+require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))

-indices, err := cache.ActiveIndices(context.Background(), c.Seed)
+indices, err := cache.ActiveIndices(t.Context(), c.Seed)
require.NoError(t, err)
assert.DeepEqual(t, c.SortedIndices, indices)
}

beacon-chain/cache/committee_test.go (vendored, 28 lines changed)
@@ -44,15 +44,15 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {

slot := params.BeaconConfig().SlotsPerEpoch
committeeIndex := primitives.CommitteeIndex(1)
-indices, err := cache.Committee(context.Background(), slot, item.Seed, committeeIndex)
+indices, err := cache.Committee(t.Context(), slot, item.Seed, committeeIndex)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
-require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
+require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))

wantedIndex := primitives.CommitteeIndex(0)
-indices, err = cache.Committee(context.Background(), slot, item.Seed, wantedIndex)
+indices, err = cache.Committee(t.Context(), slot, item.Seed, wantedIndex)
require.NoError(t, err)

start, end := startEndIndices(item, uint64(wantedIndex))
@@ -63,15 +63,15 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
cache := NewCommitteesCache()

item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
-indices, err := cache.ActiveIndices(context.Background(), item.Seed)
+indices, err := cache.ActiveIndices(t.Context(), item.Seed)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}

-require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
+require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))

-indices, err = cache.ActiveIndices(context.Background(), item.Seed)
+indices, err = cache.ActiveIndices(t.Context(), item.Seed)
require.NoError(t, err)
assert.DeepEqual(t, item.SortedIndices, indices)
}
@@ -80,13 +80,13 @@ func TestCommitteeCache_ActiveCount(t *testing.T) {
cache := NewCommitteesCache()

item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
-count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
+count, err := cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")

-require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
+require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))

-count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
+count, err = cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, len(item.SortedIndices), count)
}
@@ -100,7 +100,7 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
for i := start; i < end; i++ {
s := []byte(strconv.Itoa(i))
item := &Committees{Seed: bytesutil.ToBytes32(s)}
-require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
+require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
}

k := cache.CommitteeCache.Keys()
@@ -130,7 +130,7 @@ func TestCommitteeCacheOutOfRange(t *testing.T) {
assert.NoError(t, err)
_ = cache.CommitteeCache.Add(key, comms)

-_, err = cache.Committee(context.Background(), 0, seed, math.MaxUint64) // Overflow!
+_, err = cache.Committee(t.Context(), 0, seed, math.MaxUint64) // Overflow!
require.NotNil(t, err, "Did not fail as expected")
}

@@ -138,15 +138,15 @@ func TestCommitteeCache_DoesNothingWhenCancelledContext(t *testing.T) {
cache := NewCommitteesCache()

item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
-count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
+count, err := cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")

-cancelled, cancel := context.WithCancel(context.Background())
+cancelled, cancel := context.WithCancel(t.Context())
cancel()
require.ErrorIs(t, cache.AddCommitteeShuffledList(cancelled, item), context.Canceled)

-count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
+count, err = cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count)
}
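TestCommitteeCache_DoesNothingWhenCancelledContext pins down a small contract: a write against an already-canceled context must store nothing and surface context.Canceled. A simplified stand-in (not the real CommitteesCache) showing the check:

```go
package example

import "context"

type cache struct{ items map[string][]uint64 }

func newCache() *cache { return &cache{items: make(map[string][]uint64)} }

// Add stores nothing and returns context.Canceled when the caller has
// already canceled, matching the behavior the test asserts.
func (c *cache) Add(ctx context.Context, key string, v []uint64) error {
	if err := ctx.Err(); err != nil {
		return err
	}
	c.items[key] = v
	return nil
}
```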

@@ -2,7 +2,6 @@ package depositsnapshot

import (
"bytes"
-"context"
"fmt"
"math/big"
"testing"
@@ -55,7 +54,7 @@ func TestAllDeposits_ReturnsAllDeposits(t *testing.T) {
}
dc.deposits = deposits

-d := dc.AllDeposits(context.Background(), nil)
+d := dc.AllDeposits(t.Context(), nil)
assert.Equal(t, len(deposits), len(d))
}

@@ -95,7 +94,7 @@ func TestAllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testing.T) {
}
dc.deposits = deposits

-d := dc.AllDeposits(context.Background(), big.NewInt(11))
+d := dc.AllDeposits(t.Context(), big.NewInt(11))
assert.Equal(t, 5, len(d))
}

@@ -127,7 +126,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
-n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(13))
+n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(13))
assert.Equal(t, 4, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -143,7 +142,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
-n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
+n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
assert.Equal(t, 1, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -169,7 +168,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
Deposit: &ethpb.Deposit{},
},
}
-n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
+n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
assert.Equal(t, 2, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -185,7 +184,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
-n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(7))
+n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(7))
assert.Equal(t, 0, int(n))
require.DeepEqual(t, params.BeaconConfig().ZeroHash, root)
})
@@ -201,7 +200,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
-n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
+n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
assert.Equal(t, 1, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -237,7 +236,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
Deposit: &ethpb.Deposit{},
},
}
-n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(9))
+n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(9))
assert.Equal(t, 3, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -288,10 +287,10 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
}
-dc.InsertDepositContainers(context.Background(), ctrs)
+dc.InsertDepositContainers(t.Context(), ctrs)

pk1 := bytesutil.PadTo([]byte("pk1"), 48)
-dep, blkNum := dc.DepositByPubkey(context.Background(), pk1)
+dep, blkNum := dc.DepositByPubkey(t.Context(), pk1)

if dep == nil || !bytes.Equal(dep.Data.PublicKey, pk1) {
t.Error("Returned wrong deposit")
@@ -303,7 +302,7 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
func TestInsertDepositContainers_NotNil(t *testing.T) {
dc, err := New()
require.NoError(t, err)
-dc.InsertDepositContainers(context.Background(), nil)
+dc.InsertDepositContainers(t.Context(), nil)
assert.DeepEqual(t, []*ethpb.DepositContainer{}, dc.deposits)
}

@@ -359,10 +358,10 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
err = dc.finalizedDeposits.depositTree.pushLeaf(root)
require.NoError(t, err)
}
-err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)

-cachedDeposits, err := dc.FinalizedDeposits(context.Background())
+cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex())
@@ -425,15 +424,15 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
err = dc.finalizedDeposits.Deposits().Insert(root[:], 0)
require.NoError(t, err)
}
-err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)

-err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)

dc.deposits = append(dc.deposits, []*ethpb.DepositContainer{newFinalizedDeposit}...)

-cachedDeposits, err := dc.FinalizedDeposits(context.Background())
+cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
require.Equal(t, int64(1), cachedDeposits.MerkleTrieIndex())
@@ -459,10 +458,10 @@ func TestFinalizedDeposits_HandleZeroDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)

-err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)

-cachedDeposits, err := dc.FinalizedDeposits(context.Background())
+cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(-1), cachedDeposits.MerkleTrieIndex())
@@ -509,10 +508,10 @@ func TestFinalizedDeposits_HandleSmallerThanExpectedDeposits(t *testing.T) {
}
dc.deposits = finalizedDeposits

-err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
require.NoError(t, err)

-cachedDeposits, err := dc.FinalizedDeposits(context.Background())
+cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex())
@@ -592,14 +591,14 @@ func TestFinalizedDeposits_HandleLowerEth1DepositIndex(t *testing.T) {
}
dc.deposits = finalizedDeposits

-err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
require.NoError(t, err)

// Reinsert finalized deposits with a lower index.
-err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)

-cachedDeposits, err := dc.FinalizedDeposits(context.Background())
+cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(5), cachedDeposits.MerkleTrieIndex())
@@ -670,10 +669,10 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
Index: 3,
DepositRoot: rootCreator('D'),
})
-err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
+err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)

-deps := dc.NonFinalizedDeposits(context.Background(), 1, nil)
+deps := dc.NonFinalizedDeposits(t.Context(), 1, nil)
assert.Equal(t, 2, len(deps))
}
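Read together, these tests outline the deposit cache flow: InsertFinalizedDeposits advances the finalized trie index, FinalizedDeposits exposes the cached result, and NonFinalizedDeposits returns everything past that index. An illustrative helper assembled from exactly those calls, written as if inside the depositsnapshot package (exampleFlow itself is not part of the change set):

```go
package depositsnapshot

import "context"

// exampleFlow is a sketch: finalize deposits up to eth1 deposit index 1,
// then count whatever the cache still considers pending.
func exampleFlow(ctx context.Context, dc *Cache) (int, error) {
	if err := dc.InsertFinalizedDeposits(ctx, 1, [32]byte{}, 0); err != nil {
		return 0, err
	}
	fd, err := dc.FinalizedDeposits(ctx)
	if err != nil {
		return 0, err
	}
	// Everything after the finalized trie index is still non-finalized.
	pending := dc.NonFinalizedDeposits(ctx, fd.MerkleTrieIndex(), nil)
	return len(pending), nil
}
```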
|
||||
|
||||
@@ -681,7 +680,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits_Nil(t *testing.T) {
|
||||
dc, err := New()
|
||||
require.NoError(t, err)
|
||||
|
||||
deps := dc.NonFinalizedDeposits(context.Background(), 0, nil)
|
||||
deps := dc.NonFinalizedDeposits(t.Context(), 0, nil)
|
||||
assert.Equal(t, 0, len(deps))
|
||||
}
|
||||
|
||||
@@ -740,10 +739,10 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
|
||||
Index: 3,
|
||||
DepositRoot: rootCreator('D'),
|
||||
})
|
||||
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
|
||||
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)

deps := dc.NonFinalizedDeposits(context.Background(), 1, big.NewInt(10))
deps := dc.NonFinalizedDeposits(t.Context(), 1, big.NewInt(10))
assert.Equal(t, 1, len(deps))
}

@@ -785,21 +784,21 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
assert.NoError(t, err)

// Perform this in a nonsensical ordering
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 3, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 3, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 4, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 4, [32]byte{}, 0)
require.NoError(t, err)

// Mimic finalized deposit trie fetch.
fd, err := dc.FinalizedDeposits(context.Background())
fd, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)
deps := dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), nil)
insertIndex := fd.MerkleTrieIndex() + 1

for _, dep := range deps {
@@ -810,24 +809,24 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
}
insertIndex++
}
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 6, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 6, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 9, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 9, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 12, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 12, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 15, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 15, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 15, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 15, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 14, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 14, [32]byte{}, 0)
require.NoError(t, err)

fd, err = dc.FinalizedDeposits(context.Background())
fd, err = dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
deps = dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)
deps = dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), nil)
insertIndex = fd.MerkleTrieIndex() + 1

for _, dep := range dc.deposits {
@@ -888,9 +887,9 @@ func TestMin(t *testing.T) {
}
dc.deposits = finalizedDeposits

fd, err := dc.FinalizedDeposits(context.Background())
fd, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), big.NewInt(16))
deps := dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), big.NewInt(16))
insertIndex := fd.MerkleTrieIndex() + 1
for _, dep := range deps {
depHash, err := dep.Data.HashTreeRoot()
@@ -908,28 +907,28 @@ func TestDepositMap_WorksCorrectly(t *testing.T) {
require.NoError(t, err)

pk0 := bytesutil.PadTo([]byte("pk0"), 48)
dep, _ := dc.DepositByPubkey(context.Background(), pk0)
dep, _ := dc.DepositByPubkey(t.Context(), pk0)
var nilDep *ethpb.Deposit
assert.DeepEqual(t, nilDep, dep)

dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 1000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 0, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), dep, 1000, 0, [32]byte{}))

dep, _ = dc.DepositByPubkey(context.Background(), pk0)
dep, _ = dc.DepositByPubkey(t.Context(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)

dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 10000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 1, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), dep, 1000, 1, [32]byte{}))

// Make sure we have the same deposit returned over here.
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
dep, _ = dc.DepositByPubkey(t.Context(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)

// Make sure another key doesn't work.
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, _ = dc.DepositByPubkey(context.Background(), pk1)
dep, _ = dc.DepositByPubkey(t.Context(), pk1)
assert.DeepEqual(t, nilDep, dep)
}

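The pattern running through every hunk above is the same one-line migration: `context.Background()` becomes `t.Context()`. Go 1.24 added `(*testing.T).Context()`, which returns a per-test context that is canceled just before functions registered with `t.Cleanup` run, so context-aware code is exercised against a context that actually ends. A minimal sketch of the pattern, with a hypothetical `insertOne` helper standing in for the real cache methods:

```go
package example

import (
	"context"
	"testing"
)

// insertOne is a hypothetical stand-in for any production function that
// accepts a context, such as the deposit cache methods above.
func insertOne(ctx context.Context, index int64) error {
	return ctx.Err() // nil while the test is still running
}

func TestInsertOne(t *testing.T) {
	// Go 1.24: the returned context is canceled just before t.Cleanup
	// callbacks run, so nothing here can silently outlive the test.
	ctx := t.Context()

	if err := insertOne(ctx, 1); err != nil {
		t.Fatal(err)
	}
}
```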
@@ -1,7 +1,6 @@
package depositsnapshot

import (
"context"
"math/big"
"testing"

@@ -13,14 +12,14 @@ var _ PendingDepositsFetcher = (*Cache)(nil)

func TestInsertPendingDeposit_OK(t *testing.T) {
dc := Cache{}
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, 111, 100, [32]byte{})
dc.InsertPendingDeposit(t.Context(), &ethpb.Deposit{}, 111, 100, [32]byte{})

assert.Equal(t, 1, len(dc.pendingDeposits), "deposit not inserted")
}

func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
dc := Cache{}
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
dc.InsertPendingDeposit(t.Context(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})

assert.Equal(t, 0, len(dc.pendingDeposits))
}
@@ -34,13 +33,13 @@ func TestPendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
}

deposits := dc.PendingDeposits(context.Background(), big.NewInt(4))
deposits := dc.PendingDeposits(t.Context(), big.NewInt(4))
expected := []*ethpb.Deposit{
{Proof: [][]byte{[]byte("A")}},
{Proof: [][]byte{[]byte("B")}},
}
assert.DeepSSZEqual(t, expected, deposits)

all := dc.PendingDeposits(context.Background(), nil)
all := dc.PendingDeposits(t.Context(), nil)
assert.Equal(t, len(dc.pendingDeposits), len(all), "PendingDeposits(ctx, nil) did not return all deposits")
}

@@ -1,7 +1,6 @@
package depositsnapshot

import (
"context"
"testing"

"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -22,7 +21,7 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}

dc.PrunePendingDeposits(context.Background(), 0)
dc.PrunePendingDeposits(t.Context(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
@@ -46,7 +45,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}

dc.PrunePendingDeposits(context.Background(), 6)
dc.PrunePendingDeposits(t.Context(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
@@ -65,7 +64,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}

dc.PrunePendingDeposits(context.Background(), 10)
dc.PrunePendingDeposits(t.Context(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
@@ -86,7 +85,7 @@ func TestPruneAllPendingDeposits(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}

dc.PruneAllPendingDeposits(context.Background())
dc.PruneAllPendingDeposits(t.Context())
expected := []*ethpb.DepositContainer{}

assert.DeepEqual(t, expected, dc.pendingDeposits)
@@ -128,10 +127,10 @@ func TestPruneProofs_Ok(t *testing.T) {
}

for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}

require.NoError(t, dc.PruneProofs(context.Background(), 1))
require.NoError(t, dc.PruneProofs(t.Context(), 1))

assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -173,10 +172,10 @@ func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
}

for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}

require.NoError(t, dc.PruneProofs(context.Background(), 2))
require.NoError(t, dc.PruneProofs(t.Context(), 2))

assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
@@ -217,10 +216,10 @@ func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
}

for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}

require.NoError(t, dc.PruneProofs(context.Background(), 99))
require.NoError(t, dc.PruneProofs(t.Context(), 99))

assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -264,10 +263,10 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
}

for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}

require.NoError(t, dc.PruneProofs(context.Background(), 4))
require.NoError(t, dc.PruneProofs(t.Context(), 4))

assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -311,10 +310,10 @@ func TestPruneAllProofs(t *testing.T) {
}

for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}

dc.PruneAllProofs(context.Background())
dc.PruneAllProofs(t.Context())

assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)

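One behavioral difference to keep in mind while reviewing these conversions: unlike `context.Background()`, the context from `t.Context()` is already canceled by the time `t.Cleanup` callbacks run. Code that stashes the context for use after the test body returns will now observe `context.Canceled`, which is usually the point of the migration. A small sketch of that ordering:

```go
package example

import "testing"

func TestContextIsCanceledBeforeCleanup(t *testing.T) {
	ctx := t.Context()
	t.Cleanup(func() {
		// The testing package cancels ctx just before cleanups run.
		if ctx.Err() == nil {
			t.Error("expected the test context to be canceled by cleanup time")
		}
	})
	if ctx.Err() != nil {
		t.Fatal("the context must still be live inside the test body")
	}
}
```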
5 beacon-chain/cache/registration_test.go vendored
@@ -1,7 +1,6 @@
package cache

import (
"context"
"testing"
"time"

@@ -24,7 +23,7 @@ func TestRegistrationCache(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(context.Background(), m)
cache.UpdateIndexToRegisteredMap(t.Context(), m)
reg, err := cache.RegistrationByIndex(validatorIndex)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))
@@ -38,7 +37,7 @@ func TestRegistrationCache(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(context.Background(), m)
cache.UpdateIndexToRegisteredMap(t.Context(), m)
reg, err := cache.RegistrationByIndex(validatorIndex2)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))

5 beacon-chain/cache/skip_slot_cache_test.go vendored
@@ -1,7 +1,6 @@
package cache_test

import (
"context"
"sync"
"testing"

@@ -14,7 +13,7 @@ import (
)

func TestSkipSlotCache_RoundTrip(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
c := cache.NewSkipSlotCache()

r := [32]byte{'a'}
@@ -38,7 +37,7 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
}

func TestSkipSlotCache_DisabledAndEnabled(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
c := cache.NewSkipSlotCache()

r := [32]byte{'a'}

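The `@@ -1,7 +1,6 @@` hunks at the top of each file are the knock-on effect of the migration: once no call site builds its own root context, the `"context"` import disappears, which is exactly the one-line shrink those hunk headers record. A sketch of a post-migration test file whose import block no longer mentions `context`:

```go
package example

// After the migration only the packages still referenced remain;
// "context" is gone because t.Context() hangs off *testing.T.
import (
	"testing"
	"time"
)

func TestContextStaysLiveDuringTheTest(t *testing.T) {
	ctx := t.Context()
	select {
	case <-ctx.Done():
		t.Fatal("test context should not be canceled while the test runs")
	case <-time.After(10 * time.Millisecond):
		// Normal path: cancellation only happens after the test body returns.
	}
}
```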
@@ -81,6 +81,7 @@ go_test(
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",

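Prysm builds with Bazel, so each new Go import also has to appear in the `deps` of the matching `go_test` target; that is all this `BUILD.bazel` hunk does. A trimmed, hypothetical sketch of such a target after the new dependency is added (the real target lists many more sources and deps):

```starlark
go_test(
    name = "go_default_test",
    srcs = ["attestation_test.go"],  # abridged for illustration
    deps = [
        "//testing/assert:go_default_library",
        "//testing/fuzz:go_default_library",  # newly needed for fuzz.FreeMemory
        "//testing/require:go_default_library",
    ],
)
```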
@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"fmt"
"testing"

@@ -19,9 +18,10 @@ import (
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/go-bitfield"
)

@@ -50,7 +50,7 @@ func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}

@@ -81,7 +81,7 @@ func TestProcessAttestations_NeitherCurrentNorPrevEpoch(t *testing.T) {
)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}

@@ -110,13 +110,13 @@ func TestProcessAttestations_CurrentEpochFFGDataMismatches(t *testing.T) {
want := "source check point not equal to current justified checkpoint"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
b.Block.Body.Attestations[0].Data.Source.Epoch = time.CurrentEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Source.Root = []byte{}
wsb, err = blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}

@@ -151,14 +151,14 @@ func TestProcessAttestations_PrevEpochFFGDataMismatches(t *testing.T) {
want := "source check point not equal to previous justified checkpoint"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
b.Block.Body.Attestations[0].Data.Source.Epoch = time.PrevEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Target.Epoch = time.PrevEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Source.Root = []byte{}
wsb, err = blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}

@@ -190,7 +190,7 @@ func TestProcessAttestations_InvalidAggregationBitsLength(t *testing.T) {
expected := "failed to verify aggregation bitfield: wanted participants bitfield length 3, got: 4"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, expected, err)
}

@@ -214,7 +214,7 @@ func TestProcessAttestations_OK(t *testing.T) {
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))

committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -235,7 +235,7 @@ func TestProcessAttestations_OK(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.NoError(t, err)
})
t.Run("post-Electra", func(t *testing.T) {
@@ -260,7 +260,7 @@ func TestProcessAttestations_OK(t *testing.T) {
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))

committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -281,7 +281,7 @@ func TestProcessAttestations_OK(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.NoError(t, err)
})
}
@@ -313,13 +313,13 @@ func TestProcessAttestationNoVerify_SourceTargetHead(t *testing.T) {

b, err := helpers.TotalActiveBalance(beaconState)
require.NoError(t, err)
beaconState, err = altair.ProcessAttestationNoVerifySignature(context.Background(), beaconState, att, b)
beaconState, err = altair.ProcessAttestationNoVerifySignature(t.Context(), beaconState, att, b)
require.NoError(t, err)

p, err := beaconState.CurrentEpochParticipation()
require.NoError(t, err)

committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
indices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -458,7 +458,7 @@ func TestValidatorFlag_Add_ExceedsLength(t *testing.T) {
}

func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
st := &ethpb.BeaconStateAltair{}
b := &ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{}}
for i := 0; i < 10000; i++ {
@@ -474,10 +474,11 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := altair.ProcessAttestationsNoVerifySignature(context.Background(), s, wsb.Block())
r, err := altair.ProcessAttestationsNoVerifySignature(t.Context(), s, wsb.Block())
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, s, b)
}
fuzz.FreeMemory(i)
}
}

@@ -555,10 +556,10 @@ func TestSetParticipationAndRewardProposer(t *testing.T) {

b, err := helpers.TotalActiveBalance(beaconState)
require.NoError(t, err)
st, err := altair.SetParticipationAndRewardProposer(context.Background(), beaconState, test.epoch, test.indices, test.participatedFlags, b)
st, err := altair.SetParticipationAndRewardProposer(t.Context(), beaconState, test.epoch, test.indices, test.participatedFlags, b)
require.NoError(t, err)

i, err := helpers.BeaconProposerIndex(context.Background(), st)
i, err := helpers.BeaconProposerIndex(t.Context(), st)
require.NoError(t, err)
b, err = beaconState.BalanceAtIndex(i)
require.NoError(t, err)
@@ -661,8 +662,8 @@ func TestRewardProposer(t *testing.T) {
{rewardNumerator: 1000000000000, want: 34234377253},
}
for _, test := range tests {
require.NoError(t, altair.RewardProposer(context.Background(), beaconState, test.rewardNumerator))
i, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, altair.RewardProposer(t.Context(), beaconState, test.rewardNumerator))
i, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
b, err := beaconState.BalanceAtIndex(i)
require.NoError(t, err)

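The other change threaded through this file is an import rename. google/gofuzz's package name is `fuzz`, which collides with Prysm's own `//testing/fuzz` helper package that these tests now also import; aliasing the third-party package to `gofuzz` frees the name. A minimal sketch of the disambiguated usage (the prysm import is commented out because this sketch only exercises gofuzz):

```go
package example

import (
	"testing"

	// fuzz "github.com/OffchainLabs/prysm/v6/testing/fuzz" // the name "fuzz" now refers to this
	gofuzz "github.com/google/gofuzz" // aliased so it no longer claims "fuzz"
)

func TestFuzzRoundTrip(t *testing.T) {
	fuzzer := gofuzz.NewWithSeed(0) // deterministic across runs
	var s struct{ A, B uint64 }
	for i := 0; i < 100; i++ {
		fuzzer.Fuzz(&s) // fill s with pseudo-random data
		// In the real tests, fuzz.FreeMemory(i) follows each iteration.
	}
}
```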
@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"math"
"testing"

@@ -26,7 +25,7 @@ import (
func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))

@@ -34,7 +33,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xff
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -55,7 +54,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
}

var reward uint64
beaconState, reward, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
beaconState, reward, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
assert.Equal(t, uint64(72192), reward)

@@ -77,7 +76,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
require.Equal(t, true, balances[indices[0]] > balances[nonSyncIndex])

// Proposer should be more profitable than rest of the sync committee
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
require.Equal(t, true, balances[proposerIndex] > balances[indices[0]])

@@ -102,7 +101,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))

@@ -110,7 +109,7 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xAA
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -130,14 +129,14 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}

_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.ErrorContains(t, "invalid sync committee signature", err)
}

func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))

@@ -145,7 +144,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xAA
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -167,7 +166,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}

_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
}

@@ -175,7 +174,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
committeeKeys := committee.Pubkeys
committeeKeys[1] = committeeKeys[0]
@@ -192,7 +191,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
SyncCommitteeBits: syncBits,
}
require.NoError(t, beaconState.UpdateBalancesAtIndex(idx, 0))
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
require.Equal(t, 511, len(votedKeys))
require.DeepEqual(t, committeeKeys[0], votedKeys[0].Marshal())
@@ -203,7 +202,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))

@@ -215,7 +214,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
SyncCommitteeBits: syncBits,
}

st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
votedMap := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
for _, key := range votedKeys {
@@ -228,7 +227,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
committeeKeys := currentSyncCommittee.Pubkeys
balances := st.Balances()

proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)

for i := 0; i < len(syncBits); i++ {
@@ -254,7 +253,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
func Test_VerifySyncCommitteeSig(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))

@@ -262,7 +261,7 @@ func Test_VerifySyncCommitteeSig(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xff
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)

@@ -1,21 +1,21 @@
package altair_test

import (
"context"
"testing"

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
)

func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposits := make([]*ethpb.Deposit, 100)
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
@@ -27,14 +27,15 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
fuzz.FreeMemory(i)
}
}

func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
ctx := t.Context()

for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
@@ -45,14 +46,15 @@ func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}

func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
ctx := t.Context()

for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
@@ -63,11 +65,12 @@ func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}

func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}

@@ -80,11 +83,12 @@ func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}

func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}

@@ -97,5 +101,6 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}

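Each of these 10,000-iteration fuzz loops also gains a `fuzz.FreeMemory(i)` call. The implementation lives in Prysm's `testing/fuzz` package and is not shown in this diff; a plausible shape, offered purely as an assumption, is a helper that forces a garbage-collection cycle every so many iterations so repeatedly fuzzed states don't pile up:

```go
package fuzzutil // hypothetical name; the real package is prysm's testing/fuzz

import "runtime"

// freeEvery is a guess at a sensible interval between forced GC cycles.
const freeEvery = 1000

// FreeMemory sketches what such a helper could do: cap the memory
// high-water mark of long fuzz loops by collecting periodically. The
// actual prysm implementation may differ.
func FreeMemory(iteration int) {
	if iteration%freeEvery == 0 {
		runtime.GC()
	}
}
```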
@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"testing"

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -42,7 +41,7 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
require.NoError(t, err, "Expected block deposits to process correctly")
require.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
@@ -70,7 +69,7 @@ func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0]})
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0]})
require.NoError(t, err, "Expected block deposits to process correctly")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
@@ -127,7 +126,7 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{deposit})
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
@@ -256,7 +255,7 @@ func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
newState, err := altair.ProcessPreGenesisDeposits(t.Context(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")

_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
@@ -370,6 +369,6 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
_, err = altair.ProcessDeposits(t.Context(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}

@@ -1,7 +1,6 @@
package altair

import (
"context"
"math"
"testing"

@@ -32,7 +31,7 @@ func TestInitializeEpochValidators_Ok(t *testing.T) {
InactivityScores: []uint64{0, 1, 2, 3},
})
require.NoError(t, err)
v, b, err := InitializePrecomputeValidators(context.Background(), s)
v, b, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
assert.DeepEqual(t, &precompute.Validator{
IsSlashed: true,
@@ -74,7 +73,7 @@ func TestInitializeEpochValidators_Overflow(t *testing.T) {
InactivityScores: []uint64{0, 1},
})
require.NoError(t, err)
_, _, err = InitializePrecomputeValidators(context.Background(), s)
_, _, err = InitializePrecomputeValidators(t.Context(), s)
require.ErrorContains(t, "could not read every validator: addition overflows", err)
}

@@ -84,16 +83,16 @@ func TestInitializeEpochValidators_BadState(t *testing.T) {
InactivityScores: []uint64{},
})
require.NoError(t, err)
_, _, err = InitializePrecomputeValidators(context.Background(), s)
_, _, err = InitializePrecomputeValidators(t.Context(), s)
require.ErrorContains(t, "num of validators is different than num of inactivity scores", err)
}

func TestProcessEpochParticipation(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: true,
@@ -169,9 +168,9 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
InactivityScores: []uint64{0, 0, 0},
})
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), st)
validators, balance, err := InitializePrecomputeValidators(t.Context(), st)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), st, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), st, balance, validators)
require.NoError(t, err)
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: false,
@@ -209,9 +208,9 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
func TestAttestationsDelta(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
@@ -247,9 +246,9 @@ func TestAttestationsDelta(t *testing.T) {
func TestAttestationsDeltaBellatrix(t *testing.T) {
s, err := testStateBellatrix()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
@@ -285,9 +284,9 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
require.NoError(t, err)
@@ -324,9 +323,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_InactivityLeak(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
sCopy := s.Copy()
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
@@ -352,11 +351,11 @@ func TestProcessInactivityScores_CanProcessInactivityLeak(t *testing.T) {
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().MinEpochsToInactivityPenalty+2)))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -373,11 +372,11 @@ func TestProcessInactivityScores_GenesisEpoch(t *testing.T) {
defaultScore := uint64(10)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
require.NoError(t, s.SetSlot(params.BeaconConfig().GenesisSlot))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -392,11 +391,11 @@ func TestProcessInactivityScores_CanProcessNonInactivityLeak(t *testing.T) {
require.NoError(t, err)
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -410,9 +409,9 @@ func TestProcessInactivityScores_CanProcessNonInactivityLeak(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_GenesisEpoch(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
require.NoError(t, s.SetSlot(0))
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
@@ -429,9 +428,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_GenesisEpoch(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_BadState(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
_, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
_, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
_, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, []*precompute.Validator{})
require.ErrorContains(t, "validator registries not the same length as state's validator registries", err)
@@ -442,7 +441,7 @@ func TestProcessInactivityScores_NonEligibleValidator(t *testing.T) {
require.NoError(t, err)
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)

// v0 is eligible (not active previous epoch, slashed and not withdrawable)
@@ -463,9 +462,9 @@ func TestProcessInactivityScores_NonEligibleValidator(t *testing.T) {
// v3 is eligible (active previous epoch)
validators[3].IsActivePrevEpoch = true

validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)

@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"fmt"
"math"
"testing"
@@ -29,7 +28,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
BodyRoot: bytesutil.PadTo([]byte{'c'}, 32),
}
require.NoError(t, s.SetLatestBlockHeader(h))
postState, err := altair.ProcessSyncCommitteeUpdates(context.Background(), s)
postState, err := altair.ProcessSyncCommitteeUpdates(t.Context(), s)
require.NoError(t, err)
current, err := postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -38,7 +37,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
require.DeepEqual(t, current, next)

require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch))
postState, err = altair.ProcessSyncCommitteeUpdates(context.Background(), s)
postState, err = altair.ProcessSyncCommitteeUpdates(t.Context(), s)
require.NoError(t, err)
c, err := postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -48,7 +47,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
require.DeepEqual(t, next, n)

require.NoError(t, s.SetSlot(primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch-1))
postState, err = altair.ProcessSyncCommitteeUpdates(context.Background(), s)
postState, err = altair.ProcessSyncCommitteeUpdates(t.Context(), s)
require.NoError(t, err)
c, err = postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -61,7 +60,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
// Test boundary condition.
slot := params.BeaconConfig().SlotsPerEpoch * primitives.Slot(time.CurrentEpoch(s)+params.BeaconConfig().EpochsPerSyncCommitteePeriod)
require.NoError(t, s.SetSlot(slot))
boundaryCommittee, err := altair.NextSyncCommittee(context.Background(), s)
boundaryCommittee, err := altair.NextSyncCommittee(t.Context(), s)
require.NoError(t, err)
require.DeepNotEqual(t, boundaryCommittee, n)
}

@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"testing"
"time"

@@ -97,7 +96,7 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
t.Run(version.String(v), func(t *testing.T) {
helpers.ClearCache()
st := getState(t, tt.args.validatorCount, v)
got, err := altair.NextSyncCommitteeIndices(context.Background(), st)
got, err := altair.NextSyncCommitteeIndices(t.Context(), st)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -129,18 +128,18 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
}

st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
got1, err := altair.NextSyncCommitteeIndices(context.Background(), st)
got1, err := altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch))
got2, err := altair.NextSyncCommitteeIndices(context.Background(), st)
got2, err := altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
got2, err = altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
got2, err = altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
}
@@ -206,7 +205,7 @@ func TestSyncCommittee_CanGet(t *testing.T) {
if !tt.wantErr {
require.NoError(t, tt.args.state.SetSlot(primitives.Slot(tt.args.epoch)*params.BeaconConfig().SlotsPerEpoch))
}
got, err := altair.NextSyncCommittee(context.Background(), tt.args.state)
got, err := altair.NextSyncCommittee(t.Context(), tt.args.state)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -270,7 +269,7 @@ func TestValidateNilSyncContribution(t *testing.T) {
func TestSyncSubCommitteePubkeys_CanGet(t *testing.T) {
helpers.ClearCache()
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
com, err := altair.NextSyncCommittee(context.Background(), st)
com, err := altair.NextSyncCommittee(t.Context(), st)
require.NoError(t, err)
sub, err := altair.SyncSubCommitteePubkeys(com, 0)
require.NoError(t, err)

@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"testing"

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -13,7 +12,7 @@ import (
func TestProcessEpoch_CanProcess(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(context.Background(), st)
err := altair.ProcessEpoch(t.Context(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

@@ -45,7 +44,7 @@ func TestProcessEpoch_CanProcess(t *testing.T) {
func TestProcessEpoch_CanProcessBellatrix(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(context.Background(), st)
err := altair.ProcessEpoch(t.Context(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

@@ -1,7 +1,6 @@
package altair_test

import (
"context"
"testing"

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -17,7 +16,7 @@ import (
)

func TestTranslateParticipation(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
s, _ := util.DeterministicGenesisStateAltair(t, 64)
require.NoError(t, s.SetSlot(s.Slot()+params.BeaconConfig().MinAttestationInclusionDelay))

@@ -73,7 +72,7 @@ func TestTranslateParticipation(t *testing.T) {
func TestUpgradeToAltair(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
preForkState := st.Copy()
aState, err := altair.UpgradeToAltair(context.Background(), st)
aState, err := altair.UpgradeToAltair(t.Context(), st)
require.NoError(t, err)

require.Equal(t, preForkState.GenesisTime(), aState.GenesisTime())

@@ -105,6 +105,7 @@ go_test(
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",

@@ -178,7 +178,7 @@ func VerifyAttestationNoVerifySignature(
}
}

return attestation.IsValidAttestationIndices(ctx, indexedAtt)
return attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot)
}

// ProcessAttestationNoVerifySignature processes the attestation without verifying the attestation signature. This
@@ -243,7 +243,7 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBea
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()

if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
return err
}
domain, err := signing.Domain(

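This hunk is the one functional (non-test) change in the range: `attestation.IsValidAttestationIndices` now receives the two spec limits from its callers instead of resolving them internally, so `params.BeaconConfig().MaxValidatorsPerCommittee` and `MaxCommitteesPerSlot` appear at every call site. A sketch of what a signature with explicit limits looks like; the parameter names and the bound computed from them are assumptions for illustration, not the exact prysm code:

```go
package attestation // hypothetical placement for this sketch

import (
	"context"
	"errors"
)

// indexedAtt is a minimal stand-in for prysm's indexed attestation type.
type indexedAtt interface {
	GetAttestingIndices() []uint64
}

// isValidAttestationIndices shows the refactored shape: the limits are
// arguments, so tests can exercise the bounds without touching global config.
func isValidAttestationIndices(ctx context.Context, att indexedAtt, maxValidatorsPerCommittee, maxCommitteesPerSlot uint64) error {
	_ = ctx // the real function threads ctx into tracing
	indices := att.GetAttestingIndices()
	if len(indices) == 0 {
		return errors.New("expected non-empty attesting indices")
	}
	// Assumed bound: post-Electra aggregates may span every committee in a
	// slot, so the cap is the product of the two limits.
	if uint64(len(indices)) > maxValidatorsPerCommittee*maxCommitteesPerSlot {
		return errors.New("validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE")
	}
	return nil
}
```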
@@ -41,7 +41,7 @@ func TestProcessAttestationNoVerifySignature_BeaconFuzzIssue78(t *testing.T) {
t.Fatal(err)
}

ctx := context.Background()
ctx := t.Context()
_, err = blocks.ProcessAttestationNoVerifySignature(ctx, st, att)
require.ErrorContains(t, "committee index 1 >= committee count 1", err)
}

@@ -42,7 +42,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
require.NoError(t, beaconState.AppendCurrentEpochAttestations(&ethpb.PendingAttestation{}))

committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices1, err := attestation.AttestingIndices(att1, committee)
require.NoError(t, err)
@@ -64,7 +64,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
AggregationBits: aggBits2,
}

committee, err = helpers.BeaconCommitteeFromState(context.Background(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
committee, err = helpers.BeaconCommitteeFromState(t.Context(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices2, err := attestation.AttestingIndices(att2, committee)
require.NoError(t, err)
@@ -137,7 +137,7 @@ func TestProcessAttestationsNoVerify_OlderThanSlotsPerEpoch(t *testing.T) {
},
AggregationBits: aggBits,
}
ctx := context.Background()
ctx := t.Context()

t.Run("attestation older than slots per epoch", func(t *testing.T) {
beaconState, _ := util.DeterministicGenesisState(t, 100)
@@ -360,9 +360,9 @@ func TestConvertToIndexed_OK(t *testing.T) {
Signature: att.Signature,
}

committee, err := helpers.BeaconCommitteeFromState(context.Background(), state, att.Data.Slot, att.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), state, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
ia, err := attestation.ConvertToIndexed(context.Background(), att, committee)
ia, err := attestation.ConvertToIndexed(t.Context(), att, committee)
require.NoError(t, err)
assert.DeepEqual(t, wanted, ia, "Convert attestation to indexed attestation didn't result as wanted")
}
@@ -448,7 +448,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {

tt.attestation.Signature = marshalledSig

err = blocks.VerifyIndexedAttestation(context.Background(), state, tt.attestation)
err = blocks.VerifyIndexedAttestation(t.Context(), state, tt.attestation)
assert.NoError(t, err, "Failed to verify indexed attestation")
}
}
@@ -471,7 +471,7 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
want := "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE"
st, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
err = blocks.VerifyIndexedAttestation(context.Background(), st, indexedAtt1)
err = blocks.VerifyIndexedAttestation(t.Context(), st, indexedAtt1)
assert.ErrorContains(t, want, err)
}

@@ -493,7 +493,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
}

want := "nil or missing indexed attestation data"
_, err := blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
_, err := blocks.AttestationSignatureBatch(t.Context(), beaconState, atts)
assert.ErrorContains(t, want, err)

atts = []ethpb.Att{}
@@ -514,7 +514,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
}

want = "expected non-empty attesting indices"
_, err = blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
_, err = blocks.AttestationSignatureBatch(t.Context(), beaconState, atts)
|
||||
assert.ErrorContains(t, want, err)
|
||||
}
|
||||
|
||||
@@ -542,7 +542,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
|
||||
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
|
||||
}))
|
||||
|
||||
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
|
||||
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
|
||||
require.NoError(t, err)
|
||||
att1 := util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
|
||||
@@ -561,7 +561,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
|
||||
}
|
||||
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
|
||||
|
||||
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
|
||||
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
|
||||
require.NoError(t, err)
|
||||
att2 := util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
|
||||
@@ -583,7 +583,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
ctx := t.Context()
|
||||
numOfValidators := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(4))
|
||||
validators := make([]*ethpb.Validator, numOfValidators)
|
||||
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
|
||||
@@ -602,7 +602,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
|
||||
require.NoError(t, st.SetSlot(5))
|
||||
require.NoError(t, st.SetValidators(validators))
|
||||
|
||||
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
|
||||
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
|
||||
require.NoError(t, err)
|
||||
att1 := util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
|
||||
@@ -621,7 +621,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
|
||||
}
|
||||
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
|
||||
|
||||
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
|
||||
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 1 /*committeeIndex*/)
|
||||
require.NoError(t, err)
|
||||
att2 := util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
|
||||
@@ -651,7 +651,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
|
||||
require.NoError(t, st.SetSlot(5))
|
||||
require.NoError(t, st.SetValidators(validators))
|
||||
|
||||
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
|
||||
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
|
||||
require.NoError(t, err)
|
||||
commBits1 := primitives.NewAttestationCommitteeBits()
|
||||
commBits1.SetBitAt(0, true)
|
||||
@@ -673,7 +673,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
|
||||
}
|
||||
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
|
||||
|
||||
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
|
||||
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 1 /*committeeIndex*/)
|
||||
require.NoError(t, err)
|
||||
commBits2 := primitives.NewAttestationCommitteeBits()
|
||||
commBits2.SetBitAt(1, true)
|
||||
@@ -702,7 +702,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
|
||||
}
|
||||
|
||||
func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
ctx := t.Context()
|
||||
numOfValidators := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(4))
|
||||
validators := make([]*ethpb.Validator, numOfValidators)
|
||||
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
|
||||
|
||||
@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -63,7 +62,7 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
 for i, s := range b.Block.Body.AttesterSlashings {
 ss[i] = s
 }
-_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
+_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
 assert.ErrorContains(t, "attestations are not slashable", err)
 }

@@ -102,7 +101,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
 for i, s := range b.Block.Body.AttesterSlashings {
 ss[i] = s
 }
-_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
+_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
 assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
 }

@@ -244,7 +243,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
 currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
 require.NoError(t, tc.st.SetSlot(currentSlot))

-newState, err := blocks.ProcessAttesterSlashings(context.Background(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
+newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
 require.NoError(t, err)
 newRegistry := newState.Validators()

@@ -1,7 +1,6 @@
 package blocks

 import (
-"context"
 "testing"

 v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
@@ -11,13 +10,14 @@ import (
 "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
 "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
 ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+"github.com/OffchainLabs/prysm/v6/testing/fuzz"
 "github.com/OffchainLabs/prysm/v6/testing/require"
-fuzz "github.com/google/gofuzz"
+gofuzz "github.com/google/gofuzz"
 )

 func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
-ctx := context.Background()
+fuzzer := gofuzz.NewWithSeed(0)
+ctx := t.Context()
 state := &ethpb.BeaconState{}
 att := &ethpb.Attestation{}

@@ -28,11 +28,12 @@ func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
 require.NoError(t, err)
 _, err = ProcessAttestationNoVerifySignature(ctx, s, att)
 _ = err
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessBlockHeader_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 block := &ethpb.SignedBeaconBlock{}

@@ -47,13 +48,14 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
 }
 wsb, err := blocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-_, err = ProcessBlockHeader(context.Background(), s, wsb)
+_, err = ProcessBlockHeader(t.Context(), s, wsb)
 _ = err
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 var ba []byte
 var pubkey [fieldparams.BLSPubkeyLength]byte
 var sig [96]byte
@@ -77,14 +79,14 @@ ...
 }

 func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 e := &ethpb.Eth1Data{}
 state, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
 require.NoError(t, err)
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(e)
-s, err := ProcessEth1DataInBlock(context.Background(), state, e)
+s, err := ProcessEth1DataInBlock(t.Context(), state, e)
 if err != nil && s != nil {
 t.Fatalf("state should be nil on err. found: %v on error: %v for state: %v and eth1data: %v", s, err, state, e)
 }
@@ -92,7 +94,7 @@ ...
 }

 func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 eth1data := &ethpb.Eth1Data{}
 eth1data2 := &ethpb.Eth1Data{}

@@ -105,7 +107,7 @@ ...
 }

 func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 eth1data := &ethpb.Eth1Data{}
 var stateVotes []*ethpb.Eth1Data
 for i := 0; i < 100000; i++ {
@@ -122,7 +124,7 @@ ...
 }

 func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 block := &ethpb.BeaconBlock{}

@@ -131,13 +133,14 @@ ...
 fuzzer.Fuzz(block)
 s, err := state_native.InitializeFromProtoUnsafePhase0(state)
 require.NoError(t, err)
-_, err = ProcessBlockHeaderNoVerify(context.Background(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
+_, err = ProcessBlockHeaderNoVerify(t.Context(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
 _ = err
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessRandao_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 b := &ethpb.SignedBeaconBlock{}

@@ -151,15 +154,16 @@ ...
 }
 wsb, err := blocks.NewSignedBeaconBlock(b)
 require.NoError(t, err)
-r, err := ProcessRandao(context.Background(), s, wsb)
+r, err := ProcessRandao(t.Context(), s, wsb)
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 blockBody := &ethpb.BeaconBlockBody{}

@@ -172,14 +176,15 @@ ...
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 p := &ethpb.ProposerSlashing{}
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(p)
@@ -189,11 +194,12 @@ ...
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 proposerSlashing := &ethpb.ProposerSlashing{}
 for i := 0; i < 10000; i++ {
@@ -203,14 +209,15 @@ ...
 require.NoError(t, err)
 err = VerifyProposerSlashing(s, proposerSlashing)
 _ = err
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 a := &ethpb.AttesterSlashing{}
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(a)
@@ -220,14 +227,15 @@ ...
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 attesterSlashing := &ethpb.AttesterSlashing{}
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(attesterSlashing)
@@ -235,11 +243,12 @@ ...
 require.NoError(t, err)
 err = VerifyAttesterSlashing(ctx, s, attesterSlashing)
 _ = err
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 attestationData := &ethpb.AttestationData{}
 attestationData2 := &ethpb.AttestationData{}

@@ -251,7 +260,7 @@ ...
 }

 func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 attesterSlashing := &ethpb.AttesterSlashing{}

 for i := 0; i < 10000; i++ {
@@ -261,10 +270,10 @@ ...
 }

 func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 b := &ethpb.SignedBeaconBlock{}
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(b)
@@ -279,14 +288,15 @@ ...
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 idxAttestation := &ethpb.IndexedAttestation{}
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(idxAttestation)
@@ -294,11 +304,12 @@ ...
 require.NoError(t, err)
 err = VerifyIndexedAttestation(ctx, s, idxAttestation)
 _ = err
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzverifyDeposit_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 deposit := &ethpb.Deposit{}
 for i := 0; i < 10000; i++ {
@@ -312,10 +323,10 @@ ...
 }

 func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 e := &ethpb.SignedVoluntaryExit{}
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 fuzzer.Fuzz(e)
@@ -325,11 +336,12 @@ ...
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconState{}
 e := &ethpb.SignedVoluntaryExit{}
 for i := 0; i < 10000; i++ {
@@ -337,15 +349,16 @@ ...
 fuzzer.Fuzz(e)
 s, err := state_native.InitializeFromProtoUnsafePhase0(state)
 require.NoError(t, err)
-r, err := ProcessVoluntaryExits(context.Background(), s, []*ethpb.SignedVoluntaryExit{e})
+r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzVerifyExit_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 ve := &ethpb.SignedVoluntaryExit{}
 rawVal := &ethpb.Validator{}
 fork := &ethpb.Fork{}

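Across the fuzz-test files above, the external fuzzer import is re-aliased from fuzz to gofuzz, which frees the name fuzz for Prysm's internal testing/fuzz package whose FreeMemory(i) call is inserted into each 10,000-iteration loop. A stripped-down sketch of the resulting import and usage pattern; the test body is illustrative, and FreeMemory is assumed (from the hunks above) to release memory periodically during long fuzz runs:

package blocks_test

import (
	"testing"

	"github.com/OffchainLabs/prysm/v6/testing/fuzz" // internal helpers; FreeMemory assumed to trigger periodic cleanup
	gofuzz "github.com/google/gofuzz"               // external fuzzer, aliased so the name fuzz stays free
)

func TestFuzzPattern(t *testing.T) {
	fuzzer := gofuzz.NewWithSeed(0)
	var v uint64
	for i := 0; i < 10000; i++ {
		fuzzer.Fuzz(&v)
		fuzz.FreeMemory(i) // keeps long fuzz loops from accumulating memory
	}
}
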
@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -95,7 +94,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
 for i, s := range b.Block.Body.AttesterSlashings {
 ss[i] = s
 }
-newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
+newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
 require.NoError(t, err)
 newRegistry := newState.Validators()
 if !newRegistry[expectedSlashedVal].Slashed {

@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -46,7 +45,7 @@ func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {

 deposit.Proof = proof
 require.NoError(t, err)
-verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
+verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
 require.NoError(t, err)
 require.Equal(t, true, verified)
 }
@@ -69,7 +68,7 @@ func TestBatchVerifyDepositsSignatures_InvalidSignature(t *testing.T) {

 deposit.Proof = proof
 require.NoError(t, err)
-verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
+verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
 require.NoError(t, err)
 require.Equal(t, false, verified)
 }
@@ -164,7 +163,7 @@ func TestBatchVerifyPendingDepositsSignatures_Ok(t *testing.T) {
 sig2 := sk2.Sign(sr2[:])
 pendingDeposit2.Signature = sig2.Marshal()

-verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
+verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
 require.NoError(t, err)
 require.Equal(t, true, verified)
 }
@@ -175,7 +174,7 @@ func TestBatchVerifyPendingDepositsSignatures_InvalidSignature(t *testing.T) {
 WithdrawalCredentials: make([]byte, 32),
 Signature: make([]byte, 96),
 }
-verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit})
+verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit})
 require.NoError(t, err)
 require.Equal(t, false, verified)
 }

@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "fmt"
 "testing"

@@ -177,7 +176,7 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {

 period := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().EpochsPerEth1VotingPeriod)))
 for i := uint64(0); i < period; i++ {
-processedState, err := blocks.ProcessEth1DataInBlock(context.Background(), beaconState, b.Block.Body.Eth1Data)
+processedState, err := blocks.ProcessEth1DataInBlock(t.Context(), beaconState, b.Block.Body.Eth1Data)
 require.NoError(t, err)
 require.Equal(t, true, processedState.Version() == version.Phase0)
 }

@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -47,7 +46,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
 }

 want := "validator has not been active long enough to exit"
-_, err = blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
+_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
 assert.ErrorContains(t, want, err)
 }

@@ -77,7 +76,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
 }

 want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
-_, err = blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
+_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
 assert.ErrorContains(t, want, err)
 }

@@ -125,7 +124,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
 },
 }

-newState, err := blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
+newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
 require.NoError(t, err, "Could not process exits")
 newRegistry := newState.Validators()
 if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {

@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "io"
 "testing"

@@ -51,7 +50,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
 currentEpoch := time.CurrentEpoch(state)
 priv, err := bls.RandKey()
 require.NoError(t, err)
-pID, err := helpers.BeaconProposerIndex(context.Background(), state)
+pID, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 block := util.NewBeaconBlock()
 block.Block.ProposerIndex = pID
@@ -61,7 +60,7 @@ ...
 block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
 require.NoError(t, err)

-proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
+proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 validators[proposerIdx].Slashed = false
 validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
@@ -70,7 +69,7 @@ ...

 wsb, err := consensusblocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
+_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
 assert.ErrorContains(t, "block.Slot 10 must be greater than state.LatestBlockHeader.Slot 10", err)
 }

@@ -85,7 +84,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
 lbhdr, err := beaconState.LatestBlockHeader().HashTreeRoot()
 require.NoError(t, err)

-proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
+proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
 require.NoError(t, err)

 block := util.NewBeaconBlock()
@@ -98,7 +97,7 @@ ...

 wsb, err := consensusblocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-_, err = blocks.ProcessBlockHeader(context.Background(), beaconState, wsb)
+_, err = blocks.ProcessBlockHeader(t.Context(), beaconState, wsb)
 want := "signature did not verify"
 assert.ErrorContains(t, want, err)
 }
@@ -142,7 +141,7 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {

 wsb, err := consensusblocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
+_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
 want := "is different than block slot"
 assert.ErrorContains(t, want, err)
 }
@@ -172,7 +171,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
 blockSig, err := signing.ComputeDomainAndSign(state, currentEpoch, &sszBytes, params.BeaconConfig().DomainBeaconProposer, priv)
 require.NoError(t, err)
 validators[5896].PublicKey = priv.PublicKey().Marshal()
-pID, err := helpers.BeaconProposerIndex(context.Background(), state)
+pID, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 block := util.NewBeaconBlock()
 block.Block.Slot = 10
@@ -183,7 +182,7 @@ ...

 wsb, err := consensusblocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
+_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
 want := "does not match"
 assert.ErrorContains(t, want, err)
 }
@@ -216,7 +215,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
 require.NoError(t, err)

 validators[12683].PublicKey = priv.PublicKey().Marshal()
-pID, err := helpers.BeaconProposerIndex(context.Background(), state)
+pID, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 block := util.NewBeaconBlock()
 block.Block.Slot = 10
@@ -227,7 +226,7 @@ ...

 wsb, err := consensusblocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
+_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
 want := "was previously slashed"
 assert.ErrorContains(t, want, err)
 }
@@ -257,7 +256,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
 currentEpoch := time.CurrentEpoch(state)
 priv, err := bls.RandKey()
 require.NoError(t, err)
-pID, err := helpers.BeaconProposerIndex(context.Background(), state)
+pID, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 block := util.NewBeaconBlock()
 block.Block.ProposerIndex = pID
@@ -269,7 +268,7 @@ ...
 bodyRoot, err := block.Block.Body.HashTreeRoot()
 require.NoError(t, err, "Failed to hash block bytes got")

-proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
+proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 validators[proposerIdx].Slashed = false
 validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
@@ -278,7 +277,7 @@ ...

 wsb, err := consensusblocks.NewSignedBeaconBlock(block)
 require.NoError(t, err)
-newState, err := blocks.ProcessBlockHeader(context.Background(), state, wsb)
+newState, err := blocks.ProcessBlockHeader(t.Context(), state, wsb)
 require.NoError(t, err, "Failed to process block header got")
 var zeroHash [32]byte
 nsh := newState.LatestBlockHeader()
@@ -318,7 +317,7 @@ func TestBlockSignatureSet_OK(t *testing.T) {
 currentEpoch := time.CurrentEpoch(state)
 priv, err := bls.RandKey()
 require.NoError(t, err)
-pID, err := helpers.BeaconProposerIndex(context.Background(), state)
+pID, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 block := util.NewBeaconBlock()
 block.Block.Slot = 10
@@ -327,7 +326,7 @@ ...
 block.Block.ParentRoot = latestBlockSignedRoot[:]
 block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
 require.NoError(t, err)
-proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
+proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
 require.NoError(t, err)
 validators[proposerIdx].Slashed = false
 validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()

@@ -229,13 +229,16 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
 if body.Version() < version.Deneb {
 return nil
 }

 kzgs, err := body.BlobKzgCommitments()
 if err != nil {
 return err
 }
-maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
-if len(kzgs) > maxBlobsPerBlock {
-return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
+
+commitmentCount, maxBlobsPerBlock := len(kzgs), params.BeaconConfig().MaxBlobsPerBlock(slot)
+if commitmentCount > maxBlobsPerBlock {
+return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
 }

 return nil
 }

@@ -926,8 +926,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
 require.NoError(t, err)
 require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))

-b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
+maxCommitmentsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())
+
+b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, maxCommitmentsPerBlock+1)}}
 rb, err = consensusblocks.NewBeaconBlock(b)
 require.NoError(t, err)
-require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
+require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: actual count %d - max allowed %d", maxCommitmentsPerBlock+1, maxCommitmentsPerBlock), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
 }

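The reworked commitment-count check now reports both the actual and the allowed count in its error. For reference, a self-contained sketch of the same logic, with the config lookup replaced by a plain parameter standing in for params.BeaconConfig().MaxBlobsPerBlock(slot):

package main

import "fmt"

// verifyCommitmentCount mirrors the reworked check above; maxBlobsPerBlock
// stands in for params.BeaconConfig().MaxBlobsPerBlock(slot).
func verifyCommitmentCount(kzgs [][]byte, maxBlobsPerBlock int) error {
	commitmentCount := len(kzgs)
	if commitmentCount > maxBlobsPerBlock {
		return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
	}
	return nil
}

func main() {
	// 7 commitments against a cap of 6 reproduces the error string the test asserts.
	fmt.Println(verifyCommitmentCount(make([][]byte, 7), 6))
}
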
@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "fmt"
 "testing"

@@ -51,7 +50,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
 },
 }
 want := "mismatched header slots"
-_, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
+_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
 assert.ErrorContains(t, want, err)
 }

@@ -84,7 +83,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
 },
 }
 want := "expected slashing headers to differ"
-_, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
+_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
 assert.ErrorContains(t, want, err)
 }

@@ -134,7 +133,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
 "validator with key %#x is not slashable",
 bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
 )
-_, err = blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
+_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
 assert.ErrorContains(t, want, err)
 }

@@ -173,7 +172,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
 block := util.NewBeaconBlock()
 block.Block.Body.ProposerSlashings = slashings

-newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
+newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
 require.NoError(t, err)

 newStateVals := newState.Validators()
@@ -221,7 +220,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
 block := util.NewBeaconBlock()
 block.Block.Body.ProposerSlashings = slashings

-newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
+newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
 require.NoError(t, err)

 newStateVals := newState.Validators()
@@ -269,7 +268,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
 block := util.NewBeaconBlock()
 block.Block.Body.ProposerSlashings = slashings

-newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
+newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
 require.NoError(t, err)

 newStateVals := newState.Validators()
@@ -317,7 +316,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
 block := util.NewBeaconBlock()
 block.Block.Body.ProposerSlashings = slashings

-newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
+newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
 require.NoError(t, err)

 newStateVals := newState.Validators()

@@ -1,7 +1,6 @@
 package blocks_test

 import (
-"context"
 "encoding/binary"
 "testing"

@@ -21,7 +20,7 @@ import (
 func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
 beaconState, privKeys := util.DeterministicGenesisState(t, 100)
 // We fetch the proposer's index as that is whom the RANDAO will be verified against.
-proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
+proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
 require.NoError(t, err)
 epoch := primitives.Epoch(0)
 buf := make([]byte, 32)
@@ -42,7 +41,7 @@ func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
 want := "block randao: signature did not verify"
 wsb, err := consensusblocks.NewSignedBeaconBlock(b)
 require.NoError(t, err)
-_, err = blocks.ProcessRandao(context.Background(), beaconState, wsb)
+_, err = blocks.ProcessRandao(t.Context(), beaconState, wsb)
 assert.ErrorContains(t, want, err)
 }

@@ -62,7 +61,7 @@ func TestProcessRandao_SignatureVerifiesAndUpdatesLatestStateMixes(t *testing.T)
 wsb, err := consensusblocks.NewSignedBeaconBlock(b)
 require.NoError(t, err)
 newState, err := blocks.ProcessRandao(
-context.Background(),
+t.Context(),
 beaconState,
 wsb,
 )
@@ -85,7 +84,7 @@ func TestRandaoSignatureSet_OK(t *testing.T) {
 },
 }

-set, err := blocks.RandaoSignatureBatch(context.Background(), beaconState, block.Body.RandaoReveal)
+set, err := blocks.RandaoSignatureBatch(t.Context(), beaconState, block.Body.RandaoReveal)
 require.NoError(t, err)
 verified, err := set.Verify()
 require.NoError(t, err)

@@ -200,7 +200,7 @@ func createAttestationSignatureBatch(
 if err != nil {
 return nil, err
 }
-if err := attestation.IsValidAttestationIndices(ctx, ia); err != nil {
+if err := attestation.IsValidAttestationIndices(ctx, ia, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
 return nil, err
 }
 indices := ia.GetAttestingIndices()

@@ -85,6 +85,7 @@ go_test(
 "//proto/engine/v1:go_default_library",
 "//proto/prysm/v1alpha1:go_default_library",
 "//testing/assert:go_default_library",
+"//testing/fuzz:go_default_library",
 "//testing/require:go_default_library",
 "//testing/util:go_default_library",
 "//time/slots:go_default_library",

@@ -1,21 +1,21 @@
 package electra_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
 state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
 ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+"github.com/OffchainLabs/prysm/v6/testing/fuzz"
 "github.com/OffchainLabs/prysm/v6/testing/require"
-fuzz "github.com/google/gofuzz"
+gofuzz "github.com/google/gofuzz"
 )

 func TestFuzzProcessDeposits_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconStateElectra{}
 deposits := make([]*ethpb.Deposit, 100)
-ctx := context.Background()
+ctx := t.Context()
 for i := 0; i < 10000; i++ {
 fuzzer.Fuzz(state)
 for i := range deposits {
@@ -27,11 +27,12 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
 }
+fuzz.FreeMemory(i)
 }
 }

 func TestFuzzProcessDeposit_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 state := &ethpb.BeaconStateElectra{}
 deposit := &ethpb.Deposit{}

@@ -44,5 +45,6 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
 if err != nil && r != nil {
 t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
 }
+fuzz.FreeMemory(i)
 }
 }

@@ -319,7 +319,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
 validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
 invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
 deps := []*eth.PendingDeposit{validDep, invalidDep}
-require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
+require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
 require.Equal(t, 1, len(st.Validators()))
 require.Equal(t, 1, len(st.Balances()))
 })
@@ -335,7 +335,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
 wc[31] = byte(0)
 validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
 deps := []*eth.PendingDeposit{validDep, validDep}
-require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
+require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
 require.Equal(t, 1, len(st.Validators()))
 require.Equal(t, 1, len(st.Balances()))
 require.Equal(t, params.BeaconConfig().MinActivationBalance*2, st.Balances()[0])
@@ -354,7 +354,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
 invalidSigDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
 invalidSigDep.Signature = make([]byte, 96)
 deps := []*eth.PendingDeposit{validDep, invalidSigDep}
-require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
+require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
 require.Equal(t, 1, len(st.Validators()))
 require.Equal(t, 1, len(st.Balances()))
 require.Equal(t, 2*params.BeaconConfig().MinActivationBalance, st.Balances()[0])
@@ -368,12 +368,12 @@ func TestProcessDepositRequests(t *testing.T) {
 require.NoError(t, st.SetDepositRequestsStartIndex(1))

 t.Run("empty requests continues", func(t *testing.T) {
-newSt, err := electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{})
+newSt, err := electra.ProcessDepositRequests(t.Context(), st, []*enginev1.DepositRequest{})
 require.NoError(t, err)
 require.DeepEqual(t, newSt, st)
 })
 t.Run("nil request errors", func(t *testing.T) {
-_, err = electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{nil})
+_, err = electra.ProcessDepositRequests(t.Context(), st, []*enginev1.DepositRequest{nil})
 require.ErrorContains(t, "nil deposit request", err)
 })

@@ -406,7 +406,7 @@ func TestProcessDepositRequests(t *testing.T) {
 Signature: sig.Marshal(),
 },
 }
-st, err = electra.ProcessDepositRequests(context.Background(), st, requests)
+st, err = electra.ProcessDepositRequests(t.Context(), st, requests)
 require.NoError(t, err)

 pbd, err := st.PendingDeposits()
@@ -437,7 +437,7 @@ func TestProcessDeposit_Electra_Simple(t *testing.T) {
 },
 })
 require.NoError(t, err)
-pdSt, err := electra.ProcessDeposits(context.Background(), st, deps)
+pdSt, err := electra.ProcessDeposits(t.Context(), st, deps)
 require.NoError(t, err)
 pbd, err := pdSt.PendingDeposits()
 require.NoError(t, err)
@@ -592,7 +592,7 @@ func TestApplyPendingDeposit_TopUp(t *testing.T) {
 dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
 require.NoError(t, st.SetValidators(validators))

-require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
+require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))

 b, err := st.BalanceAtIndex(0)
 require.NoError(t, err)
@@ -608,7 +608,7 @@ func TestApplyPendingDeposit_UnknownKey(t *testing.T) {
 wc[31] = byte(0)
 dep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
 require.Equal(t, 0, len(st.Validators()))
-require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
+require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
 // activates new validator
 require.Equal(t, 1, len(st.Validators()))
 b, err := st.BalanceAtIndex(0)
@@ -630,7 +630,7 @@ func TestApplyPendingDeposit_InvalidSignature(t *testing.T) {
 Amount: 100,
 }
 require.Equal(t, 0, len(st.Validators()))
-require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
+require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
 // no validator added
 require.Equal(t, 0, len(st.Validators()))
 // no topup either

@@ -1,7 +1,6 @@
 package electra_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -54,7 +53,7 @@ func TestProcessOperationsWithNilRequests(t *testing.T) {

 require.NoError(t, st.SetSlot(1))

-_, err = electra.ProcessOperations(context.Background(), st, b.Block())
+_, err = electra.ProcessOperations(t.Context(), st, b.Block())
 require.ErrorContains(t, tc.errMsg, err)
 })
 }

@@ -1,7 +1,6 @@
 package electra_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -78,7 +77,7 @@ func TestProcessEpoch_CanProcessElectra(t *testing.T) {
 TargetIndex: 1,
 },
 }))
-err := electra.ProcessEpoch(context.Background(), st)
+err := electra.ProcessEpoch(t.Context(), st)
 require.NoError(t, err)
 require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

@@ -1,7 +1,6 @@
 package electra_test

 import (
-"context"
 "testing"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -290,7 +289,7 @@ func TestProcessWithdrawRequests(t *testing.T) {
 for _, tt := range tests {
 t.Run(tt.name, func(t *testing.T) {

-got, err := electra.ProcessWithdrawalRequests(context.Background(), tt.args.st, tt.args.wrs)
+got, err := electra.ProcessWithdrawalRequests(t.Context(), tt.args.st, tt.args.wrs)
 if (err != nil) != tt.wantErr {
 t.Errorf("ProcessWithdrawalRequests() error = %v, wantErr %v", err, tt.wantErr)
 return

@@ -48,6 +48,7 @@ go_test(
 "//consensus-types/primitives:go_default_library",
 "//proto/prysm/v1alpha1:go_default_library",
 "//testing/assert:go_default_library",
+"//testing/fuzz:go_default_library",
 "//testing/require:go_default_library",
 "//testing/util:go_default_library",
 "@com_github_google_go_cmp//cmp:go_default_library",

@@ -5,12 +5,13 @@ import (

 state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
 ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+"github.com/OffchainLabs/prysm/v6/testing/fuzz"
 "github.com/OffchainLabs/prysm/v6/testing/require"
-fuzz "github.com/google/gofuzz"
+gofuzz "github.com/google/gofuzz"
 )

 func TestFuzzFinalUpdates_10000(t *testing.T) {
-fuzzer := fuzz.NewWithSeed(0)
+fuzzer := gofuzz.NewWithSeed(0)
 base := &ethpb.BeaconState{}

 for i := 0; i < 10000; i++ {
@@ -19,5 +20,6 @@ func TestFuzzFinalUpdates_10000(t *testing.T) {
 require.NoError(t, err)
 _, err = ProcessFinalUpdates(s)
 _ = err
+fuzz.FreeMemory(i)
 }
 }

@@ -1,7 +1,6 @@
|
||||
package epoch_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"math"
|
||||
"testing"
|
||||
@@ -170,7 +169,7 @@ func TestProcessRegistryUpdates_NoRotation(t *testing.T) {
|
||||
}
|
||||
beaconState, err := state_native.InitializeFromProtoPhase0(base)
|
||||
require.NoError(t, err)
|
||||
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
|
||||
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
|
||||
require.NoError(t, err)
|
||||
for i, validator := range newState.Validators() {
|
||||
assert.Equal(t, params.BeaconConfig().MaxSeedLookahead, validator.ExitEpoch, "Could not update registry %d", i)
|
||||
@@ -194,7 +193,7 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
|
||||
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
 	currentEpoch := time.CurrentEpoch(beaconState)
-	newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
+	newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
 	require.NoError(t, err)
 	for i, validator := range newState.Validators() {
 		if uint64(i) < limit && validator.ActivationEpoch != helpers.ActivationExitEpoch(currentEpoch) {
@@ -229,7 +228,7 @@ func TestProcessRegistryUpdates_EligibleToActivate_Cancun(t *testing.T) {
 	beaconState, err := state_native.InitializeFromProtoDeneb(base)
 	require.NoError(t, err)
 	currentEpoch := time.CurrentEpoch(beaconState)
-	newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
+	newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
 	require.NoError(t, err)
 	for i, validator := range newState.Validators() {
 		// Note: In Deneb, only validators indices before `MaxPerEpochActivationChurnLimit` should be activated.
@@ -257,7 +256,7 @@ func TestProcessRegistryUpdates_ActivationCompletes(t *testing.T) {
 	}
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
-	newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
+	newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
 	require.NoError(t, err)
 	for i, validator := range newState.Validators() {
 		assert.Equal(t, params.BeaconConfig().MaxSeedLookahead, validator.ExitEpoch, "Could not update registry %d, unexpected exit slot", i)
@@ -281,7 +280,7 @@ func TestProcessRegistryUpdates_ValidatorsEjected(t *testing.T) {
 	}
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
-	newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
+	newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
 	require.NoError(t, err)
 	for i, validator := range newState.Validators() {
 		assert.Equal(t, params.BeaconConfig().MaxSeedLookahead+1, validator.ExitEpoch, "Could not update registry %d, unexpected exit slot", i)
@@ -306,7 +305,7 @@ func TestProcessRegistryUpdates_CanExits(t *testing.T) {
 	}
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
-	newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
+	newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
 	require.NoError(t, err)
 	for i, validator := range newState.Validators() {
 		assert.Equal(t, exitEpoch, validator.ExitEpoch, "Could not update registry %d, unexpected exit slot", i)
@@ -386,7 +385,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
 			name: "before capella can process and get historical root",
 			st: func() state.BeaconState {
 				st, _ := util.DeterministicGenesisState(t, 1)
-				st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
+				st, err := transition.ProcessSlots(t.Context(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
 				require.NoError(t, err)
 				return st
 			},
@@ -410,7 +409,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
 			name: "after capella can process and get historical summary",
 			st: func() state.BeaconState {
 				st, _ := util.DeterministicGenesisStateCapella(t, 1)
-				st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
+				st, err := transition.ProcessSlots(t.Context(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
 				require.NoError(t, err)
 				return st
 			},
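The change repeated throughout these hunks swaps `context.Background()` for `t.Context()`, the per-test context introduced in Go 1.24, which is canceled automatically when the test finishes. A minimal sketch of the pattern follows; `doWork` and the test name are hypothetical stand-ins, not taken from this diff:

```go
package example_test

import (
	"context"
	"testing"
)

// doWork stands in for context-accepting helpers such as
// epoch.ProcessRegistryUpdates.
func doWork(ctx context.Context) error {
	return ctx.Err()
}

func TestDoWork(t *testing.T) {
	// Before: context.Background() is never canceled, so work started
	// with it can outlive the test.
	// After: t.Context() (Go 1.24+) is canceled automatically just
	// before the test's cleanup functions run.
	if err := doWork(t.Context()); err != nil {
		t.Fatal(err)
	}
}
```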
@@ -1,7 +1,6 @@
 package precompute_test
 
 import (
-	"context"
 	"testing"
 
 	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
@@ -206,10 +205,10 @@ func TestProcessAttestations(t *testing.T) {
 	for i := 0; i < len(pVals); i++ {
 		pVals[i] = &precompute.Validator{CurrentEpochEffectiveBalance: 100}
 	}
-	pVals, _, err = precompute.ProcessAttestations(context.Background(), beaconState, pVals, &precompute.Balance{})
+	pVals, _, err = precompute.ProcessAttestations(t.Context(), beaconState, pVals, &precompute.Balance{})
 	require.NoError(t, err)
 
-	committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
+	committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
 	require.NoError(t, err)
 	indices, err := attestation.AttestingIndices(att1, committee)
 	require.NoError(t, err)
@@ -218,7 +217,7 @@ func TestProcessAttestations(t *testing.T) {
 			t.Error("Not a prev epoch attester")
 		}
 	}
-	committee, err = helpers.BeaconCommitteeFromState(context.Background(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
+	committee, err = helpers.BeaconCommitteeFromState(t.Context(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
 	require.NoError(t, err)
 	indices, err = attestation.AttestingIndices(att2, committee)
 	require.NoError(t, err)
@@ -1,7 +1,6 @@
 package precompute_test
 
 import (
-	"context"
 	"testing"
 
 	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -239,7 +238,7 @@ func TestUnrealizedCheckpoints(t *testing.T) {
 	state, err := state_native.InitializeFromProtoAltair(base)
 	require.NoError(t, err)
 
-	_, _, err = altair.InitializePrecomputeValidators(context.Background(), state)
+	_, _, err = altair.InitializePrecomputeValidators(t.Context(), state)
 	require.NoError(t, err)
 
 	jc, fc, err := precompute.UnrealizedCheckpoints(state)
@@ -1,7 +1,6 @@
 package precompute_test
 
 import (
-	"context"
 	"testing"
 
 	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
@@ -29,7 +28,7 @@ func TestNew(t *testing.T) {
 	})
 	require.NoError(t, err)
 	e := params.BeaconConfig().FarFutureSlot
-	v, b, err := precompute.New(context.Background(), s)
+	v, b, err := precompute.New(t.Context(), s)
 	require.NoError(t, err)
 	assert.DeepEqual(t, &precompute.Validator{
 		IsSlashed: true,
@@ -1,7 +1,6 @@
 package precompute
 
 import (
-	"context"
 	"testing"
 
 	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
@@ -40,9 +39,9 @@ func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
 
-	vp, bp, err := New(context.Background(), beaconState)
+	vp, bp, err := New(t.Context(), beaconState)
 	require.NoError(t, err)
-	vp, bp, err = ProcessAttestations(context.Background(), beaconState, vp, bp)
+	vp, bp, err = ProcessAttestations(t.Context(), beaconState, vp, bp)
 	require.NoError(t, err)
 
 	processedState, err := ProcessRewardsAndPenaltiesPrecompute(beaconState, bp, vp, AttestationsDelta, ProposersDelta)
@@ -83,9 +82,9 @@ func TestAttestationDeltas_ZeroEpoch(t *testing.T) {
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
 
-	pVals, pBal, err := New(context.Background(), beaconState)
+	pVals, pBal, err := New(t.Context(), beaconState)
 	assert.NoError(t, err)
-	pVals, pBal, err = ProcessAttestations(context.Background(), beaconState, pVals, pBal)
+	pVals, pBal, err = ProcessAttestations(t.Context(), beaconState, pVals, pBal)
 	require.NoError(t, err)
 
 	pBal.ActiveCurrentEpoch = 0 // Could cause a divide by zero panic.
@@ -121,9 +120,9 @@ func TestAttestationDeltas_ZeroInclusionDelay(t *testing.T) {
 	beaconState, err := state_native.InitializeFromProtoPhase0(base)
 	require.NoError(t, err)
 
-	pVals, pBal, err := New(context.Background(), beaconState)
+	pVals, pBal, err := New(t.Context(), beaconState)
 	require.NoError(t, err)
-	_, _, err = ProcessAttestations(context.Background(), beaconState, pVals, pBal)
+	_, _, err = ProcessAttestations(t.Context(), beaconState, pVals, pBal)
 	require.ErrorContains(t, "attestation with inclusion delay of 0", err)
 }
@@ -155,9 +154,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_SlashedInactivePenalty(t *testing.
 		require.NoError(t, beaconState.SetValidators(vs))
 	}
 
-	vp, bp, err := New(context.Background(), beaconState)
+	vp, bp, err := New(t.Context(), beaconState)
 	require.NoError(t, err)
-	vp, bp, err = ProcessAttestations(context.Background(), beaconState, vp, bp)
+	vp, bp, err = ProcessAttestations(t.Context(), beaconState, vp, bp)
 	require.NoError(t, err)
 	rewards, penalties, err := AttestationsDelta(beaconState, bp, vp)
 	require.NoError(t, err)
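One usage note on this migration: per the Go 1.24 release notes, the context returned by `t.Context()` is canceled just before functions registered with `t.Cleanup` run. A small sketch (test name hypothetical) illustrating that ordering, which matters if teardown code still needs a live context:

```go
package example_test

import "testing"

func TestContextLifetime(t *testing.T) {
	ctx := t.Context()
	t.Cleanup(func() {
		// Cleanups run after the context is canceled, so teardown
		// work must not depend on ctx still being usable.
		if ctx.Err() == nil {
			t.Error("expected t.Context() to be canceled before cleanup runs")
		}
	})
}
```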
Some files were not shown because too many files have changed in this diff.